Feb 26 21:55:13 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 26 21:55:13 crc restorecon[4695]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 26 21:55:13 crc restorecon[4695]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 26 21:55:13 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 26 21:55:13 crc restorecon[4695]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc 
restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 21:55:14 crc 
restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 
21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 21:55:14 crc 
restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 21:55:14 crc 
restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14
crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 21:55:14 crc restorecon[4695]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc 
restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc 
restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc 
restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 21:55:14 crc 
restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 21:55:14 crc restorecon[4695]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 21:55:14 crc restorecon[4695]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 21:55:14 crc restorecon[4695]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 26 21:55:15 crc kubenswrapper[4910]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 26 21:55:15 crc kubenswrapper[4910]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 26 21:55:15 crc kubenswrapper[4910]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 26 21:55:15 crc kubenswrapper[4910]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 26 21:55:15 crc kubenswrapper[4910]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 26 21:55:15 crc kubenswrapper[4910]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.602154 4910 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.612887 4910 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.612938 4910 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.612950 4910 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.612961 4910 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.612972 4910 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.612984 4910 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.612994 4910 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613004 4910 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613014 4910 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 
21:55:15.613025 4910 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613034 4910 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613044 4910 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613055 4910 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613065 4910 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613074 4910 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613089 4910 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613101 4910 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613112 4910 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613122 4910 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613136 4910 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613151 4910 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613202 4910 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613214 4910 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613228 4910 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613240 4910 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613251 4910 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613261 4910 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613271 4910 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613281 4910 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613291 4910 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613301 4910 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613321 4910 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613332 4910 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613341 4910 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 
21:55:15.613351 4910 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613362 4910 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613371 4910 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613381 4910 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613394 4910 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613407 4910 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613418 4910 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613430 4910 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613441 4910 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613450 4910 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613460 4910 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613470 4910 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613520 4910 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613530 4910 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613540 4910 
feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613550 4910 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613559 4910 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613569 4910 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613583 4910 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613593 4910 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613604 4910 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613614 4910 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613625 4910 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613635 4910 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613644 4910 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613654 4910 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613665 4910 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613675 4910 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613685 4910 feature_gate.go:330] unrecognized feature gate: 
OnClusterBuild Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613699 4910 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613708 4910 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613719 4910 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613728 4910 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613739 4910 feature_gate.go:330] unrecognized feature gate: Example Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613750 4910 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613759 4910 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.613769 4910 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.613975 4910 flags.go:64] FLAG: --address="0.0.0.0" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614003 4910 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614023 4910 flags.go:64] FLAG: --anonymous-auth="true" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614040 4910 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614055 4910 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614067 4910 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614082 4910 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 26 
21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614097 4910 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614109 4910 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614120 4910 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614134 4910 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614150 4910 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614199 4910 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614213 4910 flags.go:64] FLAG: --cgroup-root="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614224 4910 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614236 4910 flags.go:64] FLAG: --client-ca-file="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614247 4910 flags.go:64] FLAG: --cloud-config="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614259 4910 flags.go:64] FLAG: --cloud-provider="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614270 4910 flags.go:64] FLAG: --cluster-dns="[]" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614283 4910 flags.go:64] FLAG: --cluster-domain="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614296 4910 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614309 4910 flags.go:64] FLAG: --config-dir="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614321 4910 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614334 4910 flags.go:64] FLAG: 
--container-log-max-files="5" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614363 4910 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614376 4910 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614388 4910 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614400 4910 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614412 4910 flags.go:64] FLAG: --contention-profiling="false" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614424 4910 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614435 4910 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614448 4910 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614460 4910 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614473 4910 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614485 4910 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614497 4910 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614508 4910 flags.go:64] FLAG: --enable-load-reader="false" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614519 4910 flags.go:64] FLAG: --enable-server="true" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614531 4910 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614548 4910 flags.go:64] FLAG: --event-burst="100" Feb 26 21:55:15 crc kubenswrapper[4910]: 
I0226 21:55:15.614559 4910 flags.go:64] FLAG: --event-qps="50" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614571 4910 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614582 4910 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614593 4910 flags.go:64] FLAG: --eviction-hard="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614606 4910 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614617 4910 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614628 4910 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614640 4910 flags.go:64] FLAG: --eviction-soft="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614651 4910 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614662 4910 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614674 4910 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614685 4910 flags.go:64] FLAG: --experimental-mounter-path="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614697 4910 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614709 4910 flags.go:64] FLAG: --fail-swap-on="true" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614720 4910 flags.go:64] FLAG: --feature-gates="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614734 4910 flags.go:64] FLAG: --file-check-frequency="20s" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614746 4910 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 26 21:55:15 crc 
kubenswrapper[4910]: I0226 21:55:15.614760 4910 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614773 4910 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614785 4910 flags.go:64] FLAG: --healthz-port="10248" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614797 4910 flags.go:64] FLAG: --help="false" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614811 4910 flags.go:64] FLAG: --hostname-override="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614823 4910 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614836 4910 flags.go:64] FLAG: --http-check-frequency="20s" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614849 4910 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614860 4910 flags.go:64] FLAG: --image-credential-provider-config="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614871 4910 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614884 4910 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614896 4910 flags.go:64] FLAG: --image-service-endpoint="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614908 4910 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614919 4910 flags.go:64] FLAG: --kube-api-burst="100" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614931 4910 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614943 4910 flags.go:64] FLAG: --kube-api-qps="50" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614954 4910 flags.go:64] FLAG: --kube-reserved="" Feb 26 21:55:15 crc 
kubenswrapper[4910]: I0226 21:55:15.614966 4910 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614977 4910 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.614989 4910 flags.go:64] FLAG: --kubelet-cgroups="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615000 4910 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615012 4910 flags.go:64] FLAG: --lock-file="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615023 4910 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615035 4910 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615046 4910 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615064 4910 flags.go:64] FLAG: --log-json-split-stream="false" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615080 4910 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615091 4910 flags.go:64] FLAG: --log-text-split-stream="false" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615103 4910 flags.go:64] FLAG: --logging-format="text" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615114 4910 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615127 4910 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615138 4910 flags.go:64] FLAG: --manifest-url="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615150 4910 flags.go:64] FLAG: --manifest-url-header="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615207 4910 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 26 
21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615220 4910 flags.go:64] FLAG: --max-open-files="1000000" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615235 4910 flags.go:64] FLAG: --max-pods="110" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615247 4910 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615259 4910 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615272 4910 flags.go:64] FLAG: --memory-manager-policy="None" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615283 4910 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615295 4910 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615306 4910 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615319 4910 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615356 4910 flags.go:64] FLAG: --node-status-max-images="50" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615368 4910 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615380 4910 flags.go:64] FLAG: --oom-score-adj="-999" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615392 4910 flags.go:64] FLAG: --pod-cidr="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615403 4910 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615420 4910 flags.go:64] FLAG: --pod-manifest-path="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 
21:55:15.615430 4910 flags.go:64] FLAG: --pod-max-pids="-1" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615442 4910 flags.go:64] FLAG: --pods-per-core="0" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615453 4910 flags.go:64] FLAG: --port="10250" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615465 4910 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615477 4910 flags.go:64] FLAG: --provider-id="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615489 4910 flags.go:64] FLAG: --qos-reserved="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615501 4910 flags.go:64] FLAG: --read-only-port="10255" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615513 4910 flags.go:64] FLAG: --register-node="true" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615524 4910 flags.go:64] FLAG: --register-schedulable="true" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615536 4910 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615557 4910 flags.go:64] FLAG: --registry-burst="10" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615568 4910 flags.go:64] FLAG: --registry-qps="5" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615580 4910 flags.go:64] FLAG: --reserved-cpus="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615594 4910 flags.go:64] FLAG: --reserved-memory="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615608 4910 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615620 4910 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615632 4910 flags.go:64] FLAG: --rotate-certificates="false" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615643 4910 flags.go:64] FLAG: --rotate-server-certificates="false" 
Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615655 4910 flags.go:64] FLAG: --runonce="false" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615667 4910 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615680 4910 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615692 4910 flags.go:64] FLAG: --seccomp-default="false" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615704 4910 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615715 4910 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615727 4910 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615740 4910 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615752 4910 flags.go:64] FLAG: --storage-driver-password="root" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615763 4910 flags.go:64] FLAG: --storage-driver-secure="false" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615775 4910 flags.go:64] FLAG: --storage-driver-table="stats" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615787 4910 flags.go:64] FLAG: --storage-driver-user="root" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615799 4910 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615811 4910 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615822 4910 flags.go:64] FLAG: --system-cgroups="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615834 4910 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615852 
4910 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615864 4910 flags.go:64] FLAG: --tls-cert-file="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615875 4910 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615889 4910 flags.go:64] FLAG: --tls-min-version="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615900 4910 flags.go:64] FLAG: --tls-private-key-file="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615912 4910 flags.go:64] FLAG: --topology-manager-policy="none" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615938 4910 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615951 4910 flags.go:64] FLAG: --topology-manager-scope="container" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615962 4910 flags.go:64] FLAG: --v="2" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615977 4910 flags.go:64] FLAG: --version="false" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.615992 4910 flags.go:64] FLAG: --vmodule="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.616005 4910 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.616018 4910 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616334 4910 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616353 4910 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616368 4910 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616379 4910 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 26 
21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616390 4910 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616400 4910 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616415 4910 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616429 4910 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616442 4910 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616453 4910 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616464 4910 feature_gate.go:330] unrecognized feature gate: Example Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616476 4910 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616486 4910 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616496 4910 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616507 4910 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616518 4910 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616529 4910 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616538 4910 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 
26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616549 4910 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616559 4910 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616569 4910 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616579 4910 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616589 4910 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616599 4910 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616610 4910 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616625 4910 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616635 4910 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616645 4910 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616656 4910 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616666 4910 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616677 4910 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616688 4910 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616698 4910 
feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616707 4910 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616717 4910 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616728 4910 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616738 4910 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616748 4910 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616760 4910 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616770 4910 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616780 4910 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616791 4910 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616801 4910 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616819 4910 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616829 4910 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616839 4910 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616849 4910 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation 
Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616858 4910 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616871 4910 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616883 4910 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616895 4910 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616905 4910 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616917 4910 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616927 4910 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616937 4910 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616950 4910 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616963 4910 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616976 4910 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616988 4910 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.616998 4910 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.617009 4910 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.617019 4910 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.617030 4910 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.617039 4910 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.617049 4910 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.617062 4910 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.617073 4910 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.617083 4910 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.617093 4910 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.617103 4910 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.617113 4910 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.618219 4910 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.630981 4910 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.631036 4910 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631242 4910 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631268 4910 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631280 4910 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631291 4910 
feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631301 4910 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631310 4910 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631318 4910 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631327 4910 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631335 4910 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631343 4910 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631351 4910 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631359 4910 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631367 4910 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631375 4910 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631382 4910 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631390 4910 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631398 4910 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631405 4910 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 26 21:55:15 crc 
kubenswrapper[4910]: W0226 21:55:15.631413 4910 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631420 4910 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631429 4910 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631436 4910 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631444 4910 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631452 4910 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631460 4910 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631469 4910 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631478 4910 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631486 4910 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631494 4910 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631505 4910 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631513 4910 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631524 4910 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631535 4910 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631545 4910 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631557 4910 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631565 4910 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631575 4910 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631585 4910 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631593 4910 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631602 4910 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631609 4910 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631617 4910 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631625 4910 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631633 4910 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631641 4910 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631649 4910 feature_gate.go:330] unrecognized feature gate: 
MachineAPIOperatorDisableMachineHealthCheckController Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631656 4910 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631667 4910 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631676 4910 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631685 4910 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631694 4910 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631702 4910 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631734 4910 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631742 4910 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631750 4910 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631758 4910 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631768 4910 feature_gate.go:330] unrecognized feature gate: Example Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631776 4910 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631783 4910 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631791 4910 feature_gate.go:330] unrecognized feature gate: OnClusterBuild 
Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631799 4910 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631806 4910 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631814 4910 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631823 4910 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631831 4910 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631841 4910 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631849 4910 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631856 4910 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631864 4910 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631871 4910 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.631879 4910 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.631893 4910 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false 
UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632131 4910 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632143 4910 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632151 4910 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632192 4910 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632201 4910 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632208 4910 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632217 4910 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632226 4910 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632234 4910 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632242 4910 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632249 4910 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632257 4910 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632265 4910 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632275 4910 feature_gate.go:330] 
unrecognized feature gate: DNSNameResolver Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632290 4910 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632304 4910 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632315 4910 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632326 4910 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632336 4910 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632346 4910 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632356 4910 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632365 4910 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632373 4910 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632380 4910 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632392 4910 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632403 4910 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632414 4910 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632426 4910 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632436 4910 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632448 4910 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632457 4910 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632466 4910 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632474 4910 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632482 4910 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632491 4910 feature_gate.go:330] unrecognized feature gate: Example Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632499 4910 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632508 4910 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632518 4910 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632526 4910 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632534 4910 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632542 4910 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632550 4910 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632557 4910 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632565 4910 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632572 4910 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632581 4910 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632590 4910 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632598 4910 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632605 4910 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632614 4910 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632622 4910 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632629 4910 feature_gate.go:330] unrecognized feature gate: 
UpgradeStatus Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632637 4910 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632645 4910 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632653 4910 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632661 4910 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632668 4910 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632676 4910 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632683 4910 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632693 4910 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632702 4910 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632712 4910 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632720 4910 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632729 4910 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632737 4910 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632746 4910 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632754 4910 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632762 4910 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632770 4910 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632777 4910 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.632785 4910 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.632798 4910 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false 
UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.633077 4910 server.go:940] "Client rotation is on, will bootstrap in background" Feb 26 21:55:15 crc kubenswrapper[4910]: E0226 21:55:15.638150 4910 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.642844 4910 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.643001 4910 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.646698 4910 server.go:997] "Starting client certificate rotation" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.646750 4910 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.647008 4910 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.680744 4910 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.683543 4910 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 26 21:55:15 crc kubenswrapper[4910]: E0226 21:55:15.688457 4910 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.706293 4910 log.go:25] "Validated CRI v1 runtime API" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.746092 4910 log.go:25] "Validated CRI v1 image API" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.751734 4910 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.758403 4910 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-26-21-49-42-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.758482 4910 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.785091 4910 manager.go:217] Machine: {Timestamp:2026-02-26 21:55:15.782368602 +0000 UTC m=+0.861859213 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:5c941e7c-dc2b-467c-aace-fa09e4c41edd 
BootID:aee21706-93ed-49c2-8be6-5ac437ca1d73 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:04:a0:cb Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:04:a0:cb Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:b9:11:f0 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:fe:07:03 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:f8:d1:59 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:ba:b8:f2 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:46:a8:c9:83:04:42 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:7a:e8:5f:e3:f1:62 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified 
Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] 
Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.785667 4910 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.785839 4910 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.787057 4910 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.787437 4910 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.787506 4910 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.787940 4910 topology_manager.go:138] "Creating topology manager with none policy" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.787960 4910 container_manager_linux.go:303] "Creating device plugin manager" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.788516 4910 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.788574 4910 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.788807 4910 state_mem.go:36] "Initialized new in-memory state store" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.788943 4910 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.793291 4910 kubelet.go:418] "Attempting to sync node with API server" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.793327 4910 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.793367 4910 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.793387 4910 kubelet.go:324] "Adding apiserver pod source" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.793407 4910 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.797621 4910 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.800696 4910 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Feb 26 21:55:15 crc kubenswrapper[4910]: E0226 21:55:15.800814 4910 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.800871 4910 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Feb 26 21:55:15 crc kubenswrapper[4910]: E0226 21:55:15.800937 4910 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.801461 4910 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.804194 4910 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.805887 4910 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.805931 4910 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.805947 4910 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.805961 4910 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.805985 4910 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.806025 4910 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.806040 4910 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.806061 4910 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.806076 4910 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.806090 4910 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.806124 4910 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.806137 4910 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.806850 4910 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.807561 4910 server.go:1280] "Started kubelet" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.807995 4910 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.807816 4910 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.809523 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.810098 4910 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 26 21:55:15 crc systemd[1]: Started Kubernetes Kubelet. Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.820151 4910 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.820774 4910 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.821928 4910 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.822147 4910 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.822466 4910 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 26 21:55:15 crc kubenswrapper[4910]: E0226 21:55:15.821335 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:55:15 crc kubenswrapper[4910]: E0226 21:55:15.823263 4910 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="200ms" Feb 26 21:55:15 crc kubenswrapper[4910]: E0226 21:55:15.822314 4910 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.223:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1897ea97bbee7d29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.807509801 +0000 UTC m=+0.887000372,LastTimestamp:2026-02-26 21:55:15.807509801 +0000 UTC m=+0.887000372,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.826789 4910 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Feb 26 21:55:15 crc kubenswrapper[4910]: E0226 21:55:15.827022 4910 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.827317 4910 factory.go:55] Registering systemd factory Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.827482 4910 factory.go:221] Registration of the systemd 
container factory successfully Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.827604 4910 server.go:460] "Adding debug handlers to kubelet server" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.828153 4910 factory.go:153] Registering CRI-O factory Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.829394 4910 factory.go:221] Registration of the crio container factory successfully Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.829534 4910 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.829598 4910 factory.go:103] Registering Raw factory Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.829638 4910 manager.go:1196] Started watching for new ooms in manager Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.830841 4910 manager.go:319] Starting recovery of all containers Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.844316 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.844415 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.844439 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.844461 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.844480 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.844499 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.844517 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.844537 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.844558 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.844576 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.844595 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.844614 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.844636 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.844657 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.844676 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" 
seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.844694 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.844712 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.844734 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.844756 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.844774 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.844792 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 
21:55:15.844814 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.844833 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.844853 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.844871 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.844890 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.844914 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.844935 4910 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845012 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845032 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845050 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845069 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845088 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845107 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845125 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845144 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845210 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845252 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845272 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845291 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845311 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845339 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845368 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845395 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845531 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845550 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845569 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845587 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845606 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845627 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845646 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845676 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845777 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845800 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845820 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845841 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845870 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845889 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845915 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845934 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.845983 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846002 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846021 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846041 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" 
seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846060 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846079 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846099 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846190 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846262 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846290 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846321 4910 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846410 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846429 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846448 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846466 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846486 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846532 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846550 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846569 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846587 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846607 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846625 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846645 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846663 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846718 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846766 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846831 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846850 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846869 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" 
seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846887 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846904 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846923 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846975 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.846993 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.847010 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: 
I0226 21:55:15.847058 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.847141 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.847159 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.847202 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.847220 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.849285 4910 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" 
deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.849325 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.849345 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.849365 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.849387 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.849544 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.849568 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.849590 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.849635 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.849720 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.849742 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.849761 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.849784 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" 
seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.849829 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.849850 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.849891 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.849930 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.849948 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.849966 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 26 21:55:15 crc 
kubenswrapper[4910]: I0226 21:55:15.849985 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.850004 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.850022 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.850039 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.850058 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.850125 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.850144 4910 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.850219 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.850240 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.850289 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.850309 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.850338 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.850393 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.850459 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.850478 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.850496 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.850514 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.850533 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.850607 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.850626 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.850643 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.850694 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.850712 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.850791 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.850810 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.850828 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.850847 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.850867 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.850962 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851005 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851024 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851077 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851154 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851203 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851221 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851238 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851257 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" 
seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851317 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851334 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851353 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851371 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851388 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851406 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 26 21:55:15 
crc kubenswrapper[4910]: I0226 21:55:15.851427 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851491 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851533 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851554 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851574 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851593 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851610 4910 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851627 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851676 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851695 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851737 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851755 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851774 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851792 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851810 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851829 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851847 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851894 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851951 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.851991 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.852009 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.852027 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.852045 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.852071 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.852090 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.852108 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.852127 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.852145 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.852194 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.852213 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.852230 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.852249 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.852268 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.852330 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.852349 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.852368 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.852387 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.852407 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.852424 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.852443 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.852461 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.852478 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.852501 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" 
seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.852520 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.852538 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.852558 4910 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.852576 4910 reconstruct.go:97] "Volume reconstruction finished" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.852590 4910 reconciler.go:26] "Reconciler: start to sync state" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.868606 4910 manager.go:324] Recovery completed Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.891825 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.893608 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.893644 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.893657 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 
26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.894496 4910 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.894515 4910 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.894535 4910 state_mem.go:36] "Initialized new in-memory state store" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.897025 4910 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.900116 4910 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.900214 4910 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.900246 4910 kubelet.go:2335] "Starting kubelet main sync loop" Feb 26 21:55:15 crc kubenswrapper[4910]: E0226 21:55:15.900322 4910 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 26 21:55:15 crc kubenswrapper[4910]: W0226 21:55:15.900735 4910 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Feb 26 21:55:15 crc kubenswrapper[4910]: E0226 21:55:15.900786 4910 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.913437 4910 policy_none.go:49] "None policy: Start" Feb 26 21:55:15 
crc kubenswrapper[4910]: I0226 21:55:15.914666 4910 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 26 21:55:15 crc kubenswrapper[4910]: I0226 21:55:15.914693 4910 state_mem.go:35] "Initializing new in-memory state store" Feb 26 21:55:15 crc kubenswrapper[4910]: E0226 21:55:15.923117 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:55:16 crc kubenswrapper[4910]: E0226 21:55:16.000716 4910 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.000882 4910 manager.go:334] "Starting Device Plugin manager" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.000931 4910 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.000946 4910 server.go:79] "Starting device plugin registration server" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.002424 4910 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.002448 4910 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.002882 4910 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.002961 4910 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.002969 4910 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 26 21:55:16 crc kubenswrapper[4910]: E0226 21:55:16.014626 4910 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 21:55:16 crc 
kubenswrapper[4910]: E0226 21:55:16.025278 4910 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="400ms" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.103863 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.105222 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.105302 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.105326 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.105366 4910 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 21:55:16 crc kubenswrapper[4910]: E0226 21:55:16.105983 4910 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.223:6443: connect: connection refused" node="crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.201813 4910 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.201923 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.203472 
4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.203522 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.203539 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.203707 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.203909 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.203983 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.205218 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.205270 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.205295 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.205352 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.205440 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.205459 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.205478 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.206538 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.206603 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.206629 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.206815 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.207032 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.207097 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.207337 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.207419 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.207979 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.208015 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.208036 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.208289 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.208439 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.208508 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.209257 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.209310 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.209333 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.209925 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.209978 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.209994 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.210030 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.210053 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.210005 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.210202 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:16 
crc kubenswrapper[4910]: I0226 21:55:16.210251 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.210274 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.210525 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.210576 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.211690 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.211752 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.211778 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.257798 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.257867 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 
21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.257903 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.257934 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.258137 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.258245 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.258330 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 21:55:16 crc 
kubenswrapper[4910]: I0226 21:55:16.258379 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.258409 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.258438 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.258466 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.258496 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.258528 4910 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.258557 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.258648 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.306255 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.307881 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.307935 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.307953 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.307992 4910 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 21:55:16 crc kubenswrapper[4910]: E0226 21:55:16.308630 4910 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.223:6443: connect: connection refused" node="crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.360287 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.360332 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.360370 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.360400 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.360432 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.360462 4910 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.360524 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.360530 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.360587 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.360614 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.360553 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.360590 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.360551 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.360645 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.360636 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.360687 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.360729 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.360739 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.360577 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.360769 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.360794 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.360840 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 
26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.360880 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.360913 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.360923 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.360953 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.360970 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.360988 4910 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.361021 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.361227 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: E0226 21:55:16.426770 4910 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="800ms" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.540228 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.553810 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.580372 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.599245 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: W0226 21:55:16.603502 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-fd45fec41f4be06b9bc81b927030ec3587e4ab98d343be3d9efb7584a5032553 WatchSource:0}: Error finding container fd45fec41f4be06b9bc81b927030ec3587e4ab98d343be3d9efb7584a5032553: Status 404 returned error can't find the container with id fd45fec41f4be06b9bc81b927030ec3587e4ab98d343be3d9efb7584a5032553 Feb 26 21:55:16 crc kubenswrapper[4910]: W0226 21:55:16.605674 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-12cb53f0ba8af7de07a88ccf21bdda89503ebd18b3f3bf193a3d12b8428bbe97 WatchSource:0}: Error finding container 12cb53f0ba8af7de07a88ccf21bdda89503ebd18b3f3bf193a3d12b8428bbe97: Status 404 returned error can't find the container with id 12cb53f0ba8af7de07a88ccf21bdda89503ebd18b3f3bf193a3d12b8428bbe97 Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.608498 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 21:55:16 crc kubenswrapper[4910]: W0226 21:55:16.613948 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-4fe82956db47b418a0e92d5b9a239f85160b01d91e85a20d0560997b91023136 WatchSource:0}: Error finding container 4fe82956db47b418a0e92d5b9a239f85160b01d91e85a20d0560997b91023136: Status 404 returned error can't find the container with id 4fe82956db47b418a0e92d5b9a239f85160b01d91e85a20d0560997b91023136 Feb 26 21:55:16 crc kubenswrapper[4910]: W0226 21:55:16.620997 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-cd719cb25cb64c3f09062333c4d0e38cc170e8cdc57e34788840770c9e912461 WatchSource:0}: Error finding container cd719cb25cb64c3f09062333c4d0e38cc170e8cdc57e34788840770c9e912461: Status 404 returned error can't find the container with id cd719cb25cb64c3f09062333c4d0e38cc170e8cdc57e34788840770c9e912461 Feb 26 21:55:16 crc kubenswrapper[4910]: W0226 21:55:16.629125 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-67b31c095f0b686b0cdda4bf7ae171077adaa8be39b1dac0fd1ba47a894853e9 WatchSource:0}: Error finding container 67b31c095f0b686b0cdda4bf7ae171077adaa8be39b1dac0fd1ba47a894853e9: Status 404 returned error can't find the container with id 67b31c095f0b686b0cdda4bf7ae171077adaa8be39b1dac0fd1ba47a894853e9 Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.709456 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.711054 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.711108 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.711125 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.711217 4910 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 21:55:16 crc kubenswrapper[4910]: E0226 21:55:16.712000 4910 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.223:6443: connect: connection refused" node="crc" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.810341 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Feb 26 21:55:16 crc kubenswrapper[4910]: W0226 21:55:16.828690 4910 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Feb 26 21:55:16 crc kubenswrapper[4910]: E0226 21:55:16.828798 4910 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Feb 26 21:55:16 crc kubenswrapper[4910]: W0226 21:55:16.853828 4910 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Feb 26 21:55:16 crc kubenswrapper[4910]: E0226 21:55:16.853924 4910 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.904200 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cd719cb25cb64c3f09062333c4d0e38cc170e8cdc57e34788840770c9e912461"} Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.905535 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4fe82956db47b418a0e92d5b9a239f85160b01d91e85a20d0560997b91023136"} Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.906695 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"12cb53f0ba8af7de07a88ccf21bdda89503ebd18b3f3bf193a3d12b8428bbe97"} Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.907524 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"fd45fec41f4be06b9bc81b927030ec3587e4ab98d343be3d9efb7584a5032553"} Feb 26 21:55:16 crc kubenswrapper[4910]: I0226 21:55:16.908341 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"67b31c095f0b686b0cdda4bf7ae171077adaa8be39b1dac0fd1ba47a894853e9"} Feb 26 21:55:17 crc kubenswrapper[4910]: W0226 21:55:17.062999 4910 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Feb 26 21:55:17 crc kubenswrapper[4910]: E0226 21:55:17.063138 4910 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Feb 26 21:55:17 crc kubenswrapper[4910]: W0226 21:55:17.223478 4910 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Feb 26 21:55:17 crc kubenswrapper[4910]: E0226 21:55:17.223588 4910 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Feb 26 21:55:17 crc kubenswrapper[4910]: E0226 21:55:17.228226 4910 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="1.6s" Feb 26 21:55:17 
crc kubenswrapper[4910]: I0226 21:55:17.512821 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.514606 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.514644 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.514655 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.514680 4910 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 21:55:17 crc kubenswrapper[4910]: E0226 21:55:17.515087 4910 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.223:6443: connect: connection refused" node="crc" Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.810720 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.834156 4910 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 21:55:17 crc kubenswrapper[4910]: E0226 21:55:17.834966 4910 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError" Feb 26 
21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.912037 4910 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5ea207ddeb9a6daf519e2619d3ad80f296ae17918bff4c72951f73721e967ce0" exitCode=0 Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.912097 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5ea207ddeb9a6daf519e2619d3ad80f296ae17918bff4c72951f73721e967ce0"} Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.912219 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.912902 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.912924 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.912932 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.914476 4910 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="947d23e15eb07388a7f15fba6b894f5fd0f55e31ff3f4120cb161e2ff8bff246" exitCode=0 Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.914517 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"947d23e15eb07388a7f15fba6b894f5fd0f55e31ff3f4120cb161e2ff8bff246"} Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.914563 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:17 crc 
kubenswrapper[4910]: I0226 21:55:17.915204 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.915238 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.915260 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.917381 4910 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009" exitCode=0 Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.917429 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009"} Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.917520 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.918315 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.918367 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.918389 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.920665 4910 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097" 
exitCode=0 Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.920763 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097"} Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.920808 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.922331 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.922360 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.922372 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.924501 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b6e83b5f52098333208ceae9cb71f1600f0b28e2567f791320e835a5611d83ff"} Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.924593 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9e77cc229716b28e61b23d386e84c0ac84b010c02cd46a8a6f7b1735fdf02b24"} Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.924611 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d72b52b910bf5e9a00497ed002d962476646d86358d4316303a2442593e14b17"}
Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.927112 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.927905 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.927920 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 21:55:17 crc kubenswrapper[4910]: I0226 21:55:17.927927 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 21:55:18 crc kubenswrapper[4910]: I0226 21:55:18.810778 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused
Feb 26 21:55:18 crc kubenswrapper[4910]: E0226 21:55:18.829036 4910 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="3.2s"
Feb 26 21:55:18 crc kubenswrapper[4910]: I0226 21:55:18.937596 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77"}
Feb 26 21:55:18 crc kubenswrapper[4910]: I0226 21:55:18.937638 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd"}
Feb 26 21:55:18 crc kubenswrapper[4910]: I0226 21:55:18.937649 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d"}
Feb 26 21:55:18 crc kubenswrapper[4910]: I0226 21:55:18.937715 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 21:55:18 crc kubenswrapper[4910]: I0226 21:55:18.940534 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 21:55:18 crc kubenswrapper[4910]: I0226 21:55:18.940580 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 21:55:18 crc kubenswrapper[4910]: I0226 21:55:18.940589 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 21:55:18 crc kubenswrapper[4910]: I0226 21:55:18.943475 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4"}
Feb 26 21:55:18 crc kubenswrapper[4910]: I0226 21:55:18.943534 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec"}
Feb 26 21:55:18 crc kubenswrapper[4910]: I0226 21:55:18.943550 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2"}
Feb 26 21:55:18 crc kubenswrapper[4910]: I0226 21:55:18.943561 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66"}
Feb 26 21:55:18 crc kubenswrapper[4910]: I0226 21:55:18.945534 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f404d374a3a5c9d1fca5b21888d2af718a36d5c02ba8bf2590209a401e879840"}
Feb 26 21:55:18 crc kubenswrapper[4910]: I0226 21:55:18.945651 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 21:55:18 crc kubenswrapper[4910]: I0226 21:55:18.946923 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 21:55:18 crc kubenswrapper[4910]: I0226 21:55:18.946949 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 21:55:18 crc kubenswrapper[4910]: I0226 21:55:18.946958 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 21:55:18 crc kubenswrapper[4910]: I0226 21:55:18.947885 4910 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d2e05875f42fb6588e0c11f3b26b5af8c76fb2fac4c7a7349cdea6674b741fb9" exitCode=0
Feb 26 21:55:18 crc kubenswrapper[4910]: I0226 21:55:18.947964 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d2e05875f42fb6588e0c11f3b26b5af8c76fb2fac4c7a7349cdea6674b741fb9"}
Feb 26 21:55:18 crc kubenswrapper[4910]: I0226 21:55:18.948055 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 21:55:18 crc kubenswrapper[4910]: I0226 21:55:18.948997 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 21:55:18 crc kubenswrapper[4910]: I0226 21:55:18.949022 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 21:55:18 crc kubenswrapper[4910]: I0226 21:55:18.949032 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 21:55:18 crc kubenswrapper[4910]: I0226 21:55:18.949834 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"98e73558dfd4fa356a62499de610e53d1b16c3ab9d402622b419e79b56d17f7a"}
Feb 26 21:55:18 crc kubenswrapper[4910]: I0226 21:55:18.949915 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 21:55:18 crc kubenswrapper[4910]: I0226 21:55:18.951638 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 21:55:18 crc kubenswrapper[4910]: I0226 21:55:18.951657 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 21:55:18 crc kubenswrapper[4910]: I0226 21:55:18.951665 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 21:55:19 crc kubenswrapper[4910]: I0226 21:55:19.115850 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 21:55:19 crc kubenswrapper[4910]: I0226 21:55:19.135864 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 21:55:19 crc kubenswrapper[4910]: I0226 21:55:19.135894 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 21:55:19 crc kubenswrapper[4910]: I0226 21:55:19.135903 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 21:55:19 crc kubenswrapper[4910]: I0226 21:55:19.135927 4910 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 26 21:55:19 crc kubenswrapper[4910]: E0226 21:55:19.136365 4910 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.223:6443: connect: connection refused" node="crc"
Feb 26 21:55:19 crc kubenswrapper[4910]: W0226 21:55:19.216313 4910 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused
Feb 26 21:55:19 crc kubenswrapper[4910]: E0226 21:55:19.216457 4910 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError"
Feb 26 21:55:19 crc kubenswrapper[4910]: W0226 21:55:19.255008 4910 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused
Feb 26 21:55:19 crc kubenswrapper[4910]: E0226 21:55:19.255138 4910 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.223:6443: connect: connection refused" logger="UnhandledError"
Feb 26 21:55:19 crc kubenswrapper[4910]: I0226 21:55:19.956830 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 21:55:19 crc kubenswrapper[4910]: I0226 21:55:19.956815 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"df0965cfa0dfeffdbdd105d67fe00ed3158d5145ab987e3ba3f00b0849ee5eeb"}
Feb 26 21:55:19 crc kubenswrapper[4910]: I0226 21:55:19.958008 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 21:55:19 crc kubenswrapper[4910]: I0226 21:55:19.958042 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 21:55:19 crc kubenswrapper[4910]: I0226 21:55:19.958053 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 21:55:19 crc kubenswrapper[4910]: I0226 21:55:19.958995 4910 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4c33b689254614e509f24729309ddd9daf0cd25dde03554323435ca8b2f46696" exitCode=0
Feb 26 21:55:19 crc kubenswrapper[4910]: I0226 21:55:19.959073 4910 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 26 21:55:19 crc kubenswrapper[4910]: I0226 21:55:19.959098 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 21:55:19 crc kubenswrapper[4910]: I0226 21:55:19.959188 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 21:55:19 crc kubenswrapper[4910]: I0226 21:55:19.959192 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4c33b689254614e509f24729309ddd9daf0cd25dde03554323435ca8b2f46696"}
Feb 26 21:55:19 crc kubenswrapper[4910]: I0226 21:55:19.959267 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 21:55:19 crc kubenswrapper[4910]: I0226 21:55:19.959462 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 21:55:19 crc kubenswrapper[4910]: I0226 21:55:19.960615 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 21:55:19 crc kubenswrapper[4910]: I0226 21:55:19.960668 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 21:55:19 crc kubenswrapper[4910]: I0226 21:55:19.960683 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 21:55:19 crc kubenswrapper[4910]: I0226 21:55:19.960720 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 21:55:19 crc kubenswrapper[4910]: I0226 21:55:19.960737 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 21:55:19 crc kubenswrapper[4910]: I0226 21:55:19.960687 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 21:55:19 crc kubenswrapper[4910]: I0226 21:55:19.960877 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 21:55:19 crc kubenswrapper[4910]: I0226 21:55:19.960917 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 21:55:19 crc kubenswrapper[4910]: I0226 21:55:19.960939 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 21:55:19 crc kubenswrapper[4910]: I0226 21:55:19.962117 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 21:55:19 crc kubenswrapper[4910]: I0226 21:55:19.962194 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 21:55:19 crc kubenswrapper[4910]: I0226 21:55:19.962212 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 21:55:20 crc kubenswrapper[4910]: I0226 21:55:20.425688 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 26 21:55:20 crc kubenswrapper[4910]: I0226 21:55:20.963415 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 21:55:20 crc kubenswrapper[4910]: I0226 21:55:20.969116 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d3133f944884e882b5f9ef27a231c66d5dc875ce598f6f873800068d8d91d1fc"}
Feb 26 21:55:20 crc kubenswrapper[4910]: I0226 21:55:20.969200 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4babf71eea3ac8c428ccc06dd30d6050c38c2ca1db1369bea420ee6f22a1c8d0"}
Feb 26 21:55:20 crc kubenswrapper[4910]: I0226 21:55:20.969225 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9363ec915adb6ccea5cef83bea6f316ef62406876e85e4bd8f9169f713e9dedd"}
Feb 26 21:55:20 crc kubenswrapper[4910]: I0226 21:55:20.969277 4910 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 26 21:55:20 crc kubenswrapper[4910]: I0226 21:55:20.969338 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 21:55:20 crc kubenswrapper[4910]: I0226 21:55:20.969344 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 21:55:20 crc kubenswrapper[4910]: I0226 21:55:20.970854 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 21:55:20 crc kubenswrapper[4910]: I0226 21:55:20.970918 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 21:55:20 crc kubenswrapper[4910]: I0226 21:55:20.970864 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 21:55:20 crc kubenswrapper[4910]: I0226 21:55:20.970987 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 21:55:20 crc kubenswrapper[4910]: I0226 21:55:20.971019 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 21:55:20 crc kubenswrapper[4910]: I0226 21:55:20.971045 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 21:55:21 crc kubenswrapper[4910]: I0226 21:55:21.690617 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 21:55:21 crc kubenswrapper[4910]: I0226 21:55:21.980974 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"eb0c9fe8f846e6307700a6e78bb8af0ce159b62ff979b434b4520792296601f7"}
Feb 26 21:55:21 crc kubenswrapper[4910]: I0226 21:55:21.981085 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"29d5c8e707107c8468c8c93dad9ab2ac1942031a7d44ca608d617ad624b776d4"}
Feb 26 21:55:21 crc kubenswrapper[4910]: I0226 21:55:21.981017 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 21:55:21 crc kubenswrapper[4910]: I0226 21:55:21.981101 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 21:55:21 crc kubenswrapper[4910]: I0226 21:55:21.982829 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 21:55:21 crc kubenswrapper[4910]: I0226 21:55:21.982895 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 21:55:21 crc kubenswrapper[4910]: I0226 21:55:21.982919 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 21:55:21 crc kubenswrapper[4910]: I0226 21:55:21.983750 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 21:55:21 crc kubenswrapper[4910]: I0226 21:55:21.983791 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 21:55:21 crc kubenswrapper[4910]: I0226 21:55:21.983801 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 21:55:22 crc kubenswrapper[4910]: I0226 21:55:22.002361 4910 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 26 21:55:22 crc kubenswrapper[4910]: I0226 21:55:22.337392 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 21:55:22 crc kubenswrapper[4910]: I0226 21:55:22.339289 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 21:55:22 crc kubenswrapper[4910]: I0226 21:55:22.339341 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 21:55:22 crc kubenswrapper[4910]: I0226 21:55:22.339360 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 21:55:22 crc kubenswrapper[4910]: I0226 21:55:22.339394 4910 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 26 21:55:22 crc kubenswrapper[4910]: I0226 21:55:22.984023 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 21:55:22 crc kubenswrapper[4910]: I0226 21:55:22.984128 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 21:55:22 crc kubenswrapper[4910]: I0226 21:55:22.985289 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 21:55:22 crc kubenswrapper[4910]: I0226 21:55:22.985329 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 21:55:22 crc kubenswrapper[4910]: I0226 21:55:22.985342 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 21:55:22 crc kubenswrapper[4910]: I0226 21:55:22.985753 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 21:55:22 crc kubenswrapper[4910]: I0226 21:55:22.985810 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 21:55:22 crc kubenswrapper[4910]: I0226 21:55:22.985830 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 21:55:23 crc kubenswrapper[4910]: I0226 21:55:23.597854 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 21:55:23 crc kubenswrapper[4910]: I0226 21:55:23.598118 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 21:55:23 crc kubenswrapper[4910]: I0226 21:55:23.599669 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 21:55:23 crc kubenswrapper[4910]: I0226 21:55:23.599740 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 21:55:23 crc kubenswrapper[4910]: I0226 21:55:23.599758 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 21:55:24 crc kubenswrapper[4910]: I0226 21:55:24.500789 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Feb 26 21:55:24 crc kubenswrapper[4910]: I0226 21:55:24.501076 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 21:55:24 crc kubenswrapper[4910]: I0226 21:55:24.502565 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 21:55:24 crc kubenswrapper[4910]: I0226 21:55:24.502612 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 21:55:24 crc kubenswrapper[4910]: I0226 21:55:24.502631 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 21:55:24 crc kubenswrapper[4910]: I0226 21:55:24.621449 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 21:55:24 crc kubenswrapper[4910]: I0226 21:55:24.621720 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 21:55:24 crc kubenswrapper[4910]: I0226 21:55:24.623352 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 21:55:24 crc kubenswrapper[4910]: I0226 21:55:24.623638 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 21:55:24 crc kubenswrapper[4910]: I0226 21:55:24.623778 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 21:55:25 crc kubenswrapper[4910]: I0226 21:55:25.023502 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 21:55:25 crc kubenswrapper[4910]: I0226 21:55:25.023746 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 21:55:25 crc kubenswrapper[4910]: I0226 21:55:25.025132 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 21:55:25 crc kubenswrapper[4910]: I0226 21:55:25.025244 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 21:55:25 crc kubenswrapper[4910]: I0226 21:55:25.025265 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 21:55:25 crc kubenswrapper[4910]: I0226 21:55:25.031414 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 21:55:25 crc kubenswrapper[4910]: I0226 21:55:25.992589 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 21:55:25 crc kubenswrapper[4910]: I0226 21:55:25.992718 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 21:55:25 crc kubenswrapper[4910]: I0226 21:55:25.993885 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 21:55:25 crc kubenswrapper[4910]: I0226 21:55:25.993939 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 21:55:25 crc kubenswrapper[4910]: I0226 21:55:25.993958 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 21:55:26 crc kubenswrapper[4910]: E0226 21:55:26.014977 4910 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 26 21:55:26 crc kubenswrapper[4910]: I0226 21:55:26.422613 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 21:55:26 crc kubenswrapper[4910]: I0226 21:55:26.902567 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 26 21:55:26 crc kubenswrapper[4910]: I0226 21:55:26.903144 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 21:55:26 crc kubenswrapper[4910]: I0226 21:55:26.905040 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 21:55:26 crc kubenswrapper[4910]: I0226 21:55:26.905321 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 21:55:26 crc kubenswrapper[4910]: I0226 21:55:26.905492 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 21:55:26 crc kubenswrapper[4910]: I0226 21:55:26.995535 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 21:55:26 crc kubenswrapper[4910]: I0226 21:55:26.997710 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 21:55:26 crc kubenswrapper[4910]: I0226 21:55:26.997817 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 21:55:26 crc kubenswrapper[4910]: I0226 21:55:26.997841 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 21:55:27 crc kubenswrapper[4910]: I0226 21:55:27.999051 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 21:55:28 crc kubenswrapper[4910]: I0226 21:55:28.001037 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 21:55:28 crc kubenswrapper[4910]: I0226 21:55:28.001117 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 21:55:28 crc kubenswrapper[4910]: I0226 21:55:28.001145 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 21:55:28 crc kubenswrapper[4910]: I0226 21:55:28.008384 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 21:55:29 crc kubenswrapper[4910]: I0226 21:55:29.001594 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 21:55:29 crc kubenswrapper[4910]: I0226 21:55:29.003239 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 21:55:29 crc kubenswrapper[4910]: I0226 21:55:29.003320 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 21:55:29 crc kubenswrapper[4910]: I0226 21:55:29.003346 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 21:55:29 crc kubenswrapper[4910]: I0226 21:55:29.423491 4910 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 21:55:29 crc kubenswrapper[4910]: I0226 21:55:29.423616 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 21:55:29 crc kubenswrapper[4910]: I0226 21:55:29.811811 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Feb 26 21:55:29 crc kubenswrapper[4910]: W0226 21:55:29.958200 4910 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Feb 26 21:55:29 crc kubenswrapper[4910]: I0226 21:55:29.958327 4910 trace.go:236] Trace[1139282635]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Feb-2026 21:55:19.957) (total time: 10000ms):
Feb 26 21:55:29 crc kubenswrapper[4910]: Trace[1139282635]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (21:55:29.958)
Feb 26 21:55:29 crc kubenswrapper[4910]: Trace[1139282635]: [10.000932774s] [10.000932774s] END
Feb 26 21:55:29 crc kubenswrapper[4910]: E0226 21:55:29.958361 4910 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Feb 26 21:55:30 crc kubenswrapper[4910]: W0226 21:55:30.244136 4910 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Feb 26 21:55:30 crc kubenswrapper[4910]: I0226 21:55:30.244276 4910 trace.go:236] Trace[1816526930]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Feb-2026 21:55:20.242) (total time: 10001ms):
Feb 26 21:55:30 crc kubenswrapper[4910]: Trace[1816526930]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (21:55:30.244)
Feb 26 21:55:30 crc kubenswrapper[4910]: Trace[1816526930]: [10.001833837s] [10.001833837s] END
Feb 26 21:55:30 crc kubenswrapper[4910]: E0226 21:55:30.244302 4910 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Feb 26 21:55:30 crc kubenswrapper[4910]: I0226 21:55:30.503547 4910 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 26 21:55:30 crc kubenswrapper[4910]: I0226 21:55:30.503642 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 26 21:55:30 crc kubenswrapper[4910]: E0226 21:55:30.511676 4910 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:30Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 26 21:55:30 crc kubenswrapper[4910]: E0226 21:55:30.512907 4910 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:30Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1897ea97bbee7d29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.807509801 +0000 UTC m=+0.887000372,LastTimestamp:2026-02-26 21:55:15.807509801 +0000 UTC m=+0.887000372,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 26 21:55:30 crc kubenswrapper[4910]: E0226 21:55:30.513193 4910 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:30Z is after 2026-02-23T05:33:13Z" node="crc"
Feb 26 21:55:30 crc kubenswrapper[4910]: E0226 21:55:30.513310 4910 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:30Z is after 2026-02-23T05:33:13Z" interval="6.4s"
Feb 26 21:55:30 crc kubenswrapper[4910]: W0226 21:55:30.516869 4910 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:30Z is after 2026-02-23T05:33:13Z
Feb 26 21:55:30 crc kubenswrapper[4910]: E0226 21:55:30.516952 4910 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:30Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 26 21:55:30 crc kubenswrapper[4910]: I0226 21:55:30.518310 4910 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 26 21:55:30 crc kubenswrapper[4910]: I0226 21:55:30.518399 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 26 21:55:30 crc kubenswrapper[4910]: W0226 21:55:30.521565 4910 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:30Z is after 2026-02-23T05:33:13Z
Feb 26 21:55:30 crc kubenswrapper[4910]: E0226 21:55:30.521628 4910 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:30Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 26 21:55:30 crc kubenswrapper[4910]: I0226 21:55:30.815227 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:30Z is after 2026-02-23T05:33:13Z
Feb 26 21:55:30 crc kubenswrapper[4910]: I0226 21:55:30.971337 4910 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 26 21:55:30 crc kubenswrapper[4910]: [+]log ok
Feb 26 21:55:30 crc kubenswrapper[4910]: [+]etcd ok
Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/openshift.io-api-request-count-filter ok
Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/openshift.io-startkubeinformers ok
Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/priority-and-fairness-config-consumer ok
Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/priority-and-fairness-filter ok
Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/start-apiextensions-informers ok
Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/start-apiextensions-controllers ok
Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/crd-informer-synced ok
Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/start-system-namespaces-controller ok
Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/start-cluster-authentication-info-controller ok
Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Feb 26 21:55:30 crc kubenswrapper[4910]:
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 26 21:55:30 crc kubenswrapper[4910]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 26 21:55:30 crc kubenswrapper[4910]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/bootstrap-controller ok Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/start-kube-aggregator-informers ok Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/apiservice-registration-controller ok Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/apiservice-discovery-controller ok Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 26 21:55:30 crc kubenswrapper[4910]: [+]autoregister-completion ok Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/apiservice-openapi-controller ok Feb 26 21:55:30 crc kubenswrapper[4910]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 26 21:55:30 crc kubenswrapper[4910]: livez check failed Feb 26 21:55:30 crc kubenswrapper[4910]: I0226 21:55:30.971431 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 21:55:31 crc kubenswrapper[4910]: I0226 21:55:31.009324 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 26 21:55:31 crc kubenswrapper[4910]: I0226 21:55:31.013109 4910 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="df0965cfa0dfeffdbdd105d67fe00ed3158d5145ab987e3ba3f00b0849ee5eeb" exitCode=255 Feb 26 21:55:31 crc kubenswrapper[4910]: I0226 21:55:31.013188 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"df0965cfa0dfeffdbdd105d67fe00ed3158d5145ab987e3ba3f00b0849ee5eeb"} Feb 26 21:55:31 crc kubenswrapper[4910]: I0226 21:55:31.013399 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:31 crc kubenswrapper[4910]: I0226 21:55:31.014655 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:31 crc kubenswrapper[4910]: I0226 21:55:31.014704 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:31 crc kubenswrapper[4910]: I0226 21:55:31.014721 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:31 crc kubenswrapper[4910]: I0226 21:55:31.015513 4910 scope.go:117] "RemoveContainer" containerID="df0965cfa0dfeffdbdd105d67fe00ed3158d5145ab987e3ba3f00b0849ee5eeb" Feb 26 21:55:31 crc kubenswrapper[4910]: I0226 21:55:31.813533 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:31Z is after 2026-02-23T05:33:13Z Feb 26 21:55:32 crc kubenswrapper[4910]: I0226 21:55:32.019808 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 26 21:55:32 crc kubenswrapper[4910]: I0226 21:55:32.022897 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"900b80f58fe669b61d631e6f83af2b45af9b43e5acd1c07b651f090825400a55"} Feb 26 21:55:32 crc kubenswrapper[4910]: I0226 21:55:32.023204 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:32 crc kubenswrapper[4910]: I0226 21:55:32.024457 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:32 crc kubenswrapper[4910]: I0226 21:55:32.024527 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:32 crc kubenswrapper[4910]: I0226 21:55:32.024547 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:32 crc kubenswrapper[4910]: I0226 21:55:32.816040 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:32Z is after 2026-02-23T05:33:13Z Feb 26 21:55:33 crc kubenswrapper[4910]: I0226 21:55:33.027903 4910 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 26 21:55:33 crc kubenswrapper[4910]: I0226 21:55:33.028645 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 26 21:55:33 crc kubenswrapper[4910]: I0226 21:55:33.031316 4910 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="900b80f58fe669b61d631e6f83af2b45af9b43e5acd1c07b651f090825400a55" exitCode=255 Feb 26 21:55:33 crc kubenswrapper[4910]: I0226 21:55:33.031385 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"900b80f58fe669b61d631e6f83af2b45af9b43e5acd1c07b651f090825400a55"} Feb 26 21:55:33 crc kubenswrapper[4910]: I0226 21:55:33.031456 4910 scope.go:117] "RemoveContainer" containerID="df0965cfa0dfeffdbdd105d67fe00ed3158d5145ab987e3ba3f00b0849ee5eeb" Feb 26 21:55:33 crc kubenswrapper[4910]: I0226 21:55:33.031683 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:33 crc kubenswrapper[4910]: I0226 21:55:33.033002 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:33 crc kubenswrapper[4910]: I0226 21:55:33.033064 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:33 crc kubenswrapper[4910]: I0226 21:55:33.033083 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:33 crc kubenswrapper[4910]: I0226 21:55:33.034047 4910 scope.go:117] "RemoveContainer" containerID="900b80f58fe669b61d631e6f83af2b45af9b43e5acd1c07b651f090825400a55" Feb 26 21:55:33 
crc kubenswrapper[4910]: E0226 21:55:33.034369 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 21:55:33 crc kubenswrapper[4910]: W0226 21:55:33.730509 4910 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:33Z is after 2026-02-23T05:33:13Z Feb 26 21:55:33 crc kubenswrapper[4910]: E0226 21:55:33.730618 4910 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:33Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 21:55:33 crc kubenswrapper[4910]: I0226 21:55:33.814427 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:33Z is after 2026-02-23T05:33:13Z Feb 26 21:55:34 crc kubenswrapper[4910]: I0226 21:55:34.036484 4910 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 26 21:55:34 crc kubenswrapper[4910]: W0226 21:55:34.221993 4910 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:34Z is after 2026-02-23T05:33:13Z Feb 26 21:55:34 crc kubenswrapper[4910]: E0226 21:55:34.222122 4910 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:34Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 21:55:34 crc kubenswrapper[4910]: I0226 21:55:34.815422 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:34Z is after 2026-02-23T05:33:13Z Feb 26 21:55:35 crc kubenswrapper[4910]: I0226 21:55:35.817146 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:35Z is after 2026-02-23T05:33:13Z Feb 26 21:55:35 crc kubenswrapper[4910]: I0226 21:55:35.973978 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 21:55:35 crc kubenswrapper[4910]: I0226 21:55:35.974272 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:35 crc kubenswrapper[4910]: I0226 21:55:35.975836 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:35 crc kubenswrapper[4910]: I0226 21:55:35.976048 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:35 crc kubenswrapper[4910]: I0226 21:55:35.976232 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:35 crc kubenswrapper[4910]: I0226 21:55:35.977340 4910 scope.go:117] "RemoveContainer" containerID="900b80f58fe669b61d631e6f83af2b45af9b43e5acd1c07b651f090825400a55" Feb 26 21:55:35 crc kubenswrapper[4910]: E0226 21:55:35.977801 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 21:55:35 crc kubenswrapper[4910]: I0226 21:55:35.981820 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 21:55:36 crc kubenswrapper[4910]: E0226 21:55:36.015071 4910 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 21:55:36 crc kubenswrapper[4910]: I0226 21:55:36.043887 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:36 crc kubenswrapper[4910]: I0226 21:55:36.044969 4910 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:36 crc kubenswrapper[4910]: I0226 21:55:36.045066 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:36 crc kubenswrapper[4910]: I0226 21:55:36.045129 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:36 crc kubenswrapper[4910]: I0226 21:55:36.045702 4910 scope.go:117] "RemoveContainer" containerID="900b80f58fe669b61d631e6f83af2b45af9b43e5acd1c07b651f090825400a55" Feb 26 21:55:36 crc kubenswrapper[4910]: E0226 21:55:36.045937 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 21:55:36 crc kubenswrapper[4910]: I0226 21:55:36.817385 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:36Z is after 2026-02-23T05:33:13Z Feb 26 21:55:36 crc kubenswrapper[4910]: I0226 21:55:36.913576 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:36 crc kubenswrapper[4910]: I0226 21:55:36.915452 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:36 crc kubenswrapper[4910]: I0226 21:55:36.915548 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 21:55:36 crc kubenswrapper[4910]: I0226 21:55:36.915585 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:36 crc kubenswrapper[4910]: I0226 21:55:36.915625 4910 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 21:55:36 crc kubenswrapper[4910]: E0226 21:55:36.918913 4910 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:36Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 26 21:55:36 crc kubenswrapper[4910]: E0226 21:55:36.923416 4910 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:36Z is after 2026-02-23T05:33:13Z" node="crc" Feb 26 21:55:36 crc kubenswrapper[4910]: I0226 21:55:36.946038 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 26 21:55:36 crc kubenswrapper[4910]: I0226 21:55:36.946358 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:36 crc kubenswrapper[4910]: I0226 21:55:36.947808 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:36 crc kubenswrapper[4910]: I0226 21:55:36.947859 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:36 crc kubenswrapper[4910]: I0226 21:55:36.947876 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:36 crc 
kubenswrapper[4910]: I0226 21:55:36.967764 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 26 21:55:37 crc kubenswrapper[4910]: I0226 21:55:37.046879 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:37 crc kubenswrapper[4910]: I0226 21:55:37.048241 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:37 crc kubenswrapper[4910]: I0226 21:55:37.048303 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:37 crc kubenswrapper[4910]: I0226 21:55:37.048324 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:37 crc kubenswrapper[4910]: I0226 21:55:37.816050 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:37Z is after 2026-02-23T05:33:13Z Feb 26 21:55:38 crc kubenswrapper[4910]: I0226 21:55:38.234261 4910 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 21:55:38 crc kubenswrapper[4910]: I0226 21:55:38.234472 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:38 crc kubenswrapper[4910]: I0226 21:55:38.235934 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:38 crc kubenswrapper[4910]: I0226 21:55:38.235985 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:38 crc kubenswrapper[4910]: I0226 21:55:38.236003 4910 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:38 crc kubenswrapper[4910]: I0226 21:55:38.236838 4910 scope.go:117] "RemoveContainer" containerID="900b80f58fe669b61d631e6f83af2b45af9b43e5acd1c07b651f090825400a55" Feb 26 21:55:38 crc kubenswrapper[4910]: E0226 21:55:38.237119 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 21:55:38 crc kubenswrapper[4910]: I0226 21:55:38.689579 4910 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 21:55:38 crc kubenswrapper[4910]: E0226 21:55:38.695593 4910 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:38Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 21:55:38 crc kubenswrapper[4910]: I0226 21:55:38.816124 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:38Z is after 2026-02-23T05:33:13Z Feb 26 21:55:39 crc kubenswrapper[4910]: I0226 21:55:39.423280 4910 patch_prober.go:28] interesting pod/kube-controller-manager-crc 
container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 21:55:39 crc kubenswrapper[4910]: I0226 21:55:39.423464 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 21:55:39 crc kubenswrapper[4910]: I0226 21:55:39.815665 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:39Z is after 2026-02-23T05:33:13Z Feb 26 21:55:40 crc kubenswrapper[4910]: W0226 21:55:40.416295 4910 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:40Z is after 2026-02-23T05:33:13Z Feb 26 21:55:40 crc kubenswrapper[4910]: E0226 21:55:40.416561 4910 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:40Z is 
after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 21:55:40 crc kubenswrapper[4910]: E0226 21:55:40.519050 4910 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:40Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1897ea97bbee7d29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.807509801 +0000 UTC m=+0.887000372,LastTimestamp:2026-02-26 21:55:15.807509801 +0000 UTC m=+0.887000372,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:55:40 crc kubenswrapper[4910]: I0226 21:55:40.814871 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:40Z is after 2026-02-23T05:33:13Z Feb 26 21:55:41 crc kubenswrapper[4910]: I0226 21:55:41.691477 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 21:55:41 crc kubenswrapper[4910]: I0226 21:55:41.691772 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:41 crc kubenswrapper[4910]: I0226 21:55:41.693343 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:41 crc kubenswrapper[4910]: I0226 21:55:41.693398 4910 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:41 crc kubenswrapper[4910]: I0226 21:55:41.693419 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:41 crc kubenswrapper[4910]: I0226 21:55:41.694338 4910 scope.go:117] "RemoveContainer" containerID="900b80f58fe669b61d631e6f83af2b45af9b43e5acd1c07b651f090825400a55" Feb 26 21:55:41 crc kubenswrapper[4910]: E0226 21:55:41.694659 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 21:55:41 crc kubenswrapper[4910]: I0226 21:55:41.815685 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:41Z is after 2026-02-23T05:33:13Z Feb 26 21:55:42 crc kubenswrapper[4910]: W0226 21:55:42.787663 4910 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:42Z is after 2026-02-23T05:33:13Z Feb 26 21:55:42 crc kubenswrapper[4910]: E0226 21:55:42.788802 4910 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:42Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 21:55:42 crc kubenswrapper[4910]: I0226 21:55:42.815566 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:42Z is after 2026-02-23T05:33:13Z Feb 26 21:55:42 crc kubenswrapper[4910]: W0226 21:55:42.989931 4910 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:42Z is after 2026-02-23T05:33:13Z Feb 26 21:55:42 crc kubenswrapper[4910]: E0226 21:55:42.990038 4910 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:42Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 21:55:43 crc kubenswrapper[4910]: W0226 21:55:43.556874 4910 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:43Z is after 
2026-02-23T05:33:13Z Feb 26 21:55:43 crc kubenswrapper[4910]: E0226 21:55:43.557453 4910 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:43Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 21:55:43 crc kubenswrapper[4910]: I0226 21:55:43.815090 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:43Z is after 2026-02-23T05:33:13Z Feb 26 21:55:43 crc kubenswrapper[4910]: I0226 21:55:43.924208 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:43 crc kubenswrapper[4910]: I0226 21:55:43.925763 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:43 crc kubenswrapper[4910]: I0226 21:55:43.925824 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:43 crc kubenswrapper[4910]: I0226 21:55:43.925848 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:43 crc kubenswrapper[4910]: I0226 21:55:43.925892 4910 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 21:55:43 crc kubenswrapper[4910]: E0226 21:55:43.926522 4910 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:43Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 26 21:55:43 crc kubenswrapper[4910]: E0226 21:55:43.928628 4910 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:43Z is after 2026-02-23T05:33:13Z" node="crc" Feb 26 21:55:44 crc kubenswrapper[4910]: I0226 21:55:44.814392 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:44Z is after 2026-02-23T05:33:13Z Feb 26 21:55:45 crc kubenswrapper[4910]: I0226 21:55:45.816111 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:45Z is after 2026-02-23T05:33:13Z Feb 26 21:55:46 crc kubenswrapper[4910]: E0226 21:55:46.015231 4910 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 21:55:46 crc kubenswrapper[4910]: I0226 21:55:46.815206 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:46Z is after 2026-02-23T05:33:13Z Feb 26 21:55:47 crc kubenswrapper[4910]: I0226 21:55:47.815015 4910 csi_plugin.go:884] Failed to 
contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:47Z is after 2026-02-23T05:33:13Z Feb 26 21:55:48 crc kubenswrapper[4910]: I0226 21:55:48.626302 4910 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:54060->192.168.126.11:10357: read: connection reset by peer" start-of-body= Feb 26 21:55:48 crc kubenswrapper[4910]: I0226 21:55:48.626392 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:54060->192.168.126.11:10357: read: connection reset by peer" Feb 26 21:55:48 crc kubenswrapper[4910]: I0226 21:55:48.626497 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 21:55:48 crc kubenswrapper[4910]: I0226 21:55:48.626748 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:48 crc kubenswrapper[4910]: I0226 21:55:48.628231 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:48 crc kubenswrapper[4910]: I0226 21:55:48.628310 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:48 crc kubenswrapper[4910]: I0226 21:55:48.628331 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 26 21:55:48 crc kubenswrapper[4910]: I0226 21:55:48.629193 4910 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"9e77cc229716b28e61b23d386e84c0ac84b010c02cd46a8a6f7b1735fdf02b24"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 26 21:55:48 crc kubenswrapper[4910]: I0226 21:55:48.629511 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://9e77cc229716b28e61b23d386e84c0ac84b010c02cd46a8a6f7b1735fdf02b24" gracePeriod=30 Feb 26 21:55:48 crc kubenswrapper[4910]: I0226 21:55:48.813792 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:48Z is after 2026-02-23T05:33:13Z Feb 26 21:55:49 crc kubenswrapper[4910]: I0226 21:55:49.084750 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 26 21:55:49 crc kubenswrapper[4910]: I0226 21:55:49.085187 4910 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="9e77cc229716b28e61b23d386e84c0ac84b010c02cd46a8a6f7b1735fdf02b24" exitCode=255 Feb 26 21:55:49 crc kubenswrapper[4910]: I0226 21:55:49.085232 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"9e77cc229716b28e61b23d386e84c0ac84b010c02cd46a8a6f7b1735fdf02b24"} Feb 26 21:55:49 crc kubenswrapper[4910]: I0226 21:55:49.085269 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a40ccdd9349631fd981b22379e818c212d9c104da690ac6546fd45b33b1f5ddc"} Feb 26 21:55:49 crc kubenswrapper[4910]: I0226 21:55:49.085376 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:49 crc kubenswrapper[4910]: I0226 21:55:49.086207 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:49 crc kubenswrapper[4910]: I0226 21:55:49.086240 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:49 crc kubenswrapper[4910]: I0226 21:55:49.086253 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:49 crc kubenswrapper[4910]: I0226 21:55:49.815836 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:49Z is after 2026-02-23T05:33:13Z Feb 26 21:55:50 crc kubenswrapper[4910]: E0226 21:55:50.525518 4910 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:50Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1897ea97bbee7d29 default 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.807509801 +0000 UTC m=+0.887000372,LastTimestamp:2026-02-26 21:55:15.807509801 +0000 UTC m=+0.887000372,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:55:50 crc kubenswrapper[4910]: I0226 21:55:50.816375 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:50Z is after 2026-02-23T05:33:13Z Feb 26 21:55:50 crc kubenswrapper[4910]: I0226 21:55:50.929459 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:50 crc kubenswrapper[4910]: I0226 21:55:50.931264 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:50 crc kubenswrapper[4910]: I0226 21:55:50.931318 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:50 crc kubenswrapper[4910]: I0226 21:55:50.931336 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:50 crc kubenswrapper[4910]: I0226 21:55:50.931369 4910 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 21:55:50 crc kubenswrapper[4910]: E0226 21:55:50.933654 4910 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:50Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 26 21:55:50 crc kubenswrapper[4910]: E0226 21:55:50.936550 4910 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:50Z is after 2026-02-23T05:33:13Z" node="crc" Feb 26 21:55:51 crc kubenswrapper[4910]: I0226 21:55:51.815691 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:51Z is after 2026-02-23T05:33:13Z Feb 26 21:55:52 crc kubenswrapper[4910]: I0226 21:55:52.815629 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:52Z is after 2026-02-23T05:33:13Z Feb 26 21:55:53 crc kubenswrapper[4910]: I0226 21:55:53.598744 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 21:55:53 crc kubenswrapper[4910]: I0226 21:55:53.598958 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:53 crc kubenswrapper[4910]: I0226 21:55:53.600644 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:53 crc kubenswrapper[4910]: I0226 21:55:53.600717 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 21:55:53 crc kubenswrapper[4910]: I0226 21:55:53.600747 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:53 crc kubenswrapper[4910]: I0226 21:55:53.814813 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:53Z is after 2026-02-23T05:33:13Z Feb 26 21:55:54 crc kubenswrapper[4910]: I0226 21:55:54.815107 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:54Z is after 2026-02-23T05:33:13Z Feb 26 21:55:54 crc kubenswrapper[4910]: I0226 21:55:54.828766 4910 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 21:55:54 crc kubenswrapper[4910]: E0226 21:55:54.834702 4910 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:54Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 21:55:54 crc kubenswrapper[4910]: E0226 21:55:54.835969 4910 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError" Feb 26 
21:55:55 crc kubenswrapper[4910]: I0226 21:55:55.815633 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:55Z is after 2026-02-23T05:33:13Z Feb 26 21:55:55 crc kubenswrapper[4910]: I0226 21:55:55.901229 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:55 crc kubenswrapper[4910]: I0226 21:55:55.902848 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:55 crc kubenswrapper[4910]: I0226 21:55:55.902911 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:55 crc kubenswrapper[4910]: I0226 21:55:55.902929 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:55 crc kubenswrapper[4910]: I0226 21:55:55.903825 4910 scope.go:117] "RemoveContainer" containerID="900b80f58fe669b61d631e6f83af2b45af9b43e5acd1c07b651f090825400a55" Feb 26 21:55:56 crc kubenswrapper[4910]: E0226 21:55:56.015452 4910 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 21:55:56 crc kubenswrapper[4910]: I0226 21:55:56.423481 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 21:55:56 crc kubenswrapper[4910]: I0226 21:55:56.423673 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:56 crc kubenswrapper[4910]: I0226 21:55:56.425145 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 
21:55:56 crc kubenswrapper[4910]: I0226 21:55:56.425316 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:56 crc kubenswrapper[4910]: I0226 21:55:56.425338 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:56 crc kubenswrapper[4910]: W0226 21:55:56.594374 4910 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:56Z is after 2026-02-23T05:33:13Z Feb 26 21:55:56 crc kubenswrapper[4910]: E0226 21:55:56.594498 4910 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:56Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 21:55:56 crc kubenswrapper[4910]: I0226 21:55:56.814037 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:56Z is after 2026-02-23T05:33:13Z Feb 26 21:55:57 crc kubenswrapper[4910]: I0226 21:55:57.113049 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 26 21:55:57 crc kubenswrapper[4910]: I0226 21:55:57.114082 4910 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 26 21:55:57 crc kubenswrapper[4910]: I0226 21:55:57.117658 4910 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="94324dea403866bbde2a36c9155f63dd6168b224e9d26ac1cf7af1e4de1d48a6" exitCode=255 Feb 26 21:55:57 crc kubenswrapper[4910]: I0226 21:55:57.117720 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"94324dea403866bbde2a36c9155f63dd6168b224e9d26ac1cf7af1e4de1d48a6"} Feb 26 21:55:57 crc kubenswrapper[4910]: I0226 21:55:57.117771 4910 scope.go:117] "RemoveContainer" containerID="900b80f58fe669b61d631e6f83af2b45af9b43e5acd1c07b651f090825400a55" Feb 26 21:55:57 crc kubenswrapper[4910]: I0226 21:55:57.121142 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:57 crc kubenswrapper[4910]: I0226 21:55:57.125939 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:57 crc kubenswrapper[4910]: I0226 21:55:57.126014 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:57 crc kubenswrapper[4910]: I0226 21:55:57.126037 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:57 crc kubenswrapper[4910]: I0226 21:55:57.127010 4910 scope.go:117] "RemoveContainer" containerID="94324dea403866bbde2a36c9155f63dd6168b224e9d26ac1cf7af1e4de1d48a6" Feb 26 21:55:57 crc kubenswrapper[4910]: E0226 21:55:57.127422 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s 
restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 21:55:57 crc kubenswrapper[4910]: I0226 21:55:57.815377 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:57Z is after 2026-02-23T05:33:13Z Feb 26 21:55:57 crc kubenswrapper[4910]: I0226 21:55:57.937101 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:57 crc kubenswrapper[4910]: I0226 21:55:57.938960 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:57 crc kubenswrapper[4910]: I0226 21:55:57.939019 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:57 crc kubenswrapper[4910]: I0226 21:55:57.939037 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:57 crc kubenswrapper[4910]: I0226 21:55:57.939143 4910 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 21:55:57 crc kubenswrapper[4910]: E0226 21:55:57.939878 4910 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:57Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 26 21:55:57 crc kubenswrapper[4910]: E0226 21:55:57.944257 4910 kubelet_node_status.go:99] "Unable to register node with API 
server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:57Z is after 2026-02-23T05:33:13Z" node="crc" Feb 26 21:55:58 crc kubenswrapper[4910]: I0226 21:55:58.123383 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 26 21:55:58 crc kubenswrapper[4910]: I0226 21:55:58.234950 4910 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 21:55:58 crc kubenswrapper[4910]: I0226 21:55:58.235210 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:55:58 crc kubenswrapper[4910]: I0226 21:55:58.236770 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:55:58 crc kubenswrapper[4910]: I0226 21:55:58.236968 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:55:58 crc kubenswrapper[4910]: I0226 21:55:58.237114 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:55:58 crc kubenswrapper[4910]: I0226 21:55:58.238259 4910 scope.go:117] "RemoveContainer" containerID="94324dea403866bbde2a36c9155f63dd6168b224e9d26ac1cf7af1e4de1d48a6" Feb 26 21:55:58 crc kubenswrapper[4910]: E0226 21:55:58.238559 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 
21:55:58 crc kubenswrapper[4910]: I0226 21:55:58.814752 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:58Z is after 2026-02-23T05:33:13Z Feb 26 21:55:59 crc kubenswrapper[4910]: I0226 21:55:59.423623 4910 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 21:55:59 crc kubenswrapper[4910]: I0226 21:55:59.423786 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 21:55:59 crc kubenswrapper[4910]: I0226 21:55:59.815442 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:55:59Z is after 2026-02-23T05:33:13Z Feb 26 21:56:00 crc kubenswrapper[4910]: E0226 21:56:00.531688 4910 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:56:00Z is after 
2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1897ea97bbee7d29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.807509801 +0000 UTC m=+0.887000372,LastTimestamp:2026-02-26 21:55:15.807509801 +0000 UTC m=+0.887000372,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:00 crc kubenswrapper[4910]: I0226 21:56:00.815048 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:56:00Z is after 2026-02-23T05:33:13Z Feb 26 21:56:01 crc kubenswrapper[4910]: I0226 21:56:01.691270 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 21:56:01 crc kubenswrapper[4910]: I0226 21:56:01.691500 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:56:01 crc kubenswrapper[4910]: I0226 21:56:01.693125 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:01 crc kubenswrapper[4910]: I0226 21:56:01.693222 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:01 crc kubenswrapper[4910]: I0226 21:56:01.693241 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:01 crc kubenswrapper[4910]: I0226 21:56:01.694022 4910 scope.go:117] "RemoveContainer" 
containerID="94324dea403866bbde2a36c9155f63dd6168b224e9d26ac1cf7af1e4de1d48a6" Feb 26 21:56:01 crc kubenswrapper[4910]: E0226 21:56:01.694331 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 21:56:01 crc kubenswrapper[4910]: W0226 21:56:01.814383 4910 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:56:01Z is after 2026-02-23T05:33:13Z Feb 26 21:56:01 crc kubenswrapper[4910]: E0226 21:56:01.814518 4910 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:56:01Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 21:56:01 crc kubenswrapper[4910]: I0226 21:56:01.817620 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:56:01Z is after 2026-02-23T05:33:13Z Feb 26 21:56:02 crc kubenswrapper[4910]: W0226 21:56:02.351379 4910 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:56:02Z is after 2026-02-23T05:33:13Z Feb 26 21:56:02 crc kubenswrapper[4910]: E0226 21:56:02.351493 4910 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:56:02Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 21:56:02 crc kubenswrapper[4910]: I0226 21:56:02.814300 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:56:02Z is after 2026-02-23T05:33:13Z Feb 26 21:56:03 crc kubenswrapper[4910]: I0226 21:56:03.815220 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:56:03Z is after 2026-02-23T05:33:13Z Feb 26 21:56:04 crc kubenswrapper[4910]: I0226 21:56:04.815201 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:56:04Z is after 2026-02-23T05:33:13Z Feb 26 21:56:04 crc 
kubenswrapper[4910]: I0226 21:56:04.944712 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:56:04 crc kubenswrapper[4910]: E0226 21:56:04.944751 4910 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:56:04Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 26 21:56:04 crc kubenswrapper[4910]: I0226 21:56:04.946987 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:04 crc kubenswrapper[4910]: I0226 21:56:04.947093 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:04 crc kubenswrapper[4910]: I0226 21:56:04.947127 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:04 crc kubenswrapper[4910]: I0226 21:56:04.947206 4910 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 21:56:04 crc kubenswrapper[4910]: E0226 21:56:04.952077 4910 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:56:04Z is after 2026-02-23T05:33:13Z" node="crc" Feb 26 21:56:05 crc kubenswrapper[4910]: I0226 21:56:05.815401 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:56:05Z is after 2026-02-23T05:33:13Z Feb 26 21:56:06 crc 
kubenswrapper[4910]: E0226 21:56:06.016461 4910 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 21:56:06 crc kubenswrapper[4910]: W0226 21:56:06.238126 4910 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:56:06Z is after 2026-02-23T05:33:13Z Feb 26 21:56:06 crc kubenswrapper[4910]: E0226 21:56:06.238303 4910 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:56:06Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 21:56:06 crc kubenswrapper[4910]: I0226 21:56:06.815340 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:56:06Z is after 2026-02-23T05:33:13Z Feb 26 21:56:07 crc kubenswrapper[4910]: I0226 21:56:07.815787 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:56:07Z is after 2026-02-23T05:33:13Z Feb 26 21:56:08 crc kubenswrapper[4910]: I0226 21:56:08.815070 4910 csi_plugin.go:884] Failed to contact API server when 
waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:56:08Z is after 2026-02-23T05:33:13Z Feb 26 21:56:09 crc kubenswrapper[4910]: I0226 21:56:09.423461 4910 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 21:56:09 crc kubenswrapper[4910]: I0226 21:56:09.423601 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 21:56:09 crc kubenswrapper[4910]: I0226 21:56:09.814856 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:56:09Z is after 2026-02-23T05:33:13Z Feb 26 21:56:10 crc kubenswrapper[4910]: I0226 21:56:10.432504 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 21:56:10 crc kubenswrapper[4910]: I0226 21:56:10.432718 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:56:10 crc kubenswrapper[4910]: I0226 21:56:10.434333 4910 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:10 crc kubenswrapper[4910]: I0226 21:56:10.434395 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:10 crc kubenswrapper[4910]: I0226 21:56:10.434416 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:10 crc kubenswrapper[4910]: E0226 21:56:10.537511 4910 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:56:10Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1897ea97bbee7d29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.807509801 +0000 UTC m=+0.887000372,LastTimestamp:2026-02-26 21:55:15.807509801 +0000 UTC m=+0.887000372,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:10 crc kubenswrapper[4910]: I0226 21:56:10.815878 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:56:10Z is after 2026-02-23T05:33:13Z Feb 26 21:56:11 crc kubenswrapper[4910]: I0226 21:56:11.814622 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:56:11Z is after 2026-02-23T05:33:13Z Feb 26 21:56:11 crc kubenswrapper[4910]: E0226 21:56:11.952021 4910 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:56:11Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 26 21:56:11 crc kubenswrapper[4910]: I0226 21:56:11.953212 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:56:11 crc kubenswrapper[4910]: I0226 21:56:11.955002 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:11 crc kubenswrapper[4910]: I0226 21:56:11.955080 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:11 crc kubenswrapper[4910]: I0226 21:56:11.955103 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:11 crc kubenswrapper[4910]: I0226 21:56:11.955141 4910 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 21:56:11 crc kubenswrapper[4910]: E0226 21:56:11.961526 4910 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:56:11Z is after 2026-02-23T05:33:13Z" node="crc" Feb 26 21:56:12 crc kubenswrapper[4910]: I0226 21:56:12.818023 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 21:56:13 crc kubenswrapper[4910]: I0226 21:56:13.818556 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 21:56:13 crc kubenswrapper[4910]: I0226 21:56:13.900966 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:56:13 crc kubenswrapper[4910]: I0226 21:56:13.902555 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:13 crc kubenswrapper[4910]: I0226 21:56:13.902604 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:13 crc kubenswrapper[4910]: I0226 21:56:13.902622 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:13 crc kubenswrapper[4910]: I0226 21:56:13.903453 4910 scope.go:117] "RemoveContainer" containerID="94324dea403866bbde2a36c9155f63dd6168b224e9d26ac1cf7af1e4de1d48a6" Feb 26 21:56:13 crc kubenswrapper[4910]: E0226 21:56:13.903752 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 21:56:14 crc kubenswrapper[4910]: I0226 21:56:14.817039 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" 
cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 21:56:15 crc kubenswrapper[4910]: I0226 21:56:15.819121 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 21:56:16 crc kubenswrapper[4910]: E0226 21:56:16.017422 4910 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 21:56:16 crc kubenswrapper[4910]: I0226 21:56:16.815037 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 21:56:17 crc kubenswrapper[4910]: I0226 21:56:17.817558 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 21:56:18 crc kubenswrapper[4910]: I0226 21:56:18.818135 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 21:56:18 crc kubenswrapper[4910]: I0226 21:56:18.961660 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:56:18 crc kubenswrapper[4910]: E0226 21:56:18.962815 4910 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 26 21:56:18 
crc kubenswrapper[4910]: I0226 21:56:18.963496 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:18 crc kubenswrapper[4910]: I0226 21:56:18.963577 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:18 crc kubenswrapper[4910]: I0226 21:56:18.963606 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:18 crc kubenswrapper[4910]: I0226 21:56:18.963657 4910 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 21:56:18 crc kubenswrapper[4910]: E0226 21:56:18.970338 4910 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 26 21:56:19 crc kubenswrapper[4910]: I0226 21:56:19.295268 4910 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:39400->192.168.126.11:10357: read: connection reset by peer" start-of-body= Feb 26 21:56:19 crc kubenswrapper[4910]: I0226 21:56:19.295944 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:39400->192.168.126.11:10357: read: connection reset by peer" Feb 26 21:56:19 crc kubenswrapper[4910]: I0226 21:56:19.296045 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 21:56:19 crc kubenswrapper[4910]: I0226 
21:56:19.296268 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:56:19 crc kubenswrapper[4910]: I0226 21:56:19.298470 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:19 crc kubenswrapper[4910]: I0226 21:56:19.300246 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:19 crc kubenswrapper[4910]: I0226 21:56:19.300757 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:19 crc kubenswrapper[4910]: I0226 21:56:19.302474 4910 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"a40ccdd9349631fd981b22379e818c212d9c104da690ac6546fd45b33b1f5ddc"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 26 21:56:19 crc kubenswrapper[4910]: I0226 21:56:19.303301 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://a40ccdd9349631fd981b22379e818c212d9c104da690ac6546fd45b33b1f5ddc" gracePeriod=30 Feb 26 21:56:19 crc kubenswrapper[4910]: I0226 21:56:19.818907 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 21:56:20 crc kubenswrapper[4910]: I0226 21:56:20.192092 4910 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 26 21:56:20 crc kubenswrapper[4910]: I0226 21:56:20.193048 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 26 21:56:20 crc kubenswrapper[4910]: I0226 21:56:20.193548 4910 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="a40ccdd9349631fd981b22379e818c212d9c104da690ac6546fd45b33b1f5ddc" exitCode=255 Feb 26 21:56:20 crc kubenswrapper[4910]: I0226 21:56:20.193599 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"a40ccdd9349631fd981b22379e818c212d9c104da690ac6546fd45b33b1f5ddc"} Feb 26 21:56:20 crc kubenswrapper[4910]: I0226 21:56:20.193633 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"40e48c9b7f7bf5d94ac47531e1fe19bb941e3fe6f8021659885fc524fef9df83"} Feb 26 21:56:20 crc kubenswrapper[4910]: I0226 21:56:20.193655 4910 scope.go:117] "RemoveContainer" containerID="9e77cc229716b28e61b23d386e84c0ac84b010c02cd46a8a6f7b1735fdf02b24" Feb 26 21:56:20 crc kubenswrapper[4910]: I0226 21:56:20.193822 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:56:20 crc kubenswrapper[4910]: I0226 21:56:20.196181 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:20 crc kubenswrapper[4910]: I0226 21:56:20.196212 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 21:56:20 crc kubenswrapper[4910]: I0226 21:56:20.196222 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.542240 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897ea97bbee7d29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.807509801 +0000 UTC m=+0.887000372,LastTimestamp:2026-02-26 21:55:15.807509801 +0000 UTC m=+0.887000372,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.546373 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897ea97c110a90f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.893635343 +0000 UTC m=+0.973125894,LastTimestamp:2026-02-26 21:55:15.893635343 +0000 UTC m=+0.973125894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 
21:56:20.551767 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897ea97c110ebc8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.893652424 +0000 UTC m=+0.973142975,LastTimestamp:2026-02-26 21:55:15.893652424 +0000 UTC m=+0.973142975,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.559068 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897ea97c1111586 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.89366311 +0000 UTC m=+0.973153661,LastTimestamp:2026-02-26 21:55:15.89366311 +0000 UTC m=+0.973153661,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.563419 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{crc.1897ea97c7e2149c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:16.008023196 +0000 UTC m=+1.087513777,LastTimestamp:2026-02-26 21:55:16.008023196 +0000 UTC m=+1.087513777,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.567908 4910 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897ea97c110a90f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897ea97c110a90f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.893635343 +0000 UTC m=+0.973125894,LastTimestamp:2026-02-26 21:55:16.105272174 +0000 UTC m=+1.184762755,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.572201 4910 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897ea97c110ebc8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897ea97c110ebc8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.893652424 +0000 UTC m=+0.973142975,LastTimestamp:2026-02-26 21:55:16.105316723 +0000 UTC m=+1.184807304,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.577213 4910 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897ea97c1111586\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897ea97c1111586 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.89366311 +0000 UTC m=+0.973153661,LastTimestamp:2026-02-26 21:55:16.105337396 +0000 UTC m=+1.184827967,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.584365 4910 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897ea97c110a90f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897ea97c110a90f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status 
is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.893635343 +0000 UTC m=+0.973125894,LastTimestamp:2026-02-26 21:55:16.203503193 +0000 UTC m=+1.282993774,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.589976 4910 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897ea97c110ebc8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897ea97c110ebc8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.893652424 +0000 UTC m=+0.973142975,LastTimestamp:2026-02-26 21:55:16.203533632 +0000 UTC m=+1.283024213,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.595952 4910 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897ea97c1111586\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897ea97c1111586 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.89366311 +0000 UTC 
m=+0.973153661,LastTimestamp:2026-02-26 21:55:16.203548281 +0000 UTC m=+1.283038862,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.602252 4910 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897ea97c110a90f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897ea97c110a90f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.893635343 +0000 UTC m=+0.973125894,LastTimestamp:2026-02-26 21:55:16.205249744 +0000 UTC m=+1.284740315,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.609079 4910 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897ea97c110ebc8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897ea97c110ebc8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.893652424 +0000 UTC m=+0.973142975,LastTimestamp:2026-02-26 21:55:16.205284636 +0000 UTC m=+1.284775217,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.616460 4910 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897ea97c1111586\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897ea97c1111586 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.89366311 +0000 UTC m=+0.973153661,LastTimestamp:2026-02-26 21:55:16.205304679 +0000 UTC m=+1.284795250,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.622897 4910 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897ea97c110a90f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897ea97c110a90f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.893635343 +0000 UTC m=+0.973125894,LastTimestamp:2026-02-26 21:55:16.205426787 +0000 UTC m=+1.284917358,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.627351 4910 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897ea97c110ebc8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897ea97c110ebc8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.893652424 +0000 UTC m=+0.973142975,LastTimestamp:2026-02-26 21:55:16.205452183 +0000 UTC m=+1.284942765,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.633682 4910 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897ea97c1111586\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897ea97c1111586 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.89366311 +0000 UTC m=+0.973153661,LastTimestamp:2026-02-26 21:55:16.205468504 +0000 UTC m=+1.284959085,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.637635 4910 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897ea97c110a90f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in 
API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897ea97c110a90f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.893635343 +0000 UTC m=+0.973125894,LastTimestamp:2026-02-26 21:55:16.206574564 +0000 UTC m=+1.286065135,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.643478 4910 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897ea97c110ebc8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897ea97c110ebc8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.893652424 +0000 UTC m=+0.973142975,LastTimestamp:2026-02-26 21:55:16.206619142 +0000 UTC m=+1.286109723,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.648976 4910 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897ea97c1111586\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897ea97c1111586 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.89366311 +0000 UTC m=+0.973153661,LastTimestamp:2026-02-26 21:55:16.206640866 +0000 UTC m=+1.286131437,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.653120 4910 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897ea97c110a90f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897ea97c110a90f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.893635343 +0000 UTC m=+0.973125894,LastTimestamp:2026-02-26 21:55:16.208003851 +0000 UTC m=+1.287494432,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.659304 4910 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897ea97c110ebc8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897ea97c110ebc8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.893652424 +0000 UTC m=+0.973142975,LastTimestamp:2026-02-26 21:55:16.208028646 +0000 UTC m=+1.287519218,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.665895 4910 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897ea97c1111586\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897ea97c1111586 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.89366311 +0000 UTC m=+0.973153661,LastTimestamp:2026-02-26 21:55:16.208063569 +0000 UTC m=+1.287554150,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.669115 4910 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897ea97c110a90f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897ea97c110a90f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.893635343 +0000 UTC 
m=+0.973125894,LastTimestamp:2026-02-26 21:55:16.209294308 +0000 UTC m=+1.288784889,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.674949 4910 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897ea97c110ebc8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897ea97c110ebc8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:15.893652424 +0000 UTC m=+0.973142975,LastTimestamp:2026-02-26 21:55:16.209325108 +0000 UTC m=+1.288815689,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.681341 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897ea97ebf6bcee openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:16.613356782 +0000 UTC m=+1.692847353,LastTimestamp:2026-02-26 21:55:16.613356782 +0000 UTC m=+1.692847353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.687458 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897ea97ebfb395c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:16.61365078 +0000 UTC m=+1.693141351,LastTimestamp:2026-02-26 21:55:16.61365078 +0000 UTC m=+1.693141351,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.692825 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897ea97ec605e94 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:16.620279444 +0000 UTC m=+1.699770025,LastTimestamp:2026-02-26 21:55:16.620279444 +0000 UTC m=+1.699770025,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.697904 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897ea97ec9c4b8c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:16.624206732 +0000 UTC m=+1.703697313,LastTimestamp:2026-02-26 21:55:16.624206732 +0000 UTC m=+1.703697313,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.701414 4910 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897ea97ed208103 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:16.632871171 +0000 UTC m=+1.712361762,LastTimestamp:2026-02-26 21:55:16.632871171 +0000 UTC m=+1.712361762,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.705658 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897ea98125b5d8d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:17.257485709 +0000 UTC m=+2.336976260,LastTimestamp:2026-02-26 21:55:17.257485709 +0000 UTC 
m=+2.336976260,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.711259 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897ea98125f8a36 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:17.257759286 +0000 UTC m=+2.337249837,LastTimestamp:2026-02-26 21:55:17.257759286 +0000 UTC m=+2.337249837,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.716641 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897ea98126035e5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:17.257803237 +0000 UTC m=+2.337293788,LastTimestamp:2026-02-26 
21:55:17.257803237 +0000 UTC m=+2.337293788,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.722426 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897ea9812634302 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:17.258003202 +0000 UTC m=+2.337493763,LastTimestamp:2026-02-26 21:55:17.258003202 +0000 UTC m=+2.337493763,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.727803 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897ea981266aae9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:17.258226409 +0000 UTC 
m=+2.337716990,LastTimestamp:2026-02-26 21:55:17.258226409 +0000 UTC m=+2.337716990,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.733331 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897ea98131f9f1f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:17.270347551 +0000 UTC m=+2.349838102,LastTimestamp:2026-02-26 21:55:17.270347551 +0000 UTC m=+2.349838102,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.738653 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897ea981338b27a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:17.271990906 +0000 UTC m=+2.351481477,LastTimestamp:2026-02-26 21:55:17.271990906 +0000 UTC m=+2.351481477,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.741802 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897ea981343f36b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:17.272728427 +0000 UTC m=+2.352218978,LastTimestamp:2026-02-26 21:55:17.272728427 +0000 UTC m=+2.352218978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.745441 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897ea9813554a5e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:17.273864798 +0000 UTC m=+2.353355349,LastTimestamp:2026-02-26 21:55:17.273864798 +0000 UTC m=+2.353355349,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.751969 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897ea9813592725 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:17.274117925 +0000 UTC m=+2.353608486,LastTimestamp:2026-02-26 21:55:17.274117925 +0000 UTC m=+2.353608486,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.757027 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897ea98136b9c78 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:17.275327608 +0000 UTC m=+2.354818189,LastTimestamp:2026-02-26 21:55:17.275327608 +0000 UTC m=+2.354818189,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.763699 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897ea9825f1f1c5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:17.586121157 +0000 UTC m=+2.665611738,LastTimestamp:2026-02-26 21:55:17.586121157 +0000 UTC m=+2.665611738,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.769657 4910 event.go:359] "Server rejected event (will not 
retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897ea9826c00a79 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:17.599627897 +0000 UTC m=+2.679118478,LastTimestamp:2026-02-26 21:55:17.599627897 +0000 UTC m=+2.679118478,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.775587 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897ea9826d72b62 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:17.60114365 +0000 UTC 
m=+2.680634231,LastTimestamp:2026-02-26 21:55:17.60114365 +0000 UTC m=+2.680634231,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.781560 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897ea98360256eb openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:17.855631083 +0000 UTC m=+2.935121624,LastTimestamp:2026-02-26 21:55:17.855631083 +0000 UTC m=+2.935121624,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.786180 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897ea98368b27dd openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:17.864597469 +0000 UTC m=+2.944088010,LastTimestamp:2026-02-26 21:55:17.864597469 +0000 UTC m=+2.944088010,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.792058 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897ea98369b312b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:17.865648427 +0000 UTC m=+2.945138968,LastTimestamp:2026-02-26 21:55:17.865648427 +0000 UTC m=+2.945138968,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.799942 4910 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897ea98398c5acd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:17.915007693 +0000 UTC m=+2.994498234,LastTimestamp:2026-02-26 21:55:17.915007693 +0000 UTC m=+2.994498234,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.807776 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897ea9839a05258 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:17.916316248 +0000 UTC 
m=+2.995806779,LastTimestamp:2026-02-26 21:55:17.916316248 +0000 UTC m=+2.995806779,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: I0226 21:56:20.816479 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.816377 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897ea9839fa7a5a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:17.92222473 +0000 UTC m=+3.001715271,LastTimestamp:2026-02-26 21:55:17.92222473 +0000 UTC m=+3.001715271,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.820844 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.1897ea983a437a2b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:17.927008811 +0000 UTC m=+3.006499352,LastTimestamp:2026-02-26 21:55:17.927008811 +0000 UTC m=+3.006499352,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.827693 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897ea9840b7dee0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:18.035300064 +0000 UTC m=+3.114790605,LastTimestamp:2026-02-26 21:55:18.035300064 +0000 UTC m=+3.114790605,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 
21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.833012 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897ea98415b8a25 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:18.046026277 +0000 UTC m=+3.125516818,LastTimestamp:2026-02-26 21:55:18.046026277 +0000 UTC m=+3.125516818,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.844359 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897ea9844c7bf63 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:18.103449443 +0000 UTC m=+3.182939994,LastTimestamp:2026-02-26 21:55:18.103449443 +0000 UTC m=+3.182939994,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.849266 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897ea9844d47968 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:18.104283496 +0000 UTC m=+3.183774057,LastTimestamp:2026-02-26 21:55:18.104283496 +0000 UTC m=+3.183774057,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.853899 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897ea9845bca7ae openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:18.119499694 +0000 UTC m=+3.198990235,LastTimestamp:2026-02-26 21:55:18.119499694 +0000 
UTC m=+3.198990235,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.859771 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897ea9845cc4dff openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:18.120525311 +0000 UTC m=+3.200015852,LastTimestamp:2026-02-26 21:55:18.120525311 +0000 UTC m=+3.200015852,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.864374 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897ea984642b0e3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created 
container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:18.128283875 +0000 UTC m=+3.207774416,LastTimestamp:2026-02-26 21:55:18.128283875 +0000 UTC m=+3.207774416,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.869500 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897ea9846d9f509 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:18.138197257 +0000 UTC m=+3.217687798,LastTimestamp:2026-02-26 21:55:18.138197257 +0000 UTC m=+3.217687798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.873873 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897ea9846ec4d42 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:18.13939949 +0000 UTC m=+3.218890031,LastTimestamp:2026-02-26 21:55:18.13939949 +0000 UTC m=+3.218890031,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.878506 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897ea98471d5dcc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:18.142614988 +0000 UTC m=+3.222105529,LastTimestamp:2026-02-26 21:55:18.142614988 +0000 UTC m=+3.222105529,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.883153 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897ea98472bc1ba openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:18.143558074 +0000 UTC m=+3.223048615,LastTimestamp:2026-02-26 21:55:18.143558074 +0000 UTC m=+3.223048615,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.887740 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897ea9848d4f76b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:18.171424619 +0000 UTC m=+3.250915170,LastTimestamp:2026-02-26 21:55:18.171424619 +0000 UTC m=+3.250915170,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.892582 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897ea9850bdf2a9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:18.304133801 +0000 UTC m=+3.383624342,LastTimestamp:2026-02-26 21:55:18.304133801 +0000 UTC m=+3.383624342,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.896895 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897ea9850f5eeed openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:18.307802861 +0000 UTC m=+3.387293402,LastTimestamp:2026-02-26 21:55:18.307802861 +0000 UTC m=+3.387293402,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 
21:56:20.901746 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897ea9851556573 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:18.314059123 +0000 UTC m=+3.393549664,LastTimestamp:2026-02-26 21:55:18.314059123 +0000 UTC m=+3.393549664,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.906440 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897ea985164d9e2 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:18.31507197 +0000 UTC 
m=+3.394562511,LastTimestamp:2026-02-26 21:55:18.31507197 +0000 UTC m=+3.394562511,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.911461 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897ea9851f47abf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:18.324484799 +0000 UTC m=+3.403975340,LastTimestamp:2026-02-26 21:55:18.324484799 +0000 UTC m=+3.403975340,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.916553 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897ea9851fe0443 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:18.325109827 +0000 UTC m=+3.404600358,LastTimestamp:2026-02-26 21:55:18.325109827 +0000 UTC m=+3.404600358,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.921000 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897ea985ca0cd23 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:18.503550243 +0000 UTC m=+3.583040784,LastTimestamp:2026-02-26 21:55:18.503550243 +0000 UTC m=+3.583040784,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.925295 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897ea985cc09ec0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:18.50563552 +0000 UTC m=+3.585126061,LastTimestamp:2026-02-26 21:55:18.50563552 +0000 UTC m=+3.585126061,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.929788 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897ea985d8067b8 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:18.518204344 +0000 UTC m=+3.597694905,LastTimestamp:2026-02-26 21:55:18.518204344 +0000 UTC m=+3.597694905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.934755 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897ea985da08d35 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:18.520311093 +0000 UTC m=+3.599801634,LastTimestamp:2026-02-26 21:55:18.520311093 +0000 UTC m=+3.599801634,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.939221 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897ea985db322b8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:18.521529016 +0000 UTC m=+3.601019557,LastTimestamp:2026-02-26 21:55:18.521529016 +0000 UTC m=+3.601019557,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.943487 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897ea986ad204ab openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:18.741656747 +0000 UTC m=+3.821147298,LastTimestamp:2026-02-26 21:55:18.741656747 +0000 UTC m=+3.821147298,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.947693 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897ea986c3043c0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:18.764610496 +0000 UTC m=+3.844101057,LastTimestamp:2026-02-26 
21:55:18.764610496 +0000 UTC m=+3.844101057,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.951778 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897ea986c42f7f8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:18.76583628 +0000 UTC m=+3.845326841,LastTimestamp:2026-02-26 21:55:18.76583628 +0000 UTC m=+3.845326841,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.958322 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897ea98773d26f0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:18.950004464 +0000 UTC m=+4.029495045,LastTimestamp:2026-02-26 21:55:18.950004464 +0000 UTC m=+4.029495045,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.962367 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897ea988ece5078 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:19.345393784 +0000 UTC m=+4.424884365,LastTimestamp:2026-02-26 21:55:19.345393784 +0000 UTC m=+4.424884365,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.967904 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897ea988ffec25a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:19.365345882 +0000 UTC m=+4.444836463,LastTimestamp:2026-02-26 21:55:19.365345882 +0000 UTC m=+4.444836463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.973415 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897ea98900fb15a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:19.366455642 +0000 UTC m=+4.445946213,LastTimestamp:2026-02-26 21:55:19.366455642 +0000 UTC m=+4.445946213,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.977537 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897ea98911d1d99 openshift-etcd 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:19.384112537 +0000 UTC m=+4.463603108,LastTimestamp:2026-02-26 21:55:19.384112537 +0000 UTC m=+4.463603108,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.983376 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897ea98b3a941da openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:19.963722202 +0000 UTC m=+5.043212783,LastTimestamp:2026-02-26 21:55:19.963722202 +0000 UTC m=+5.043212783,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.989113 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.1897ea98c1df46ad openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:20.202143405 +0000 UTC m=+5.281633976,LastTimestamp:2026-02-26 21:55:20.202143405 +0000 UTC m=+5.281633976,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.993030 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897ea98c2898691 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:20.213300881 +0000 UTC m=+5.292791462,LastTimestamp:2026-02-26 21:55:20.213300881 +0000 UTC m=+5.292791462,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:20 crc kubenswrapper[4910]: E0226 21:56:20.998557 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897ea98c2a72dc1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:20.215244225 +0000 UTC m=+5.294734806,LastTimestamp:2026-02-26 21:55:20.215244225 +0000 UTC m=+5.294734806,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.001976 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897ea98cff280d9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:20.438284505 +0000 UTC m=+5.517775076,LastTimestamp:2026-02-26 21:55:20.438284505 +0000 UTC m=+5.517775076,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.006721 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897ea98d10c6d06 openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:20.456760582 +0000 UTC m=+5.536251143,LastTimestamp:2026-02-26 21:55:20.456760582 +0000 UTC m=+5.536251143,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.012425 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897ea98d11f9237 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:20.458015287 +0000 UTC m=+5.537505868,LastTimestamp:2026-02-26 21:55:20.458015287 +0000 UTC m=+5.537505868,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.016689 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.1897ea98e112af6a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:20.72560625 +0000 UTC m=+5.805096831,LastTimestamp:2026-02-26 21:55:20.72560625 +0000 UTC m=+5.805096831,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.023235 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897ea98e1f90c3e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:20.740703294 +0000 UTC m=+5.820193875,LastTimestamp:2026-02-26 21:55:20.740703294 +0000 UTC m=+5.820193875,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.027599 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897ea98e20dab41 openshift-etcd 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:20.742054721 +0000 UTC m=+5.821545302,LastTimestamp:2026-02-26 21:55:20.742054721 +0000 UTC m=+5.821545302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.033658 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897ea98f1463c60 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:20.997420128 +0000 UTC m=+6.076910709,LastTimestamp:2026-02-26 21:55:20.997420128 +0000 UTC m=+6.076910709,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.039729 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.1897ea98f233e0f8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:21.012994296 +0000 UTC m=+6.092484867,LastTimestamp:2026-02-26 21:55:21.012994296 +0000 UTC m=+6.092484867,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.047911 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897ea98f24b08fb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:21.014511867 +0000 UTC m=+6.094002448,LastTimestamp:2026-02-26 21:55:21.014511867 +0000 UTC m=+6.094002448,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.055876 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API 
group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897ea99016eee84 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:21.268522628 +0000 UTC m=+6.348013209,LastTimestamp:2026-02-26 21:55:21.268522628 +0000 UTC m=+6.348013209,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.064219 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897ea990279f3a0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:21.286022048 +0000 UTC m=+6.365512629,LastTimestamp:2026-02-26 21:55:21.286022048 +0000 UTC m=+6.365512629,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.077447 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 21:56:21 crc 
kubenswrapper[4910]: &Event{ObjectMeta:{kube-controller-manager-crc.1897ea9ae7832ef1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Feb 26 21:56:21 crc kubenswrapper[4910]: body: Feb 26 21:56:21 crc kubenswrapper[4910]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:29.423576817 +0000 UTC m=+14.503067398,LastTimestamp:2026-02-26 21:55:29.423576817 +0000 UTC m=+14.503067398,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 21:56:21 crc kubenswrapper[4910]: > Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.083423 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897ea9ae784b79c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:29.42367734 +0000 UTC 
m=+14.503167911,LastTimestamp:2026-02-26 21:55:29.42367734 +0000 UTC m=+14.503167911,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.089093 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 26 21:56:21 crc kubenswrapper[4910]: &Event{ObjectMeta:{kube-apiserver-crc.1897ea9b27e3498a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 26 21:56:21 crc kubenswrapper[4910]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 26 21:56:21 crc kubenswrapper[4910]: Feb 26 21:56:21 crc kubenswrapper[4910]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:30.503616906 +0000 UTC m=+15.583107447,LastTimestamp:2026-02-26 21:55:30.503616906 +0000 UTC m=+15.583107447,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 21:56:21 crc kubenswrapper[4910]: > Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.095375 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.1897ea9b27e45c28 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:30.503687208 +0000 UTC m=+15.583177759,LastTimestamp:2026-02-26 21:55:30.503687208 +0000 UTC m=+15.583177759,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.100948 4910 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897ea9b27e3498a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 26 21:56:21 crc kubenswrapper[4910]: &Event{ObjectMeta:{kube-apiserver-crc.1897ea9b27e3498a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 26 21:56:21 crc kubenswrapper[4910]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 26 21:56:21 crc kubenswrapper[4910]: Feb 26 21:56:21 crc kubenswrapper[4910]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:30.503616906 +0000 UTC 
m=+15.583107447,LastTimestamp:2026-02-26 21:55:30.51837296 +0000 UTC m=+15.597863541,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 21:56:21 crc kubenswrapper[4910]: > Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.106468 4910 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897ea9b27e45c28\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897ea9b27e45c28 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:30.503687208 +0000 UTC m=+15.583177759,LastTimestamp:2026-02-26 21:55:30.518436812 +0000 UTC m=+15.597927393,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.113912 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 26 21:56:21 crc kubenswrapper[4910]: &Event{ObjectMeta:{kube-apiserver-crc.1897ea9b43c52d73 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500 Feb 26 21:56:21 crc kubenswrapper[4910]: body: [+]ping ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]log ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]etcd ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/generic-apiserver-start-informers ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/priority-and-fairness-filter ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/start-apiextensions-informers ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/start-apiextensions-controllers ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/crd-informer-synced ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/start-system-namespaces-controller ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 26 21:56:21 crc kubenswrapper[4910]: 
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 26 21:56:21 crc kubenswrapper[4910]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 26 21:56:21 crc kubenswrapper[4910]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/bootstrap-controller ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/start-kube-aggregator-informers ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/apiservice-registration-controller ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/apiservice-discovery-controller ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]autoregister-completion ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/apiservice-openapi-controller ok Feb 26 21:56:21 crc kubenswrapper[4910]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 26 21:56:21 crc kubenswrapper[4910]: livez check failed Feb 26 21:56:21 crc kubenswrapper[4910]: Feb 26 21:56:21 crc kubenswrapper[4910]: 
,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:30.971405683 +0000 UTC m=+16.050896264,LastTimestamp:2026-02-26 21:55:30.971405683 +0000 UTC m=+16.050896264,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 21:56:21 crc kubenswrapper[4910]: > Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.119839 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897ea9b43c63dea openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:30.971475434 +0000 UTC m=+16.050966015,LastTimestamp:2026-02-26 21:55:30.971475434 +0000 UTC m=+16.050966015,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.126628 4910 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897ea986c42f7f8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897ea986c42f7f8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:18.76583628 +0000 UTC m=+3.845326841,LastTimestamp:2026-02-26 21:55:31.017231999 +0000 UTC m=+16.096722570,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.136916 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 21:56:21 crc kubenswrapper[4910]: &Event{ObjectMeta:{kube-controller-manager-crc.1897ea9d3b8cb446 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 26 21:56:21 crc kubenswrapper[4910]: body: Feb 26 21:56:21 crc kubenswrapper[4910]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:39.42342151 +0000 UTC m=+24.502912131,LastTimestamp:2026-02-26 21:55:39.42342151 +0000 UTC 
m=+24.502912131,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 21:56:21 crc kubenswrapper[4910]: > Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.143577 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897ea9d3b8e6681 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:39.423532673 +0000 UTC m=+24.503023254,LastTimestamp:2026-02-26 21:55:39.423532673 +0000 UTC m=+24.503023254,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.152057 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 21:56:21 crc kubenswrapper[4910]: &Event{ObjectMeta:{kube-controller-manager-crc.1897ea9f6016615f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": read tcp 192.168.126.11:54060->192.168.126.11:10357: read: connection reset by peer Feb 26 21:56:21 crc kubenswrapper[4910]: body: Feb 26 21:56:21 crc kubenswrapper[4910]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:48.626358623 +0000 UTC m=+33.705849204,LastTimestamp:2026-02-26 21:55:48.626358623 +0000 UTC m=+33.705849204,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 21:56:21 crc kubenswrapper[4910]: > Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.158815 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897ea9f6017cafc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:54060->192.168.126.11:10357: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:48.626451196 +0000 UTC m=+33.705941777,LastTimestamp:2026-02-26 21:55:48.626451196 +0000 UTC m=+33.705941777,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.164107 4910 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897ea9f6045f64b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:48.629476939 +0000 UTC m=+33.708967520,LastTimestamp:2026-02-26 21:55:48.629476939 +0000 UTC m=+33.708967520,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.170716 4910 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897ea981338b27a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897ea981338b27a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:17.271990906 +0000 UTC m=+2.351481477,LastTimestamp:2026-02-26 21:55:48.651258947 +0000 UTC m=+33.730749528,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.176952 4910 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897ea9825f1f1c5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897ea9825f1f1c5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:17.586121157 +0000 UTC m=+2.665611738,LastTimestamp:2026-02-26 21:55:48.875096649 +0000 UTC m=+33.954587200,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.183365 4910 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897ea9826c00a79\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.1897ea9826c00a79 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:17.599627897 +0000 UTC m=+2.679118478,LastTimestamp:2026-02-26 21:55:48.885653449 +0000 UTC m=+33.965144000,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.192984 4910 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897ea9d3b8cb446\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 21:56:21 crc kubenswrapper[4910]: &Event{ObjectMeta:{kube-controller-manager-crc.1897ea9d3b8cb446 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 26 21:56:21 crc kubenswrapper[4910]: body: Feb 26 21:56:21 crc kubenswrapper[4910]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:39.42342151 +0000 UTC m=+24.502912131,LastTimestamp:2026-02-26 
21:55:59.423749433 +0000 UTC m=+44.503240014,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 21:56:21 crc kubenswrapper[4910]: > Feb 26 21:56:21 crc kubenswrapper[4910]: I0226 21:56:21.198120 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.200662 4910 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897ea9d3b8e6681\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897ea9d3b8e6681 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:39.423532673 +0000 UTC m=+24.503023254,LastTimestamp:2026-02-26 21:55:59.423824675 +0000 UTC m=+44.503315246,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 21:56:21 crc kubenswrapper[4910]: E0226 21:56:21.203001 4910 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897ea9d3b8cb446\" is forbidden: User \"system:anonymous\" cannot patch resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 21:56:21 crc kubenswrapper[4910]: &Event{ObjectMeta:{kube-controller-manager-crc.1897ea9d3b8cb446 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 26 21:56:21 crc kubenswrapper[4910]: body: Feb 26 21:56:21 crc kubenswrapper[4910]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 21:55:39.42342151 +0000 UTC m=+24.502912131,LastTimestamp:2026-02-26 21:56:09.423557156 +0000 UTC m=+54.503047737,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 21:56:21 crc kubenswrapper[4910]: > Feb 26 21:56:21 crc kubenswrapper[4910]: I0226 21:56:21.815535 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 21:56:21 crc kubenswrapper[4910]: I0226 21:56:21.900714 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:56:21 crc kubenswrapper[4910]: I0226 21:56:21.902664 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:21 crc kubenswrapper[4910]: I0226 21:56:21.902724 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 26 21:56:21 crc kubenswrapper[4910]: I0226 21:56:21.902745 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:22 crc kubenswrapper[4910]: I0226 21:56:22.816721 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 21:56:23 crc kubenswrapper[4910]: W0226 21:56:23.151545 4910 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 26 21:56:23 crc kubenswrapper[4910]: E0226 21:56:23.151677 4910 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 26 21:56:23 crc kubenswrapper[4910]: I0226 21:56:23.598462 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 21:56:23 crc kubenswrapper[4910]: I0226 21:56:23.598804 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:56:23 crc kubenswrapper[4910]: I0226 21:56:23.600400 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:23 crc kubenswrapper[4910]: I0226 21:56:23.600441 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:23 crc kubenswrapper[4910]: I0226 21:56:23.600451 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:23 
crc kubenswrapper[4910]: I0226 21:56:23.817981 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 21:56:24 crc kubenswrapper[4910]: I0226 21:56:24.820036 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 21:56:25 crc kubenswrapper[4910]: I0226 21:56:25.814378 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 21:56:25 crc kubenswrapper[4910]: E0226 21:56:25.968334 4910 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 26 21:56:25 crc kubenswrapper[4910]: I0226 21:56:25.971556 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:56:25 crc kubenswrapper[4910]: I0226 21:56:25.973049 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:25 crc kubenswrapper[4910]: I0226 21:56:25.973097 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:25 crc kubenswrapper[4910]: I0226 21:56:25.973110 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:25 crc kubenswrapper[4910]: I0226 21:56:25.973143 4910 kubelet_node_status.go:76] 
"Attempting to register node" node="crc" Feb 26 21:56:25 crc kubenswrapper[4910]: E0226 21:56:25.978308 4910 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 26 21:56:26 crc kubenswrapper[4910]: E0226 21:56:26.018363 4910 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 21:56:26 crc kubenswrapper[4910]: I0226 21:56:26.423307 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 21:56:26 crc kubenswrapper[4910]: I0226 21:56:26.423575 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:56:26 crc kubenswrapper[4910]: I0226 21:56:26.424948 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:26 crc kubenswrapper[4910]: I0226 21:56:26.424985 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:26 crc kubenswrapper[4910]: I0226 21:56:26.424995 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:26 crc kubenswrapper[4910]: I0226 21:56:26.427596 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 21:56:26 crc kubenswrapper[4910]: I0226 21:56:26.813708 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 21:56:26 crc kubenswrapper[4910]: I0226 21:56:26.838061 4910 certificate_manager.go:356] 
kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 21:56:26 crc kubenswrapper[4910]: I0226 21:56:26.852372 4910 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 26 21:56:26 crc kubenswrapper[4910]: I0226 21:56:26.901278 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:56:26 crc kubenswrapper[4910]: I0226 21:56:26.902269 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:26 crc kubenswrapper[4910]: I0226 21:56:26.902302 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:26 crc kubenswrapper[4910]: I0226 21:56:26.902310 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:26 crc kubenswrapper[4910]: I0226 21:56:26.902757 4910 scope.go:117] "RemoveContainer" containerID="94324dea403866bbde2a36c9155f63dd6168b224e9d26ac1cf7af1e4de1d48a6" Feb 26 21:56:27 crc kubenswrapper[4910]: I0226 21:56:27.221361 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 26 21:56:27 crc kubenswrapper[4910]: I0226 21:56:27.223292 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a"} Feb 26 21:56:27 crc kubenswrapper[4910]: I0226 21:56:27.223359 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:56:27 crc kubenswrapper[4910]: I0226 21:56:27.223437 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Feb 26 21:56:27 crc kubenswrapper[4910]: I0226 21:56:27.224060 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:27 crc kubenswrapper[4910]: I0226 21:56:27.224094 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:27 crc kubenswrapper[4910]: I0226 21:56:27.224102 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:27 crc kubenswrapper[4910]: I0226 21:56:27.224369 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:27 crc kubenswrapper[4910]: I0226 21:56:27.224399 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:27 crc kubenswrapper[4910]: I0226 21:56:27.224409 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:27 crc kubenswrapper[4910]: I0226 21:56:27.814860 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 21:56:28 crc kubenswrapper[4910]: I0226 21:56:28.227025 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 21:56:28 crc kubenswrapper[4910]: I0226 21:56:28.227606 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 26 21:56:28 crc kubenswrapper[4910]: I0226 21:56:28.229731 4910 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a" exitCode=255 Feb 26 21:56:28 crc kubenswrapper[4910]: I0226 21:56:28.229779 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a"} Feb 26 21:56:28 crc kubenswrapper[4910]: I0226 21:56:28.229815 4910 scope.go:117] "RemoveContainer" containerID="94324dea403866bbde2a36c9155f63dd6168b224e9d26ac1cf7af1e4de1d48a6" Feb 26 21:56:28 crc kubenswrapper[4910]: I0226 21:56:28.231109 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:56:28 crc kubenswrapper[4910]: I0226 21:56:28.235380 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:28 crc kubenswrapper[4910]: I0226 21:56:28.235447 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:28 crc kubenswrapper[4910]: I0226 21:56:28.235488 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:28 crc kubenswrapper[4910]: I0226 21:56:28.237990 4910 scope.go:117] "RemoveContainer" containerID="549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a" Feb 26 21:56:28 crc kubenswrapper[4910]: E0226 21:56:28.241844 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 21:56:28 crc kubenswrapper[4910]: I0226 21:56:28.814276 4910 csi_plugin.go:884] 
Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 21:56:29 crc kubenswrapper[4910]: I0226 21:56:29.234890 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 21:56:29 crc kubenswrapper[4910]: I0226 21:56:29.817021 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 21:56:30 crc kubenswrapper[4910]: I0226 21:56:30.814361 4910 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 21:56:30 crc kubenswrapper[4910]: I0226 21:56:30.898956 4910 csr.go:261] certificate signing request csr-cngdz is approved, waiting to be issued Feb 26 21:56:30 crc kubenswrapper[4910]: I0226 21:56:30.907428 4910 csr.go:257] certificate signing request csr-cngdz is issued Feb 26 21:56:31 crc kubenswrapper[4910]: I0226 21:56:31.010630 4910 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 26 21:56:31 crc kubenswrapper[4910]: I0226 21:56:31.643976 4910 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 26 21:56:31 crc kubenswrapper[4910]: I0226 21:56:31.691642 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 21:56:31 crc kubenswrapper[4910]: I0226 21:56:31.691878 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Feb 26 21:56:31 crc kubenswrapper[4910]: I0226 21:56:31.693541 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:31 crc kubenswrapper[4910]: I0226 21:56:31.693598 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:31 crc kubenswrapper[4910]: I0226 21:56:31.693615 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:31 crc kubenswrapper[4910]: I0226 21:56:31.694614 4910 scope.go:117] "RemoveContainer" containerID="549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a" Feb 26 21:56:31 crc kubenswrapper[4910]: E0226 21:56:31.694960 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 21:56:31 crc kubenswrapper[4910]: I0226 21:56:31.909727 4910 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-13 18:38:52.874375227 +0000 UTC Feb 26 21:56:31 crc kubenswrapper[4910]: I0226 21:56:31.909785 4910 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7700h42m20.964595321s for next certificate rotation Feb 26 21:56:32 crc kubenswrapper[4910]: I0226 21:56:32.978486 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:56:32 crc kubenswrapper[4910]: I0226 21:56:32.980390 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:32 crc 
kubenswrapper[4910]: I0226 21:56:32.980476 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:32 crc kubenswrapper[4910]: I0226 21:56:32.980505 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:32 crc kubenswrapper[4910]: I0226 21:56:32.980673 4910 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 21:56:32 crc kubenswrapper[4910]: I0226 21:56:32.991432 4910 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 26 21:56:32 crc kubenswrapper[4910]: I0226 21:56:32.991825 4910 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 26 21:56:32 crc kubenswrapper[4910]: E0226 21:56:32.991867 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 26 21:56:32 crc kubenswrapper[4910]: I0226 21:56:32.996457 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:32 crc kubenswrapper[4910]: I0226 21:56:32.996509 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:32 crc kubenswrapper[4910]: I0226 21:56:32.996526 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:32 crc kubenswrapper[4910]: I0226 21:56:32.996551 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:32 crc kubenswrapper[4910]: I0226 21:56:32.996569 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:32Z","lastTransitionTime":"2026-02-26T21:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:56:33 crc kubenswrapper[4910]: E0226 21:56:33.016915 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:33 crc kubenswrapper[4910]: I0226 21:56:33.024828 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:33 crc kubenswrapper[4910]: I0226 21:56:33.024890 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:33 crc kubenswrapper[4910]: I0226 21:56:33.024908 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:33 crc kubenswrapper[4910]: I0226 21:56:33.024936 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:33 crc kubenswrapper[4910]: I0226 21:56:33.024956 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:33Z","lastTransitionTime":"2026-02-26T21:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:33 crc kubenswrapper[4910]: E0226 21:56:33.037198 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:33 crc kubenswrapper[4910]: I0226 21:56:33.049398 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:33 crc kubenswrapper[4910]: I0226 21:56:33.049532 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:33 crc kubenswrapper[4910]: I0226 21:56:33.049556 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:33 crc kubenswrapper[4910]: I0226 21:56:33.049580 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:33 crc kubenswrapper[4910]: I0226 21:56:33.049598 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:33Z","lastTransitionTime":"2026-02-26T21:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:33 crc kubenswrapper[4910]: E0226 21:56:33.065766 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:33 crc kubenswrapper[4910]: I0226 21:56:33.077875 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:33 crc kubenswrapper[4910]: I0226 21:56:33.077955 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:33 crc kubenswrapper[4910]: I0226 21:56:33.077978 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:33 crc kubenswrapper[4910]: I0226 21:56:33.078013 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:33 crc kubenswrapper[4910]: I0226 21:56:33.078037 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:33Z","lastTransitionTime":"2026-02-26T21:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:33 crc kubenswrapper[4910]: E0226 21:56:33.094450 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:33 crc kubenswrapper[4910]: E0226 21:56:33.094673 4910 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 21:56:33 crc kubenswrapper[4910]: E0226 21:56:33.094713 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:33 crc kubenswrapper[4910]: E0226 21:56:33.195051 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:33 crc kubenswrapper[4910]: E0226 21:56:33.295939 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:33 crc kubenswrapper[4910]: E0226 21:56:33.396756 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:33 crc kubenswrapper[4910]: E0226 21:56:33.497580 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:33 crc kubenswrapper[4910]: E0226 21:56:33.597725 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:33 crc kubenswrapper[4910]: I0226 21:56:33.604323 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 21:56:33 crc kubenswrapper[4910]: I0226 21:56:33.604522 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:56:33 crc kubenswrapper[4910]: I0226 21:56:33.605965 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:33 crc 
kubenswrapper[4910]: I0226 21:56:33.606019 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:33 crc kubenswrapper[4910]: I0226 21:56:33.606042 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:33 crc kubenswrapper[4910]: E0226 21:56:33.698402 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:33 crc kubenswrapper[4910]: E0226 21:56:33.799456 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:33 crc kubenswrapper[4910]: E0226 21:56:33.900454 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:34 crc kubenswrapper[4910]: E0226 21:56:34.001404 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:34 crc kubenswrapper[4910]: E0226 21:56:34.102014 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:34 crc kubenswrapper[4910]: E0226 21:56:34.202230 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:34 crc kubenswrapper[4910]: E0226 21:56:34.303361 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:34 crc kubenswrapper[4910]: E0226 21:56:34.404100 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:34 crc kubenswrapper[4910]: E0226 21:56:34.505265 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:34 crc kubenswrapper[4910]: E0226 21:56:34.606150 4910 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 26 21:56:34 crc kubenswrapper[4910]: E0226 21:56:34.706534 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:34 crc kubenswrapper[4910]: E0226 21:56:34.807123 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:34 crc kubenswrapper[4910]: E0226 21:56:34.908323 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:35 crc kubenswrapper[4910]: E0226 21:56:35.009045 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:35 crc kubenswrapper[4910]: E0226 21:56:35.109560 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:35 crc kubenswrapper[4910]: E0226 21:56:35.209854 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:35 crc kubenswrapper[4910]: E0226 21:56:35.310945 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:35 crc kubenswrapper[4910]: E0226 21:56:35.412025 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:35 crc kubenswrapper[4910]: E0226 21:56:35.512148 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:35 crc kubenswrapper[4910]: E0226 21:56:35.612615 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:35 crc kubenswrapper[4910]: E0226 21:56:35.713642 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:35 crc kubenswrapper[4910]: E0226 21:56:35.814070 4910 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:35 crc kubenswrapper[4910]: E0226 21:56:35.915221 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:36 crc kubenswrapper[4910]: E0226 21:56:36.015334 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:36 crc kubenswrapper[4910]: E0226 21:56:36.018610 4910 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 21:56:36 crc kubenswrapper[4910]: E0226 21:56:36.115702 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:36 crc kubenswrapper[4910]: E0226 21:56:36.216050 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:36 crc kubenswrapper[4910]: E0226 21:56:36.316786 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:36 crc kubenswrapper[4910]: E0226 21:56:36.416923 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:36 crc kubenswrapper[4910]: E0226 21:56:36.517804 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:36 crc kubenswrapper[4910]: E0226 21:56:36.619084 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:36 crc kubenswrapper[4910]: E0226 21:56:36.719952 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:36 crc kubenswrapper[4910]: E0226 21:56:36.820135 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:36 crc 
kubenswrapper[4910]: E0226 21:56:36.921281 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:37 crc kubenswrapper[4910]: E0226 21:56:37.021473 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:37 crc kubenswrapper[4910]: E0226 21:56:37.122410 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:37 crc kubenswrapper[4910]: E0226 21:56:37.223033 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:37 crc kubenswrapper[4910]: E0226 21:56:37.323299 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:37 crc kubenswrapper[4910]: E0226 21:56:37.423473 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:37 crc kubenswrapper[4910]: E0226 21:56:37.524308 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:37 crc kubenswrapper[4910]: E0226 21:56:37.625335 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:37 crc kubenswrapper[4910]: E0226 21:56:37.725924 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:37 crc kubenswrapper[4910]: E0226 21:56:37.826387 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:37 crc kubenswrapper[4910]: E0226 21:56:37.926626 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:38 crc kubenswrapper[4910]: E0226 21:56:38.027768 4910 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 26 21:56:38 crc kubenswrapper[4910]: E0226 21:56:38.128733 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:38 crc kubenswrapper[4910]: E0226 21:56:38.229672 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:38 crc kubenswrapper[4910]: I0226 21:56:38.235051 4910 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 21:56:38 crc kubenswrapper[4910]: I0226 21:56:38.235445 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:56:38 crc kubenswrapper[4910]: I0226 21:56:38.237029 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:38 crc kubenswrapper[4910]: I0226 21:56:38.237082 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:38 crc kubenswrapper[4910]: I0226 21:56:38.237099 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:38 crc kubenswrapper[4910]: I0226 21:56:38.237955 4910 scope.go:117] "RemoveContainer" containerID="549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a" Feb 26 21:56:38 crc kubenswrapper[4910]: E0226 21:56:38.238346 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 21:56:38 crc kubenswrapper[4910]: E0226 21:56:38.330698 4910 kubelet_node_status.go:503] "Error getting the 
current node from lister" err="node \"crc\" not found" Feb 26 21:56:38 crc kubenswrapper[4910]: E0226 21:56:38.431827 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:38 crc kubenswrapper[4910]: E0226 21:56:38.532984 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:38 crc kubenswrapper[4910]: E0226 21:56:38.633960 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:38 crc kubenswrapper[4910]: E0226 21:56:38.734906 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:38 crc kubenswrapper[4910]: E0226 21:56:38.835664 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:38 crc kubenswrapper[4910]: E0226 21:56:38.935896 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:39 crc kubenswrapper[4910]: E0226 21:56:39.036779 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:39 crc kubenswrapper[4910]: E0226 21:56:39.137877 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:39 crc kubenswrapper[4910]: E0226 21:56:39.238802 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:39 crc kubenswrapper[4910]: E0226 21:56:39.339597 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:39 crc kubenswrapper[4910]: E0226 21:56:39.439761 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:39 crc kubenswrapper[4910]: E0226 21:56:39.540585 4910 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:39 crc kubenswrapper[4910]: E0226 21:56:39.641369 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:39 crc kubenswrapper[4910]: E0226 21:56:39.741736 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:39 crc kubenswrapper[4910]: E0226 21:56:39.841965 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:39 crc kubenswrapper[4910]: E0226 21:56:39.942647 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:40 crc kubenswrapper[4910]: E0226 21:56:40.043776 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:40 crc kubenswrapper[4910]: E0226 21:56:40.144755 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:40 crc kubenswrapper[4910]: E0226 21:56:40.246243 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:40 crc kubenswrapper[4910]: E0226 21:56:40.346835 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:40 crc kubenswrapper[4910]: E0226 21:56:40.447020 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:40 crc kubenswrapper[4910]: E0226 21:56:40.547876 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:40 crc kubenswrapper[4910]: E0226 21:56:40.648568 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:40 crc 
kubenswrapper[4910]: E0226 21:56:40.749326 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:40 crc kubenswrapper[4910]: E0226 21:56:40.849958 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:40 crc kubenswrapper[4910]: E0226 21:56:40.951038 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:41 crc kubenswrapper[4910]: E0226 21:56:41.051905 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:41 crc kubenswrapper[4910]: E0226 21:56:41.153054 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:41 crc kubenswrapper[4910]: E0226 21:56:41.253300 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:41 crc kubenswrapper[4910]: E0226 21:56:41.353504 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:41 crc kubenswrapper[4910]: E0226 21:56:41.454469 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:41 crc kubenswrapper[4910]: E0226 21:56:41.555322 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:41 crc kubenswrapper[4910]: E0226 21:56:41.656057 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:41 crc kubenswrapper[4910]: I0226 21:56:41.689654 4910 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 26 21:56:41 crc kubenswrapper[4910]: E0226 21:56:41.756699 4910 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 26 21:56:41 crc kubenswrapper[4910]: E0226 21:56:41.857498 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:41 crc kubenswrapper[4910]: E0226 21:56:41.958476 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:42 crc kubenswrapper[4910]: E0226 21:56:42.059609 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:42 crc kubenswrapper[4910]: E0226 21:56:42.160705 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:42 crc kubenswrapper[4910]: E0226 21:56:42.261328 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:42 crc kubenswrapper[4910]: E0226 21:56:42.362345 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:42 crc kubenswrapper[4910]: E0226 21:56:42.463454 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:42 crc kubenswrapper[4910]: E0226 21:56:42.564072 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:42 crc kubenswrapper[4910]: E0226 21:56:42.664501 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:42 crc kubenswrapper[4910]: E0226 21:56:42.765666 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:42 crc kubenswrapper[4910]: E0226 21:56:42.866253 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:42 crc kubenswrapper[4910]: E0226 21:56:42.967425 4910 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:43 crc kubenswrapper[4910]: E0226 21:56:43.067902 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:43 crc kubenswrapper[4910]: E0226 21:56:43.168550 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:43 crc kubenswrapper[4910]: E0226 21:56:43.269439 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:43 crc kubenswrapper[4910]: E0226 21:56:43.370675 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:43 crc kubenswrapper[4910]: E0226 21:56:43.431962 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 26 21:56:43 crc kubenswrapper[4910]: I0226 21:56:43.437885 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:43 crc kubenswrapper[4910]: I0226 21:56:43.437942 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:43 crc kubenswrapper[4910]: I0226 21:56:43.437959 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:43 crc kubenswrapper[4910]: I0226 21:56:43.437985 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:43 crc kubenswrapper[4910]: I0226 21:56:43.438004 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:43Z","lastTransitionTime":"2026-02-26T21:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:56:43 crc kubenswrapper[4910]: E0226 21:56:43.458325 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:43 crc kubenswrapper[4910]: I0226 21:56:43.462986 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:43 crc kubenswrapper[4910]: I0226 21:56:43.463033 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:43 crc kubenswrapper[4910]: I0226 21:56:43.463059 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:43 crc kubenswrapper[4910]: I0226 21:56:43.463088 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:43 crc kubenswrapper[4910]: I0226 21:56:43.463111 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:43Z","lastTransitionTime":"2026-02-26T21:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:43 crc kubenswrapper[4910]: E0226 21:56:43.479683 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:43 crc kubenswrapper[4910]: I0226 21:56:43.484139 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:43 crc kubenswrapper[4910]: I0226 21:56:43.484217 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:43 crc kubenswrapper[4910]: I0226 21:56:43.484234 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:43 crc kubenswrapper[4910]: I0226 21:56:43.484253 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:43 crc kubenswrapper[4910]: I0226 21:56:43.484267 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:43Z","lastTransitionTime":"2026-02-26T21:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:43 crc kubenswrapper[4910]: E0226 21:56:43.499720 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:43 crc kubenswrapper[4910]: I0226 21:56:43.506103 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:43 crc kubenswrapper[4910]: I0226 21:56:43.506224 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:43 crc kubenswrapper[4910]: I0226 21:56:43.506276 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:43 crc kubenswrapper[4910]: I0226 21:56:43.506312 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:43 crc kubenswrapper[4910]: I0226 21:56:43.506338 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:43Z","lastTransitionTime":"2026-02-26T21:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:43 crc kubenswrapper[4910]: E0226 21:56:43.524342 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:43 crc kubenswrapper[4910]: E0226 21:56:43.524566 4910 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 21:56:43 crc kubenswrapper[4910]: E0226 21:56:43.524606 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:43 crc kubenswrapper[4910]: E0226 21:56:43.625097 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:43 crc kubenswrapper[4910]: E0226 21:56:43.725596 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:43 crc kubenswrapper[4910]: E0226 21:56:43.826149 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:43 crc kubenswrapper[4910]: E0226 21:56:43.927102 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:44 crc kubenswrapper[4910]: E0226 21:56:44.027454 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:44 crc kubenswrapper[4910]: E0226 21:56:44.127824 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:44 crc kubenswrapper[4910]: E0226 
21:56:44.228563 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:44 crc kubenswrapper[4910]: E0226 21:56:44.329694 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:44 crc kubenswrapper[4910]: E0226 21:56:44.429851 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:44 crc kubenswrapper[4910]: I0226 21:56:44.430826 4910 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 26 21:56:44 crc kubenswrapper[4910]: E0226 21:56:44.530816 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:44 crc kubenswrapper[4910]: E0226 21:56:44.631797 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:44 crc kubenswrapper[4910]: E0226 21:56:44.732879 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:44 crc kubenswrapper[4910]: E0226 21:56:44.833270 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:44 crc kubenswrapper[4910]: I0226 21:56:44.901069 4910 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 21:56:44 crc kubenswrapper[4910]: I0226 21:56:44.902149 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:44 crc kubenswrapper[4910]: I0226 21:56:44.902225 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:44 crc kubenswrapper[4910]: I0226 21:56:44.902242 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 
21:56:44 crc kubenswrapper[4910]: E0226 21:56:44.934337 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:45 crc kubenswrapper[4910]: E0226 21:56:45.035227 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:45 crc kubenswrapper[4910]: E0226 21:56:45.135346 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:45 crc kubenswrapper[4910]: E0226 21:56:45.236630 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:45 crc kubenswrapper[4910]: E0226 21:56:45.337235 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:45 crc kubenswrapper[4910]: E0226 21:56:45.437646 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:45 crc kubenswrapper[4910]: E0226 21:56:45.537978 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:45 crc kubenswrapper[4910]: E0226 21:56:45.638556 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:45 crc kubenswrapper[4910]: E0226 21:56:45.739045 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:45 crc kubenswrapper[4910]: E0226 21:56:45.839523 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:45 crc kubenswrapper[4910]: E0226 21:56:45.939890 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:46 crc kubenswrapper[4910]: E0226 21:56:46.018876 4910 eviction_manager.go:285] "Eviction manager: failed to get summary 
stats" err="failed to get node info: node \"crc\" not found" Feb 26 21:56:46 crc kubenswrapper[4910]: E0226 21:56:46.040373 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:46 crc kubenswrapper[4910]: E0226 21:56:46.141389 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:46 crc kubenswrapper[4910]: E0226 21:56:46.241800 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:46 crc kubenswrapper[4910]: E0226 21:56:46.342854 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:46 crc kubenswrapper[4910]: E0226 21:56:46.443050 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:46 crc kubenswrapper[4910]: E0226 21:56:46.543258 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:46 crc kubenswrapper[4910]: E0226 21:56:46.644302 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:46 crc kubenswrapper[4910]: E0226 21:56:46.744529 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:46 crc kubenswrapper[4910]: E0226 21:56:46.844786 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:46 crc kubenswrapper[4910]: E0226 21:56:46.945269 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:47 crc kubenswrapper[4910]: E0226 21:56:47.045816 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:47 crc kubenswrapper[4910]: E0226 21:56:47.146359 4910 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:47 crc kubenswrapper[4910]: E0226 21:56:47.246475 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:47 crc kubenswrapper[4910]: E0226 21:56:47.347129 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:47 crc kubenswrapper[4910]: E0226 21:56:47.448215 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:47 crc kubenswrapper[4910]: E0226 21:56:47.549108 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:47 crc kubenswrapper[4910]: E0226 21:56:47.650196 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:47 crc kubenswrapper[4910]: E0226 21:56:47.750659 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:47 crc kubenswrapper[4910]: E0226 21:56:47.851401 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:47 crc kubenswrapper[4910]: E0226 21:56:47.952120 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:48 crc kubenswrapper[4910]: E0226 21:56:48.052317 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:48 crc kubenswrapper[4910]: E0226 21:56:48.153378 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:48 crc kubenswrapper[4910]: E0226 21:56:48.253913 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:48 crc 
kubenswrapper[4910]: E0226 21:56:48.354800 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:48 crc kubenswrapper[4910]: E0226 21:56:48.455305 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:48 crc kubenswrapper[4910]: E0226 21:56:48.555831 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:48 crc kubenswrapper[4910]: E0226 21:56:48.656826 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:48 crc kubenswrapper[4910]: E0226 21:56:48.757600 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:48 crc kubenswrapper[4910]: E0226 21:56:48.858387 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:48 crc kubenswrapper[4910]: E0226 21:56:48.958809 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:49 crc kubenswrapper[4910]: E0226 21:56:49.059322 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:49 crc kubenswrapper[4910]: E0226 21:56:49.159500 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:49 crc kubenswrapper[4910]: E0226 21:56:49.260195 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:49 crc kubenswrapper[4910]: E0226 21:56:49.361124 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:49 crc kubenswrapper[4910]: E0226 21:56:49.461864 4910 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 26 21:56:49 crc kubenswrapper[4910]: E0226 21:56:49.562660 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:49 crc kubenswrapper[4910]: E0226 21:56:49.663483 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:49 crc kubenswrapper[4910]: E0226 21:56:49.764476 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:49 crc kubenswrapper[4910]: E0226 21:56:49.865215 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:49 crc kubenswrapper[4910]: E0226 21:56:49.965892 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:50 crc kubenswrapper[4910]: E0226 21:56:50.066416 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:50 crc kubenswrapper[4910]: E0226 21:56:50.166563 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:50 crc kubenswrapper[4910]: E0226 21:56:50.266962 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:50 crc kubenswrapper[4910]: E0226 21:56:50.367796 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:50 crc kubenswrapper[4910]: E0226 21:56:50.468869 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:50 crc kubenswrapper[4910]: E0226 21:56:50.570017 4910 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:50 crc kubenswrapper[4910]: E0226 21:56:50.670590 4910 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.764851 4910 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.773387 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.773434 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.773448 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.773464 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.773476 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:50Z","lastTransitionTime":"2026-02-26T21:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.849977 4910 apiserver.go:52] "Watching apiserver" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.854987 4910 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.855460 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.856252 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:56:50 crc kubenswrapper[4910]: E0226 21:56:50.856441 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.856526 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:56:50 crc kubenswrapper[4910]: E0226 21:56:50.856679 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.856258 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.856905 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.857538 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:56:50 crc kubenswrapper[4910]: E0226 21:56:50.857666 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.857750 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.858278 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.858613 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.859468 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.859662 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.859801 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.859991 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.860037 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.860150 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.860827 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.875758 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.875823 4910 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.875838 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.875855 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.875865 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:50Z","lastTransitionTime":"2026-02-26T21:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.891838 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.909635 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.918549 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.919336 4910 scope.go:117] "RemoveContainer" containerID="549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a" Feb 26 21:56:50 crc kubenswrapper[4910]: E0226 21:56:50.919675 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.923247 4910 desired_state_of_world_populator.go:154] "Finished populating initial desired state of 
world" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.926632 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.930601 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.930655 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.930693 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.930729 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.930764 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.930798 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.930828 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.930859 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.930891 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.930922 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.930952 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.930984 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931014 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931043 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931052 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931076 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931108 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931139 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931196 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931239 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931272 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931301 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931330 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931359 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931386 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931413 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 26 21:56:50 crc 
kubenswrapper[4910]: I0226 21:56:50.931439 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931471 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931500 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931527 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931556 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931589 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931623 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931655 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931687 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931719 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931750 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931776 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931804 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931832 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931859 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931892 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931925 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931958 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931990 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.932021 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.932097 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.932128 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.932220 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.932252 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.932287 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.932321 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.932353 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.932385 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.932414 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.932444 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.932479 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.932511 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.932545 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.932578 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931155 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931510 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.931922 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.932235 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.932273 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.933565 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.933806 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.933886 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.933883 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.933932 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.933958 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.934117 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.934126 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.934314 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.934302 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.934464 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.934649 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.934660 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.934741 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.934802 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.934986 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.935311 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.935398 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.935641 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.935737 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.935914 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.935953 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.936120 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.936362 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.936435 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.936586 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.936651 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.936782 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.936823 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.936961 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.937030 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.937074 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.937259 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.937356 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.938361 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.938666 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.938770 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.939449 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.939464 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.939537 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.939807 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.940239 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.940277 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.940307 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.940337 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.940370 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.940397 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.940457 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.940482 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.940510 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.940544 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.940571 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.940598 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.940644 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.940668 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.940696 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.940726 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.940900 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.940956 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.940991 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.941024 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.941051 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.941305 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.941365 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.941401 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.941434 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.941458 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.941462 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.941486 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.941516 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.941540 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.941568 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.941597 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.940087 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.941629 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.941674 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.941711 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.941752 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.941788 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.941828 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.941868 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.941903 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.941987 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.942029 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.942105 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.942141 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.942202 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.942246 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.942280 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.942328 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.942373 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.942410 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.942455 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.942500 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.942544 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.942581 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.942624 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.942664 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.942778 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.942824 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.942911 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.942979 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.943010 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.943085 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.943211 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.943285 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.943321 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.943382 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.943411 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.943470 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.943499 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.943629 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.944260 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.944338 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.944429 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.944505 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.944580 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.944676 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.942019 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.942258 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.942333 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.942441 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.942601 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.942587 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.943355 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.943500 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.944607 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.944666 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.944813 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.944926 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.945390 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.945458 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.945440 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c").
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.945501 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.945535 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.945548 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.945552 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.945677 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.945711 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.945732 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.945737 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.945771 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.945783 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.945884 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.945917 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.945950 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.945969 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.945983 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.946058 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.946215 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.946258 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.946295 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.946325 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.946436 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.946559 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.946837 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.946354 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.947085 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.947202 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 26 21:56:50 crc kubenswrapper[4910]: E0226 21:56:50.947235 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-26 21:56:51.447153736 +0000 UTC m=+96.526644297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.947858 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.948598 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.949353 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.950016 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.950321 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.949739 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.950652 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.950984 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.949435 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951055 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951050 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: 
"49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951082 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951090 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951108 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951133 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951175 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951197 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951222 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951244 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951268 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951291 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951314 4910 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951339 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951361 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951395 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951417 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951439 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951461 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951482 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951507 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951528 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951549 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951570 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951592 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951613 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951635 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951658 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951726 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 26 21:56:50 crc 
kubenswrapper[4910]: I0226 21:56:50.951749 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951771 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951791 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951811 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951833 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951857 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: 
\"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951880 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951903 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951935 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951966 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.952415 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 26 21:56:50 
crc kubenswrapper[4910]: I0226 21:56:50.952441 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.952461 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.952484 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.952506 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.952623 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.952683 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.952868 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.952944 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.952984 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953009 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953034 4910 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953058 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953084 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953111 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953136 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953178 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953633 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953696 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953768 4910 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953783 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953796 4910 reconciler_common.go:293] "Volume detached for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953809 4910 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953822 4910 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953834 4910 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953847 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953860 4910 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953874 4910 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953886 4910 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953899 4910 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953911 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953924 4910 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953936 4910 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953948 4910 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953962 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953974 4910 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953986 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953998 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.954010 4910 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.954022 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.954035 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.954049 4910 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.954060 4910 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node 
\"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.954073 4910 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.954086 4910 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.954098 4910 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.954111 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.954206 4910 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.954219 4910 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.954231 4910 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.954245 
4910 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.954258 4910 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.954274 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955027 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955042 4910 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955057 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955072 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 
21:56:50.955085 4910 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955098 4910 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955110 4910 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955121 4910 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955133 4910 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955146 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955186 4910 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955202 4910 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955214 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955227 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955239 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955251 4910 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955263 4910 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955276 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955288 4910 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on 
node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955300 4910 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955313 4910 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955325 4910 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955338 4910 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955349 4910 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955362 4910 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955374 4910 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955388 4910 reconciler_common.go:293] "Volume detached for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955401 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955414 4910 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955427 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955440 4910 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955452 4910 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955465 4910 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955477 4910 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955490 4910 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955502 4910 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955515 4910 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955527 4910 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955539 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955552 4910 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955564 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: 
\"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955576 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955589 4910 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955601 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955614 4910 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955628 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955641 4910 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955345 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.956094 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951061 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951138 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951224 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951245 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951270 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951271 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.951668 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.952408 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.952803 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953222 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953410 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.956456 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.956611 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.956638 4910 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.956700 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.956716 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.956752 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.956967 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953951 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953779 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.954198 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.954228 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.954431 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.954608 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955078 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955008 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955425 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: E0226 21:56:50.955543 4910 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 21:56:50 crc kubenswrapper[4910]: E0226 21:56:50.957590 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 21:56:51.457571782 +0000 UTC m=+96.537062313 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.957630 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955643 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955822 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.955892 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.956263 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.957148 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.958277 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.958396 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: E0226 21:56:50.958739 4910 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 21:56:50 crc kubenswrapper[4910]: E0226 21:56:50.958802 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 21:56:51.458783296 +0000 UTC m=+96.538273947 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.958803 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.959230 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.959645 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.959716 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.959803 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.959881 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.959889 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.960086 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.960265 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.960758 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.961233 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.961408 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.961578 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.961938 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.962073 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.962113 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.962371 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.962566 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.963079 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.963897 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.964121 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.965409 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.965424 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.965506 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.965871 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.965995 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.966286 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.966450 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.966858 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.967472 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.953614 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.967817 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.967939 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.969203 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.969433 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.969686 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.969703 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.969981 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.970275 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.969443 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.970397 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.970702 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.970880 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.971450 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.971461 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.971474 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.971474 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.971529 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.972130 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: E0226 21:56:50.972781 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 21:56:50 crc kubenswrapper[4910]: E0226 21:56:50.972841 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 21:56:50 crc kubenswrapper[4910]: E0226 21:56:50.972867 4910 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:56:50 crc kubenswrapper[4910]: E0226 21:56:50.972983 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 21:56:51.472949625 +0000 UTC m=+96.552440226 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.977755 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.977869 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.977897 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.981931 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.982015 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.982191 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.982311 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.982312 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.982345 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.982356 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.982371 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.982369 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.982390 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:50Z","lastTransitionTime":"2026-02-26T21:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.982396 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.982843 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.982935 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.983094 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.983269 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.983494 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.983654 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.984063 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.984078 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.984293 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.985112 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.985346 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.985758 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.986185 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.986434 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.986775 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.988562 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.988756 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.988830 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.988925 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.990287 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: E0226 21:56:50.990651 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 21:56:50 crc kubenswrapper[4910]: E0226 21:56:50.990675 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 21:56:50 crc kubenswrapper[4910]: E0226 21:56:50.990688 4910 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:56:50 crc kubenswrapper[4910]: E0226 21:56:50.990740 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 21:56:51.490723203 +0000 UTC m=+96.570213754 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.993296 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.993457 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.993704 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.995296 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.997027 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 21:56:50 crc kubenswrapper[4910]: I0226 21:56:50.997232 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.001355 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.001428 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.008569 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.019153 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.056935 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.056973 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057038 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057049 4910 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057058 4910 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057067 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057075 4910 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057083 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057092 4910 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057101 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057109 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057117 4910 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057126 4910 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057135 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on 
node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057143 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057151 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057174 4910 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057182 4910 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057190 4910 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057198 4910 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057208 4910 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057218 4910 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057229 4910 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057239 4910 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057266 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057278 4910 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057309 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057319 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057331 4910 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057350 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057360 4910 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057371 4910 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057382 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057395 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057402 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057407 4910 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057449 4910 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057457 4910 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057467 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057479 4910 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057487 4910 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057496 4910 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057505 4910 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057512 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057521 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057529 4910 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057536 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057544 4910 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057552 4910 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057560 4910 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc 
kubenswrapper[4910]: I0226 21:56:51.057568 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057325 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057575 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057618 4910 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057632 4910 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057643 4910 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057654 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 
21:56:51.057667 4910 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057678 4910 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057691 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057703 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057714 4910 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057726 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057738 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath 
\"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057749 4910 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057760 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057769 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057779 4910 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057792 4910 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057803 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057814 4910 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 
21:56:51.057825 4910 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057836 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057846 4910 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057856 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057867 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057878 4910 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057888 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057898 4910 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057908 4910 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057919 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057930 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057940 4910 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057950 4910 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057962 4910 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057973 4910 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on 
node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057984 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.057995 4910 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.058005 4910 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.058016 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.058028 4910 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.058039 4910 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.058050 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 
21:56:51.058060 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.058070 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.058081 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.058093 4910 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.058105 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.058115 4910 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.058125 4910 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.058135 4910 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.058146 4910 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.058176 4910 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.058189 4910 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.058200 4910 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.058211 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.058224 4910 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.058235 4910 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath 
\"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.058246 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.058257 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.058267 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.058279 4910 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.058290 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.058301 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.058312 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.058322 4910 
reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.058333 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.085464 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.085509 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.085527 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.085551 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.085569 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:51Z","lastTransitionTime":"2026-02-26T21:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.182941 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.188482 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.188534 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.188558 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.188590 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.188613 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:51Z","lastTransitionTime":"2026-02-26T21:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.197500 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 21:56:51 crc kubenswrapper[4910]: E0226 21:56:51.207123 4910 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 21:56:51 crc kubenswrapper[4910]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 26 21:56:51 crc kubenswrapper[4910]: set -o allexport Feb 26 21:56:51 crc kubenswrapper[4910]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 26 21:56:51 crc kubenswrapper[4910]: source /etc/kubernetes/apiserver-url.env Feb 26 21:56:51 crc kubenswrapper[4910]: else Feb 26 21:56:51 crc kubenswrapper[4910]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 26 21:56:51 crc kubenswrapper[4910]: exit 1 Feb 26 21:56:51 crc kubenswrapper[4910]: fi Feb 26 21:56:51 crc kubenswrapper[4910]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 26 21:56:51 crc kubenswrapper[4910]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 21:56:51 crc kubenswrapper[4910]: > logger="UnhandledError" Feb 26 21:56:51 crc kubenswrapper[4910]: E0226 21:56:51.209259 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.210351 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 21:56:51 crc kubenswrapper[4910]: W0226 21:56:51.212384 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-6dc63de74beb12b22ae826c86483adb01e8ce8bdbb3dc985d3d07c6eae67cde3 WatchSource:0}: Error finding container 6dc63de74beb12b22ae826c86483adb01e8ce8bdbb3dc985d3d07c6eae67cde3: Status 404 returned error can't find the container with id 6dc63de74beb12b22ae826c86483adb01e8ce8bdbb3dc985d3d07c6eae67cde3 Feb 26 21:56:51 crc kubenswrapper[4910]: E0226 21:56:51.216286 4910 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 21:56:51 crc kubenswrapper[4910]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 26 21:56:51 crc kubenswrapper[4910]: if [[ -f "/env/_master" ]]; then Feb 26 21:56:51 crc kubenswrapper[4910]: set -o allexport Feb 26 21:56:51 crc kubenswrapper[4910]: source "/env/_master" Feb 26 21:56:51 crc kubenswrapper[4910]: set +o allexport Feb 26 21:56:51 crc kubenswrapper[4910]: fi Feb 26 21:56:51 crc kubenswrapper[4910]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 26 21:56:51 crc kubenswrapper[4910]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 26 21:56:51 crc kubenswrapper[4910]: ho_enable="--enable-hybrid-overlay" Feb 26 21:56:51 crc kubenswrapper[4910]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 26 21:56:51 crc kubenswrapper[4910]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 26 21:56:51 crc kubenswrapper[4910]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 26 21:56:51 crc kubenswrapper[4910]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 26 21:56:51 crc kubenswrapper[4910]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 26 21:56:51 crc kubenswrapper[4910]: --webhook-host=127.0.0.1 \ Feb 26 21:56:51 crc kubenswrapper[4910]: --webhook-port=9743 \ Feb 26 21:56:51 crc kubenswrapper[4910]: ${ho_enable} \ Feb 26 21:56:51 crc kubenswrapper[4910]: --enable-interconnect \ Feb 26 21:56:51 crc kubenswrapper[4910]: --disable-approver \ Feb 26 21:56:51 crc kubenswrapper[4910]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 26 21:56:51 crc kubenswrapper[4910]: --wait-for-kubernetes-api=200s \ Feb 26 21:56:51 crc kubenswrapper[4910]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 26 21:56:51 crc kubenswrapper[4910]: --loglevel="${LOGLEVEL}" Feb 26 21:56:51 crc kubenswrapper[4910]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 21:56:51 crc kubenswrapper[4910]: > logger="UnhandledError" Feb 26 21:56:51 crc kubenswrapper[4910]: E0226 21:56:51.219348 4910 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 21:56:51 crc kubenswrapper[4910]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 26 21:56:51 crc 
kubenswrapper[4910]: if [[ -f "/env/_master" ]]; then Feb 26 21:56:51 crc kubenswrapper[4910]: set -o allexport Feb 26 21:56:51 crc kubenswrapper[4910]: source "/env/_master" Feb 26 21:56:51 crc kubenswrapper[4910]: set +o allexport Feb 26 21:56:51 crc kubenswrapper[4910]: fi Feb 26 21:56:51 crc kubenswrapper[4910]: Feb 26 21:56:51 crc kubenswrapper[4910]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 26 21:56:51 crc kubenswrapper[4910]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 26 21:56:51 crc kubenswrapper[4910]: --disable-webhook \ Feb 26 21:56:51 crc kubenswrapper[4910]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 26 21:56:51 crc kubenswrapper[4910]: --loglevel="${LOGLEVEL}" Feb 26 21:56:51 crc kubenswrapper[4910]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 21:56:51 crc kubenswrapper[4910]: > logger="UnhandledError" Feb 26 21:56:51 crc kubenswrapper[4910]: E0226 21:56:51.220502 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 26 21:56:51 crc kubenswrapper[4910]: W0226 21:56:51.224835 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-fb0f4f0f153c6b1ee0e26ee7548f1a5ca97abc3da27c811e762b717210e7b773 WatchSource:0}: Error finding container fb0f4f0f153c6b1ee0e26ee7548f1a5ca97abc3da27c811e762b717210e7b773: Status 404 returned error can't find the container with id fb0f4f0f153c6b1ee0e26ee7548f1a5ca97abc3da27c811e762b717210e7b773 Feb 26 21:56:51 crc kubenswrapper[4910]: E0226 21:56:51.231337 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 26 21:56:51 crc kubenswrapper[4910]: E0226 21:56:51.232521 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.291600 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.291678 
4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.291699 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.291727 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.291745 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:51Z","lastTransitionTime":"2026-02-26T21:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.304213 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"fb0f4f0f153c6b1ee0e26ee7548f1a5ca97abc3da27c811e762b717210e7b773"} Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.305982 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6dc63de74beb12b22ae826c86483adb01e8ce8bdbb3dc985d3d07c6eae67cde3"} Feb 26 21:56:51 crc kubenswrapper[4910]: E0226 21:56:51.306378 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services 
have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 26 21:56:51 crc kubenswrapper[4910]: E0226 21:56:51.307716 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 26 21:56:51 crc kubenswrapper[4910]: E0226 21:56:51.307911 4910 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 21:56:51 crc kubenswrapper[4910]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 26 21:56:51 crc kubenswrapper[4910]: if [[ -f "/env/_master" ]]; then Feb 26 21:56:51 crc kubenswrapper[4910]: set -o allexport Feb 26 21:56:51 crc kubenswrapper[4910]: source "/env/_master" Feb 26 21:56:51 crc kubenswrapper[4910]: set +o allexport Feb 26 21:56:51 crc kubenswrapper[4910]: fi Feb 26 21:56:51 crc kubenswrapper[4910]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 26 21:56:51 crc kubenswrapper[4910]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 26 21:56:51 crc kubenswrapper[4910]: ho_enable="--enable-hybrid-overlay" Feb 26 21:56:51 crc kubenswrapper[4910]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 26 21:56:51 crc kubenswrapper[4910]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 26 21:56:51 crc kubenswrapper[4910]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 26 21:56:51 crc kubenswrapper[4910]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 26 21:56:51 crc kubenswrapper[4910]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 26 21:56:51 crc kubenswrapper[4910]: --webhook-host=127.0.0.1 \ Feb 26 21:56:51 crc kubenswrapper[4910]: --webhook-port=9743 \ Feb 26 21:56:51 crc kubenswrapper[4910]: ${ho_enable} \ Feb 26 21:56:51 crc kubenswrapper[4910]: --enable-interconnect \ Feb 26 21:56:51 crc kubenswrapper[4910]: --disable-approver \ Feb 26 21:56:51 crc kubenswrapper[4910]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 26 21:56:51 crc kubenswrapper[4910]: --wait-for-kubernetes-api=200s \ Feb 26 21:56:51 crc kubenswrapper[4910]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 26 21:56:51 crc kubenswrapper[4910]: --loglevel="${LOGLEVEL}" Feb 26 21:56:51 crc kubenswrapper[4910]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 21:56:51 crc kubenswrapper[4910]: > logger="UnhandledError" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.309261 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b433706e30c0f0d6f61004bb6c407ae0b6195f8c99916a41d77eed519bdd8877"} Feb 26 21:56:51 crc 
kubenswrapper[4910]: I0226 21:56:51.309962 4910 scope.go:117] "RemoveContainer" containerID="549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a" Feb 26 21:56:51 crc kubenswrapper[4910]: E0226 21:56:51.310221 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 21:56:51 crc kubenswrapper[4910]: E0226 21:56:51.312090 4910 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 21:56:51 crc kubenswrapper[4910]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 26 21:56:51 crc kubenswrapper[4910]: set -o allexport Feb 26 21:56:51 crc kubenswrapper[4910]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 26 21:56:51 crc kubenswrapper[4910]: source /etc/kubernetes/apiserver-url.env Feb 26 21:56:51 crc kubenswrapper[4910]: else Feb 26 21:56:51 crc kubenswrapper[4910]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 26 21:56:51 crc kubenswrapper[4910]: exit 1 Feb 26 21:56:51 crc kubenswrapper[4910]: fi Feb 26 21:56:51 crc kubenswrapper[4910]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 26 21:56:51 crc kubenswrapper[4910]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 21:56:51 crc kubenswrapper[4910]: > logger="UnhandledError" Feb 26 21:56:51 crc kubenswrapper[4910]: E0226 21:56:51.312527 4910 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 21:56:51 crc kubenswrapper[4910]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 26 21:56:51 crc kubenswrapper[4910]: if [[ -f "/env/_master" ]]; then Feb 26 21:56:51 crc kubenswrapper[4910]: set -o allexport Feb 26 21:56:51 crc kubenswrapper[4910]: source "/env/_master" Feb 26 21:56:51 crc kubenswrapper[4910]: set +o allexport Feb 26 21:56:51 crc 
kubenswrapper[4910]: fi Feb 26 21:56:51 crc kubenswrapper[4910]: Feb 26 21:56:51 crc kubenswrapper[4910]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 26 21:56:51 crc kubenswrapper[4910]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 26 21:56:51 crc kubenswrapper[4910]: --disable-webhook \ Feb 26 21:56:51 crc kubenswrapper[4910]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 26 21:56:51 crc kubenswrapper[4910]: --loglevel="${LOGLEVEL}" Feb 26 21:56:51 crc kubenswrapper[4910]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Conta
inerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 21:56:51 crc kubenswrapper[4910]: > logger="UnhandledError" Feb 26 21:56:51 crc kubenswrapper[4910]: E0226 21:56:51.314012 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 26 21:56:51 crc kubenswrapper[4910]: E0226 21:56:51.314021 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.322834 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 
21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.335659 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.348412 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.358579 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.372820 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.387764 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.394797 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.394862 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.394875 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.394895 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.394908 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:51Z","lastTransitionTime":"2026-02-26T21:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.399930 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.411611 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.422814 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.436381 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.453070 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.462745 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.462915 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 
21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.462963 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:56:51 crc kubenswrapper[4910]: E0226 21:56:51.463022 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:56:52.462982541 +0000 UTC m=+97.542473122 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:56:51 crc kubenswrapper[4910]: E0226 21:56:51.463052 4910 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 21:56:51 crc kubenswrapper[4910]: E0226 21:56:51.463120 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 21:56:52.463098004 +0000 UTC m=+97.542588575 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 21:56:51 crc kubenswrapper[4910]: E0226 21:56:51.463263 4910 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 21:56:51 crc kubenswrapper[4910]: E0226 21:56:51.463325 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 21:56:52.4633105 +0000 UTC m=+97.542801071 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.469013 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.484336 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.496535 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.498365 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.498454 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.498478 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.498510 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.498539 4910 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:51Z","lastTransitionTime":"2026-02-26T21:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.564085 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.564229 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:56:51 crc kubenswrapper[4910]: E0226 21:56:51.564344 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 21:56:51 crc kubenswrapper[4910]: E0226 21:56:51.564405 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 21:56:51 crc kubenswrapper[4910]: E0226 21:56:51.564413 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 21:56:51 crc kubenswrapper[4910]: E0226 
21:56:51.564425 4910 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:56:51 crc kubenswrapper[4910]: E0226 21:56:51.564451 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 21:56:51 crc kubenswrapper[4910]: E0226 21:56:51.564470 4910 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:56:51 crc kubenswrapper[4910]: E0226 21:56:51.564536 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 21:56:52.56450153 +0000 UTC m=+97.643992101 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:56:51 crc kubenswrapper[4910]: E0226 21:56:51.564566 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 21:56:52.564554682 +0000 UTC m=+97.644045253 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.601349 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.601400 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.601413 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.601431 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.601447 4910 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:51Z","lastTransitionTime":"2026-02-26T21:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.707383 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.707446 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.707472 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.707498 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.707516 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:51Z","lastTransitionTime":"2026-02-26T21:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.810507 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.810580 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.810600 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.810625 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.810645 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:51Z","lastTransitionTime":"2026-02-26T21:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.906656 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.907634 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.908663 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.909668 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.910552 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.912084 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.913614 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.913655 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.913671 4910 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.913695 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.913712 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:51Z","lastTransitionTime":"2026-02-26T21:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.913734 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.915050 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.917401 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.918724 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.921013 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 26 21:56:51 crc 
kubenswrapper[4910]: I0226 21:56:51.922817 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.924212 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.925371 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.927267 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.928334 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.930493 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.931299 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.932482 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 26 21:56:51 crc 
kubenswrapper[4910]: I0226 21:56:51.934458 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.935440 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.937063 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.937535 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.938633 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.939085 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.939721 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.941717 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 26 21:56:51 crc 
kubenswrapper[4910]: I0226 21:56:51.942947 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.944271 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.945994 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.946924 4910 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.947101 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.951328 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.952463 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.953398 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" 
path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.955822 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.957368 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.958455 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.959770 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.962534 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.963792 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.965968 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.968026 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.969694 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.971624 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.972988 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.974900 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.977648 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.978666 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.980389 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.981656 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" 
path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.983010 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.985454 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 26 21:56:51 crc kubenswrapper[4910]: I0226 21:56:51.986795 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.016463 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.016516 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.016528 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.016547 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.016559 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:52Z","lastTransitionTime":"2026-02-26T21:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.119761 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.119831 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.119850 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.119875 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.119897 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:52Z","lastTransitionTime":"2026-02-26T21:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.222319 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.222423 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.222444 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.222469 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.222486 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:52Z","lastTransitionTime":"2026-02-26T21:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.325019 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.325079 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.325102 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.325133 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.325196 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:52Z","lastTransitionTime":"2026-02-26T21:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.428432 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.428493 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.428510 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.428534 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.428550 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:52Z","lastTransitionTime":"2026-02-26T21:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.473432 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:56:52 crc kubenswrapper[4910]: E0226 21:56:52.473630 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-26 21:56:54.473592893 +0000 UTC m=+99.553083474 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.473752 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.473815 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:56:52 crc kubenswrapper[4910]: E0226 21:56:52.473947 4910 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 21:56:52 crc kubenswrapper[4910]: E0226 21:56:52.473991 4910 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 21:56:52 crc kubenswrapper[4910]: E0226 21:56:52.474051 4910 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 21:56:54.474027175 +0000 UTC m=+99.553517756 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 21:56:52 crc kubenswrapper[4910]: E0226 21:56:52.474084 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 21:56:54.474067856 +0000 UTC m=+99.553558437 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.531418 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.531460 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.531472 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.531489 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.531501 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:52Z","lastTransitionTime":"2026-02-26T21:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.575268 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.575336 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:56:52 crc kubenswrapper[4910]: E0226 21:56:52.575493 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 21:56:52 crc kubenswrapper[4910]: E0226 21:56:52.575546 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 21:56:52 crc kubenswrapper[4910]: E0226 21:56:52.575568 4910 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:56:52 crc kubenswrapper[4910]: E0226 21:56:52.575651 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 21:56:54.575622446 +0000 UTC m=+99.655113027 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:56:52 crc kubenswrapper[4910]: E0226 21:56:52.575493 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 21:56:52 crc kubenswrapper[4910]: E0226 21:56:52.575717 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 21:56:52 crc kubenswrapper[4910]: E0226 21:56:52.575746 4910 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:56:52 crc kubenswrapper[4910]: E0226 21:56:52.575834 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 21:56:54.575808112 +0000 UTC m=+99.655298693 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.633835 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.633892 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.633908 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.633935 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.633958 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:52Z","lastTransitionTime":"2026-02-26T21:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.736467 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.736526 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.736545 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.736569 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.736589 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:52Z","lastTransitionTime":"2026-02-26T21:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.840239 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.840299 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.840317 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.840340 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.840356 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:52Z","lastTransitionTime":"2026-02-26T21:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.901389 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.901444 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.901528 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:56:52 crc kubenswrapper[4910]: E0226 21:56:52.901811 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:56:52 crc kubenswrapper[4910]: E0226 21:56:52.901932 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:56:52 crc kubenswrapper[4910]: E0226 21:56:52.902090 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.942420 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.942470 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.942482 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.942499 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:52 crc kubenswrapper[4910]: I0226 21:56:52.942511 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:52Z","lastTransitionTime":"2026-02-26T21:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.045539 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.045855 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.045968 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.046057 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.046147 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:53Z","lastTransitionTime":"2026-02-26T21:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.149351 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.149704 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.149840 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.149976 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.150109 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:53Z","lastTransitionTime":"2026-02-26T21:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.252613 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.253280 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.253428 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.253540 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.253632 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:53Z","lastTransitionTime":"2026-02-26T21:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.355867 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.356103 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.356212 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.356325 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.356420 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:53Z","lastTransitionTime":"2026-02-26T21:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.459460 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.459501 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.459510 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.459525 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.459535 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:53Z","lastTransitionTime":"2026-02-26T21:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.562854 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.562898 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.562917 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.562943 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.562966 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:53Z","lastTransitionTime":"2026-02-26T21:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.653051 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.653099 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.653116 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.653138 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.653154 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:53Z","lastTransitionTime":"2026-02-26T21:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:53 crc kubenswrapper[4910]: E0226 21:56:53.667915 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.673291 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.673353 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.673372 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.673396 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.673418 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:53Z","lastTransitionTime":"2026-02-26T21:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:53 crc kubenswrapper[4910]: E0226 21:56:53.690673 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.719396 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.719609 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.719711 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.719815 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.719916 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:53Z","lastTransitionTime":"2026-02-26T21:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:53 crc kubenswrapper[4910]: E0226 21:56:53.735405 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.740319 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.740399 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.740423 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.740448 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.740467 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:53Z","lastTransitionTime":"2026-02-26T21:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:53 crc kubenswrapper[4910]: E0226 21:56:53.753870 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:53 crc kubenswrapper[4910]: E0226 21:56:53.754342 4910 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.756500 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.756661 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.756745 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.756870 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.756995 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:53Z","lastTransitionTime":"2026-02-26T21:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.860326 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.860369 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.860381 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.860401 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.860413 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:53Z","lastTransitionTime":"2026-02-26T21:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.962677 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.962719 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.962730 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.962748 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:53 crc kubenswrapper[4910]: I0226 21:56:53.962760 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:53Z","lastTransitionTime":"2026-02-26T21:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.073623 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.073692 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.073710 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.073736 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.073756 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:54Z","lastTransitionTime":"2026-02-26T21:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.176434 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.176847 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.177018 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.177235 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.177427 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:54Z","lastTransitionTime":"2026-02-26T21:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.281381 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.281458 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.281482 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.281510 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.281533 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:54Z","lastTransitionTime":"2026-02-26T21:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.384405 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.384450 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.384465 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.384481 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.384492 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:54Z","lastTransitionTime":"2026-02-26T21:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.487746 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.487973 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.488054 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.488203 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.488299 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:54Z","lastTransitionTime":"2026-02-26T21:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.490340 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.490493 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.490565 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:56:54 crc kubenswrapper[4910]: E0226 21:56:54.490714 4910 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 21:56:54 crc kubenswrapper[4910]: E0226 21:56:54.490798 4910 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 21:56:54 crc kubenswrapper[4910]: E0226 21:56:54.490719 4910 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:56:58.490695723 +0000 UTC m=+103.570186314 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:56:54 crc kubenswrapper[4910]: E0226 21:56:54.491088 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 21:56:58.491016772 +0000 UTC m=+103.570507353 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 21:56:54 crc kubenswrapper[4910]: E0226 21:56:54.491332 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 21:56:58.491224558 +0000 UTC m=+103.570715129 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.591022 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.591136 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.591222 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.591287 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.591303 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:54 crc kubenswrapper[4910]: E0226 21:56:54.591303 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 21:56:54 crc 
kubenswrapper[4910]: I0226 21:56:54.591329 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:54 crc kubenswrapper[4910]: E0226 21:56:54.591339 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 21:56:54 crc kubenswrapper[4910]: E0226 21:56:54.591359 4910 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.591349 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:54Z","lastTransitionTime":"2026-02-26T21:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:56:54 crc kubenswrapper[4910]: E0226 21:56:54.591391 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 21:56:54 crc kubenswrapper[4910]: E0226 21:56:54.591424 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 21:56:58.59140098 +0000 UTC m=+103.670891561 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:56:54 crc kubenswrapper[4910]: E0226 21:56:54.591425 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 21:56:54 crc kubenswrapper[4910]: E0226 21:56:54.591458 4910 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:56:54 crc kubenswrapper[4910]: E0226 21:56:54.591555 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 21:56:58.591527764 +0000 UTC m=+103.671018355 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.693779 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.693821 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.693835 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.693853 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.693868 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:54Z","lastTransitionTime":"2026-02-26T21:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.796568 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.796633 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.796651 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.796676 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.796697 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:54Z","lastTransitionTime":"2026-02-26T21:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.899328 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.899389 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.899405 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.899430 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.899450 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:54Z","lastTransitionTime":"2026-02-26T21:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.900597 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:56:54 crc kubenswrapper[4910]: E0226 21:56:54.900740 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.900603 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:56:54 crc kubenswrapper[4910]: E0226 21:56:54.900864 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.900602 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:56:54 crc kubenswrapper[4910]: E0226 21:56:54.900957 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.993857 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-m5cf2"] Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.994600 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-m5cf2" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.997846 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.997859 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 26 21:56:54 crc kubenswrapper[4910]: I0226 21:56:54.998222 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.002391 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.002468 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.002497 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.002529 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.002554 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:55Z","lastTransitionTime":"2026-02-26T21:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.012967 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.030183 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e
2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.044482 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.057383 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.067918 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.079762 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.090658 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.094712 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5680be55-6cf7-4a72-a5b8-4b49efe4a020-hosts-file\") pod \"node-resolver-m5cf2\" (UID: \"5680be55-6cf7-4a72-a5b8-4b49efe4a020\") " pod="openshift-dns/node-resolver-m5cf2" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.094894 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8555\" (UniqueName: 
\"kubernetes.io/projected/5680be55-6cf7-4a72-a5b8-4b49efe4a020-kube-api-access-f8555\") pod \"node-resolver-m5cf2\" (UID: \"5680be55-6cf7-4a72-a5b8-4b49efe4a020\") " pod="openshift-dns/node-resolver-m5cf2" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.100850 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.104919 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.104969 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.104987 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.105011 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 
21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.105030 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:55Z","lastTransitionTime":"2026-02-26T21:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.196398 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5680be55-6cf7-4a72-a5b8-4b49efe4a020-hosts-file\") pod \"node-resolver-m5cf2\" (UID: \"5680be55-6cf7-4a72-a5b8-4b49efe4a020\") " pod="openshift-dns/node-resolver-m5cf2" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.196459 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8555\" (UniqueName: \"kubernetes.io/projected/5680be55-6cf7-4a72-a5b8-4b49efe4a020-kube-api-access-f8555\") pod \"node-resolver-m5cf2\" (UID: \"5680be55-6cf7-4a72-a5b8-4b49efe4a020\") " pod="openshift-dns/node-resolver-m5cf2" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.196708 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5680be55-6cf7-4a72-a5b8-4b49efe4a020-hosts-file\") pod \"node-resolver-m5cf2\" (UID: \"5680be55-6cf7-4a72-a5b8-4b49efe4a020\") " pod="openshift-dns/node-resolver-m5cf2" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.207379 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.207434 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:55 crc kubenswrapper[4910]: 
I0226 21:56:55.207450 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.207474 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.207492 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:55Z","lastTransitionTime":"2026-02-26T21:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.224220 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8555\" (UniqueName: \"kubernetes.io/projected/5680be55-6cf7-4a72-a5b8-4b49efe4a020-kube-api-access-f8555\") pod \"node-resolver-m5cf2\" (UID: \"5680be55-6cf7-4a72-a5b8-4b49efe4a020\") " pod="openshift-dns/node-resolver-m5cf2" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.310874 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.310936 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.310953 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.310978 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.310997 4910 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:55Z","lastTransitionTime":"2026-02-26T21:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.315387 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-m5cf2" Feb 26 21:56:55 crc kubenswrapper[4910]: W0226 21:56:55.333195 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5680be55_6cf7_4a72_a5b8_4b49efe4a020.slice/crio-bdc0e3974c9d881b21e811625ca0bb354a91628b60037d1a80ad3c361a2092c7 WatchSource:0}: Error finding container bdc0e3974c9d881b21e811625ca0bb354a91628b60037d1a80ad3c361a2092c7: Status 404 returned error can't find the container with id bdc0e3974c9d881b21e811625ca0bb354a91628b60037d1a80ad3c361a2092c7 Feb 26 21:56:55 crc kubenswrapper[4910]: E0226 21:56:55.337370 4910 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 21:56:55 crc kubenswrapper[4910]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Feb 26 21:56:55 crc kubenswrapper[4910]: set -uo pipefail Feb 26 21:56:55 crc kubenswrapper[4910]: Feb 26 21:56:55 crc kubenswrapper[4910]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 26 21:56:55 crc kubenswrapper[4910]: Feb 26 21:56:55 crc kubenswrapper[4910]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 26 21:56:55 crc kubenswrapper[4910]: HOSTS_FILE="/etc/hosts" Feb 26 21:56:55 crc kubenswrapper[4910]: TEMP_FILE="/etc/hosts.tmp" Feb 26 21:56:55 crc kubenswrapper[4910]: Feb 26 21:56:55 crc 
kubenswrapper[4910]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 26 21:56:55 crc kubenswrapper[4910]: Feb 26 21:56:55 crc kubenswrapper[4910]: # Make a temporary file with the old hosts file's attributes. Feb 26 21:56:55 crc kubenswrapper[4910]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 26 21:56:55 crc kubenswrapper[4910]: echo "Failed to preserve hosts file. Exiting." Feb 26 21:56:55 crc kubenswrapper[4910]: exit 1 Feb 26 21:56:55 crc kubenswrapper[4910]: fi Feb 26 21:56:55 crc kubenswrapper[4910]: Feb 26 21:56:55 crc kubenswrapper[4910]: while true; do Feb 26 21:56:55 crc kubenswrapper[4910]: declare -A svc_ips Feb 26 21:56:55 crc kubenswrapper[4910]: for svc in "${services[@]}"; do Feb 26 21:56:55 crc kubenswrapper[4910]: # Fetch service IP from cluster dns if present. We make several tries Feb 26 21:56:55 crc kubenswrapper[4910]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 26 21:56:55 crc kubenswrapper[4910]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 26 21:56:55 crc kubenswrapper[4910]: # support UDP loadbalancers and require reaching DNS through TCP. Feb 26 21:56:55 crc kubenswrapper[4910]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 26 21:56:55 crc kubenswrapper[4910]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 26 21:56:55 crc kubenswrapper[4910]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 26 21:56:55 crc kubenswrapper[4910]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 26 21:56:55 crc kubenswrapper[4910]: for i in ${!cmds[*]} Feb 26 21:56:55 crc kubenswrapper[4910]: do Feb 26 21:56:55 crc kubenswrapper[4910]: ips=($(eval "${cmds[i]}")) Feb 26 21:56:55 crc kubenswrapper[4910]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 26 21:56:55 crc kubenswrapper[4910]: svc_ips["${svc}"]="${ips[@]}" Feb 26 21:56:55 crc kubenswrapper[4910]: break Feb 26 21:56:55 crc kubenswrapper[4910]: fi Feb 26 21:56:55 crc kubenswrapper[4910]: done Feb 26 21:56:55 crc kubenswrapper[4910]: done Feb 26 21:56:55 crc kubenswrapper[4910]: Feb 26 21:56:55 crc kubenswrapper[4910]: # Update /etc/hosts only if we get valid service IPs Feb 26 21:56:55 crc kubenswrapper[4910]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 26 21:56:55 crc kubenswrapper[4910]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 26 21:56:55 crc kubenswrapper[4910]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 26 21:56:55 crc kubenswrapper[4910]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 26 21:56:55 crc kubenswrapper[4910]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 26 21:56:55 crc kubenswrapper[4910]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 26 21:56:55 crc kubenswrapper[4910]: sleep 60 & wait Feb 26 21:56:55 crc kubenswrapper[4910]: continue Feb 26 21:56:55 crc kubenswrapper[4910]: fi Feb 26 21:56:55 crc kubenswrapper[4910]: Feb 26 21:56:55 crc kubenswrapper[4910]: # Append resolver entries for services Feb 26 21:56:55 crc kubenswrapper[4910]: rc=0 Feb 26 21:56:55 crc kubenswrapper[4910]: for svc in "${!svc_ips[@]}"; do Feb 26 21:56:55 crc kubenswrapper[4910]: for ip in ${svc_ips[${svc}]}; do Feb 26 21:56:55 crc kubenswrapper[4910]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Feb 26 21:56:55 crc kubenswrapper[4910]: done Feb 26 21:56:55 crc kubenswrapper[4910]: done Feb 26 21:56:55 crc kubenswrapper[4910]: if [[ $rc -ne 0 ]]; then Feb 26 21:56:55 crc kubenswrapper[4910]: sleep 60 & wait Feb 26 21:56:55 crc kubenswrapper[4910]: continue Feb 26 21:56:55 crc kubenswrapper[4910]: fi Feb 26 21:56:55 crc kubenswrapper[4910]: Feb 26 21:56:55 crc kubenswrapper[4910]: Feb 26 21:56:55 crc kubenswrapper[4910]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 26 21:56:55 crc kubenswrapper[4910]: # Replace /etc/hosts with our modified version if needed Feb 26 21:56:55 crc kubenswrapper[4910]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 26 21:56:55 crc kubenswrapper[4910]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 26 21:56:55 crc kubenswrapper[4910]: fi Feb 26 21:56:55 crc kubenswrapper[4910]: sleep 60 & wait Feb 26 21:56:55 crc kubenswrapper[4910]: unset svc_ips Feb 26 21:56:55 crc kubenswrapper[4910]: done Feb 26 21:56:55 crc kubenswrapper[4910]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8555,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-m5cf2_openshift-dns(5680be55-6cf7-4a72-a5b8-4b49efe4a020): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 21:56:55 crc kubenswrapper[4910]: > logger="UnhandledError" Feb 26 21:56:55 crc kubenswrapper[4910]: E0226 21:56:55.341479 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-m5cf2" podUID="5680be55-6cf7-4a72-a5b8-4b49efe4a020" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.361328 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-795gt"] Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.361654 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.361956 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-ht47v"] Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.363435 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-6xpv4"] Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.363747 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ht47v" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.364540 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.364741 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.364680 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.364558 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.364676 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.369102 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.369205 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.369297 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.370031 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.370044 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.371126 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.370236 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.371694 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 
21:56:55.388263 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.406351 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.413664 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.413731 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.413749 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.413772 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.413791 4910 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:55Z","lastTransitionTime":"2026-02-26T21:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.424033 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.436272 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.446383 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.458253 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.468844 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.478929 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.490015 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.500387 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/69251a00-4e6e-48f6-ae1b-d3001d22b419-mcd-auth-proxy-config\") pod \"machine-config-daemon-6xpv4\" (UID: \"69251a00-4e6e-48f6-ae1b-d3001d22b419\") " 
pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.500427 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c-cnibin\") pod \"multus-additional-cni-plugins-ht47v\" (UID: \"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\") " pod="openshift-multus/multus-additional-cni-plugins-ht47v" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.500449 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c-os-release\") pod \"multus-additional-cni-plugins-ht47v\" (UID: \"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\") " pod="openshift-multus/multus-additional-cni-plugins-ht47v" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.500472 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d78660ec-f27f-43be-add6-8fab38329537-multus-daemon-config\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.500516 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-multus-cni-dir\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.500538 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-cnibin\") pod \"multus-795gt\" (UID: 
\"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.500558 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-host-run-k8s-cni-cncf-io\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.500577 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-hostroot\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.500659 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c-cni-binary-copy\") pod \"multus-additional-cni-plugins-ht47v\" (UID: \"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\") " pod="openshift-multus/multus-additional-cni-plugins-ht47v" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.500762 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fhj2\" (UniqueName: \"kubernetes.io/projected/a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c-kube-api-access-5fhj2\") pod \"multus-additional-cni-plugins-ht47v\" (UID: \"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\") " pod="openshift-multus/multus-additional-cni-plugins-ht47v" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.500817 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ht47v\" (UID: \"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\") " pod="openshift-multus/multus-additional-cni-plugins-ht47v" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.500933 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glfzm\" (UniqueName: \"kubernetes.io/projected/69251a00-4e6e-48f6-ae1b-d3001d22b419-kube-api-access-glfzm\") pod \"machine-config-daemon-6xpv4\" (UID: \"69251a00-4e6e-48f6-ae1b-d3001d22b419\") " pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.500995 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-system-cni-dir\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.501034 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-host-run-multus-certs\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.501082 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-host-run-netns\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.501128 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-etc-kubernetes\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.501216 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ht47v\" (UID: \"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\") " pod="openshift-multus/multus-additional-cni-plugins-ht47v" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.501266 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkcjp\" (UniqueName: \"kubernetes.io/projected/d78660ec-f27f-43be-add6-8fab38329537-kube-api-access-jkcjp\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.501298 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c-system-cni-dir\") pod \"multus-additional-cni-plugins-ht47v\" (UID: \"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\") " pod="openshift-multus/multus-additional-cni-plugins-ht47v" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.501328 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-os-release\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.501364 4910 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d78660ec-f27f-43be-add6-8fab38329537-cni-binary-copy\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.501399 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-host-var-lib-cni-bin\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.501444 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-host-var-lib-kubelet\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.501497 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/69251a00-4e6e-48f6-ae1b-d3001d22b419-rootfs\") pod \"machine-config-daemon-6xpv4\" (UID: \"69251a00-4e6e-48f6-ae1b-d3001d22b419\") " pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.501601 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-multus-conf-dir\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.501726 4910 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/69251a00-4e6e-48f6-ae1b-d3001d22b419-proxy-tls\") pod \"machine-config-daemon-6xpv4\" (UID: \"69251a00-4e6e-48f6-ae1b-d3001d22b419\") " pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.501782 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-multus-socket-dir-parent\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.501833 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-host-var-lib-cni-multus\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.503639 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.516823 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.516952 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.516979 4910 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.517006 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.517030 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:55Z","lastTransitionTime":"2026-02-26T21:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.520039 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.533725 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.548216 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.559660 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.576224 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.590229 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.601119 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.602591 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkcjp\" (UniqueName: \"kubernetes.io/projected/d78660ec-f27f-43be-add6-8fab38329537-kube-api-access-jkcjp\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.602665 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c-system-cni-dir\") pod \"multus-additional-cni-plugins-ht47v\" (UID: \"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\") " pod="openshift-multus/multus-additional-cni-plugins-ht47v" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.602712 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-os-release\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " 
pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.602954 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d78660ec-f27f-43be-add6-8fab38329537-cni-binary-copy\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.603019 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-host-var-lib-cni-bin\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.603065 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-host-var-lib-kubelet\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.603112 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/69251a00-4e6e-48f6-ae1b-d3001d22b419-rootfs\") pod \"machine-config-daemon-6xpv4\" (UID: \"69251a00-4e6e-48f6-ae1b-d3001d22b419\") " pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.603201 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-multus-conf-dir\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.603260 4910 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/69251a00-4e6e-48f6-ae1b-d3001d22b419-proxy-tls\") pod \"machine-config-daemon-6xpv4\" (UID: \"69251a00-4e6e-48f6-ae1b-d3001d22b419\") " pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.603306 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-multus-socket-dir-parent\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.603364 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-host-var-lib-cni-multus\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.603436 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/69251a00-4e6e-48f6-ae1b-d3001d22b419-mcd-auth-proxy-config\") pod \"machine-config-daemon-6xpv4\" (UID: \"69251a00-4e6e-48f6-ae1b-d3001d22b419\") " pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.603497 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c-cnibin\") pod \"multus-additional-cni-plugins-ht47v\" (UID: \"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\") " pod="openshift-multus/multus-additional-cni-plugins-ht47v" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.603549 4910 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c-os-release\") pod \"multus-additional-cni-plugins-ht47v\" (UID: \"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\") " pod="openshift-multus/multus-additional-cni-plugins-ht47v" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.603609 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d78660ec-f27f-43be-add6-8fab38329537-multus-daemon-config\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.603713 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-multus-cni-dir\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.603774 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-cnibin\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.603838 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-host-run-k8s-cni-cncf-io\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.603893 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-hostroot\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.603940 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-multus-socket-dir-parent\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.603951 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c-cni-binary-copy\") pod \"multus-additional-cni-plugins-ht47v\" (UID: \"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\") " pod="openshift-multus/multus-additional-cni-plugins-ht47v" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.604003 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fhj2\" (UniqueName: \"kubernetes.io/projected/a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c-kube-api-access-5fhj2\") pod \"multus-additional-cni-plugins-ht47v\" (UID: \"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\") " pod="openshift-multus/multus-additional-cni-plugins-ht47v" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.604069 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ht47v\" (UID: \"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\") " pod="openshift-multus/multus-additional-cni-plugins-ht47v" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.604130 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glfzm\" (UniqueName: 
\"kubernetes.io/projected/69251a00-4e6e-48f6-ae1b-d3001d22b419-kube-api-access-glfzm\") pod \"machine-config-daemon-6xpv4\" (UID: \"69251a00-4e6e-48f6-ae1b-d3001d22b419\") " pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.604153 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-multus-conf-dir\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.604230 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-system-cni-dir\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.604368 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c-cnibin\") pod \"multus-additional-cni-plugins-ht47v\" (UID: \"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\") " pod="openshift-multus/multus-additional-cni-plugins-ht47v" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.604452 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-host-run-multus-certs\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.604502 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-host-var-lib-cni-multus\") pod 
\"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.604529 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-host-run-netns\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.604626 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-etc-kubernetes\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.604672 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-host-var-lib-kubelet\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.602772 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c-system-cni-dir\") pod \"multus-additional-cni-plugins-ht47v\" (UID: \"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\") " pod="openshift-multus/multus-additional-cni-plugins-ht47v" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.604693 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ht47v\" (UID: \"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\") " 
pod="openshift-multus/multus-additional-cni-plugins-ht47v" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.604787 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-host-run-netns\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.604854 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-etc-kubernetes\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.604868 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/69251a00-4e6e-48f6-ae1b-d3001d22b419-rootfs\") pod \"machine-config-daemon-6xpv4\" (UID: \"69251a00-4e6e-48f6-ae1b-d3001d22b419\") " pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.604497 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c-os-release\") pod \"multus-additional-cni-plugins-ht47v\" (UID: \"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\") " pod="openshift-multus/multus-additional-cni-plugins-ht47v" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.604947 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-hostroot\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.604628 4910 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/69251a00-4e6e-48f6-ae1b-d3001d22b419-mcd-auth-proxy-config\") pod \"machine-config-daemon-6xpv4\" (UID: \"69251a00-4e6e-48f6-ae1b-d3001d22b419\") " pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.602881 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-os-release\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.605102 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-host-run-multus-certs\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.604318 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-system-cni-dir\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.605113 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-host-run-k8s-cni-cncf-io\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.605279 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-multus-cni-dir\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.605251 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-host-var-lib-cni-bin\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.605290 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d78660ec-f27f-43be-add6-8fab38329537-cnibin\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.605858 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ht47v\" (UID: \"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\") " pod="openshift-multus/multus-additional-cni-plugins-ht47v" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.606085 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d78660ec-f27f-43be-add6-8fab38329537-cni-binary-copy\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.606440 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d78660ec-f27f-43be-add6-8fab38329537-multus-daemon-config\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " 
pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.606714 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c-cni-binary-copy\") pod \"multus-additional-cni-plugins-ht47v\" (UID: \"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\") " pod="openshift-multus/multus-additional-cni-plugins-ht47v" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.606752 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ht47v\" (UID: \"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\") " pod="openshift-multus/multus-additional-cni-plugins-ht47v" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.610666 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/69251a00-4e6e-48f6-ae1b-d3001d22b419-proxy-tls\") pod \"machine-config-daemon-6xpv4\" (UID: \"69251a00-4e6e-48f6-ae1b-d3001d22b419\") " pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.614337 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.620302 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.620353 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.620370 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.620394 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 
21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.620412 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:55Z","lastTransitionTime":"2026-02-26T21:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.628423 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fhj2\" (UniqueName: \"kubernetes.io/projected/a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c-kube-api-access-5fhj2\") pod \"multus-additional-cni-plugins-ht47v\" (UID: \"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\") " pod="openshift-multus/multus-additional-cni-plugins-ht47v" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.628730 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkcjp\" (UniqueName: \"kubernetes.io/projected/d78660ec-f27f-43be-add6-8fab38329537-kube-api-access-jkcjp\") pod \"multus-795gt\" (UID: \"d78660ec-f27f-43be-add6-8fab38329537\") " pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.631149 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glfzm\" (UniqueName: \"kubernetes.io/projected/69251a00-4e6e-48f6-ae1b-d3001d22b419-kube-api-access-glfzm\") pod \"machine-config-daemon-6xpv4\" (UID: \"69251a00-4e6e-48f6-ae1b-d3001d22b419\") " pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.633314 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.645283 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.699991 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-795gt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.712504 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ht47v" Feb 26 21:56:55 crc kubenswrapper[4910]: W0226 21:56:55.715825 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd78660ec_f27f_43be_add6_8fab38329537.slice/crio-ea03201e93e78bbb1191427bcb534487a362b2747ec555f660b68f520e51ce37 WatchSource:0}: Error finding container ea03201e93e78bbb1191427bcb534487a362b2747ec555f660b68f520e51ce37: Status 404 returned error can't find the container with id ea03201e93e78bbb1191427bcb534487a362b2747ec555f660b68f520e51ce37 Feb 26 21:56:55 crc kubenswrapper[4910]: E0226 21:56:55.719518 4910 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 21:56:55 crc kubenswrapper[4910]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 26 21:56:55 crc kubenswrapper[4910]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 26 21:56:55 crc kubenswrapper[4910]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jkcjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-795gt_openshift-multus(d78660ec-f27f-43be-add6-8fab38329537): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 21:56:55 crc kubenswrapper[4910]: > logger="UnhandledError" Feb 26 21:56:55 crc kubenswrapper[4910]: E0226 21:56:55.722680 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-795gt" podUID="d78660ec-f27f-43be-add6-8fab38329537" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.726462 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.728380 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.728433 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.728451 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.728478 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.728495 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:55Z","lastTransitionTime":"2026-02-26T21:56:55Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:56:55 crc kubenswrapper[4910]: W0226 21:56:55.731797 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5b52f6d_a85a_4cbb_96a7_45c3b2ed492c.slice/crio-dcdcf87362128c170ac9200d4804c5fb0e9d5a1f681254148c24911675876229 WatchSource:0}: Error finding container dcdcf87362128c170ac9200d4804c5fb0e9d5a1f681254148c24911675876229: Status 404 returned error can't find the container with id dcdcf87362128c170ac9200d4804c5fb0e9d5a1f681254148c24911675876229 Feb 26 21:56:55 crc kubenswrapper[4910]: E0226 21:56:55.738132 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fhj2,Rea
dOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-ht47v_openshift-multus(a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 26 21:56:55 crc kubenswrapper[4910]: E0226 21:56:55.739442 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-ht47v" podUID="a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c" Feb 26 21:56:55 crc kubenswrapper[4910]: W0226 21:56:55.746610 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69251a00_4e6e_48f6_ae1b_d3001d22b419.slice/crio-71dedabe07c8468bdde889ad248325d109d741927677f24b00cd695829bdcba8 WatchSource:0}: Error finding container 71dedabe07c8468bdde889ad248325d109d741927677f24b00cd695829bdcba8: Status 404 returned error can't find the container with id 71dedabe07c8468bdde889ad248325d109d741927677f24b00cd695829bdcba8 Feb 26 21:56:55 crc kubenswrapper[4910]: E0226 21:56:55.750519 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glfzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 26 21:56:55 crc kubenswrapper[4910]: E0226 21:56:55.753834 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glfzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 26 21:56:55 crc kubenswrapper[4910]: E0226 21:56:55.755865 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.757091 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xrq4q"] Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.758511 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.762195 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.762259 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.762333 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.762633 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.762692 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.763308 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.763743 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.779680 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.788187 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.798060 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.807332 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.818192 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.827715 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.830148 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.830179 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.830187 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.830200 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.830208 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:55Z","lastTransitionTime":"2026-02-26T21:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.834222 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.843056 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.850893 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.859748 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.867365 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.883058 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.906652 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-log-socket\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.906710 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-run-openvswitch\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.906743 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/41cb54c7-260b-42d4-8ae9-cf2a195721be-env-overrides\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.906783 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-run-systemd\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.906811 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-node-log\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.906839 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-run-netns\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.906884 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-systemd-units\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.906913 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-cni-bin\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.906975 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-run-ovn\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.907028 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-var-lib-openvswitch\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.907067 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-run-ovn-kubernetes\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.907096 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/41cb54c7-260b-42d4-8ae9-cf2a195721be-ovnkube-script-lib\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.907219 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-slash\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.907292 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.907330 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/41cb54c7-260b-42d4-8ae9-cf2a195721be-ovn-node-metrics-cert\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.907390 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-kubelet\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.907431 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-etc-openvswitch\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.907461 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-cni-netd\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.907498 4910 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/41cb54c7-260b-42d4-8ae9-cf2a195721be-ovnkube-config\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.907526 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txf8k\" (UniqueName: \"kubernetes.io/projected/41cb54c7-260b-42d4-8ae9-cf2a195721be-kube-api-access-txf8k\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.911642 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.919454 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.932473 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.932526 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.932541 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.932560 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 
21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.932575 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:55Z","lastTransitionTime":"2026-02-26T21:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.933025 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.942151 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.952061 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.960502 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.973438 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.982095 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.990842 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:55 crc kubenswrapper[4910]: I0226 21:56:55.998465 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.008674 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/41cb54c7-260b-42d4-8ae9-cf2a195721be-ovnkube-config\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.008755 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txf8k\" (UniqueName: \"kubernetes.io/projected/41cb54c7-260b-42d4-8ae9-cf2a195721be-kube-api-access-txf8k\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.008856 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-log-socket\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.008909 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-run-openvswitch\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.008951 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/41cb54c7-260b-42d4-8ae9-cf2a195721be-env-overrides\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.008999 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-run-systemd\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009028 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-log-socket\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009072 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-run-systemd\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009100 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-node-log\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009041 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-node-log\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009036 4910 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-run-openvswitch\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009153 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-systemd-units\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009206 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-run-netns\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009209 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-systemd-units\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009228 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-cni-bin\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009293 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-run-netns\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009296 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-run-ovn\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009321 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-run-ovn\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009336 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-cni-bin\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009350 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-var-lib-openvswitch\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009404 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-run-ovn-kubernetes\") pod 
\"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009368 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-var-lib-openvswitch\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009436 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/41cb54c7-260b-42d4-8ae9-cf2a195721be-ovnkube-script-lib\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009456 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-run-ovn-kubernetes\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009517 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009553 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/41cb54c7-260b-42d4-8ae9-cf2a195721be-ovn-node-metrics-cert\") 
pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009584 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-slash\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009616 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-kubelet\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009637 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009665 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-etc-openvswitch\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009704 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-cni-netd\") pod \"ovnkube-node-xrq4q\" (UID: 
\"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009729 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-kubelet\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009748 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-etc-openvswitch\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009779 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-cni-netd\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009696 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-slash\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.009971 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/41cb54c7-260b-42d4-8ae9-cf2a195721be-env-overrides\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc 
kubenswrapper[4910]: I0226 21:56:56.010039 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/41cb54c7-260b-42d4-8ae9-cf2a195721be-ovnkube-script-lib\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.010485 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/41cb54c7-260b-42d4-8ae9-cf2a195721be-ovnkube-config\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.012678 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/41cb54c7-260b-42d4-8ae9-cf2a195721be-ovn-node-metrics-cert\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.015990 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e
2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.023247 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txf8k\" (UniqueName: \"kubernetes.io/projected/41cb54c7-260b-42d4-8ae9-cf2a195721be-kube-api-access-txf8k\") pod \"ovnkube-node-xrq4q\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.041573 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.045472 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.045532 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.045555 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.045585 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.045606 4910 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:56Z","lastTransitionTime":"2026-02-26T21:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.078025 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:56:56 crc kubenswrapper[4910]: E0226 21:56:56.098731 4910 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 21:56:56 crc kubenswrapper[4910]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 26 21:56:56 crc kubenswrapper[4910]: apiVersion: v1 Feb 26 21:56:56 crc kubenswrapper[4910]: clusters: Feb 26 21:56:56 crc kubenswrapper[4910]: - cluster: Feb 26 21:56:56 crc kubenswrapper[4910]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 26 21:56:56 crc kubenswrapper[4910]: server: https://api-int.crc.testing:6443 Feb 26 21:56:56 crc kubenswrapper[4910]: name: default-cluster Feb 26 21:56:56 crc kubenswrapper[4910]: contexts: Feb 26 21:56:56 crc kubenswrapper[4910]: - context: Feb 26 21:56:56 crc kubenswrapper[4910]: cluster: default-cluster Feb 26 21:56:56 crc kubenswrapper[4910]: namespace: default Feb 26 21:56:56 crc kubenswrapper[4910]: user: default-auth Feb 26 21:56:56 crc kubenswrapper[4910]: name: default-context Feb 26 21:56:56 crc kubenswrapper[4910]: current-context: default-context Feb 26 21:56:56 crc kubenswrapper[4910]: kind: Config Feb 26 21:56:56 crc kubenswrapper[4910]: preferences: {} Feb 26 21:56:56 crc kubenswrapper[4910]: users: Feb 26 21:56:56 crc 
kubenswrapper[4910]: - name: default-auth Feb 26 21:56:56 crc kubenswrapper[4910]: user: Feb 26 21:56:56 crc kubenswrapper[4910]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 26 21:56:56 crc kubenswrapper[4910]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 26 21:56:56 crc kubenswrapper[4910]: EOF Feb 26 21:56:56 crc kubenswrapper[4910]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-txf8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-xrq4q_openshift-ovn-kubernetes(41cb54c7-260b-42d4-8ae9-cf2a195721be): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 21:56:56 crc kubenswrapper[4910]: > logger="UnhandledError" Feb 26 21:56:56 crc kubenswrapper[4910]: E0226 21:56:56.100569 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.148145 4910 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.148198 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.148208 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.148223 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.148235 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:56Z","lastTransitionTime":"2026-02-26T21:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.250835 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.250909 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.250932 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.250964 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.250985 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:56Z","lastTransitionTime":"2026-02-26T21:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.322846 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" event={"ID":"41cb54c7-260b-42d4-8ae9-cf2a195721be","Type":"ContainerStarted","Data":"ffc2ffb6487c21bc6ccd96c92051381ae04b9deb2812c7183ae4b33cb5c81e05"} Feb 26 21:56:56 crc kubenswrapper[4910]: E0226 21:56:56.324286 4910 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 21:56:56 crc kubenswrapper[4910]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 26 21:56:56 crc kubenswrapper[4910]: apiVersion: v1 Feb 26 21:56:56 crc kubenswrapper[4910]: clusters: Feb 26 21:56:56 crc kubenswrapper[4910]: - cluster: Feb 26 21:56:56 crc kubenswrapper[4910]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 26 21:56:56 crc kubenswrapper[4910]: server: https://api-int.crc.testing:6443 Feb 26 21:56:56 crc kubenswrapper[4910]: name: default-cluster Feb 26 21:56:56 crc kubenswrapper[4910]: contexts: Feb 26 21:56:56 crc kubenswrapper[4910]: - context: Feb 26 21:56:56 crc kubenswrapper[4910]: cluster: default-cluster Feb 26 21:56:56 crc kubenswrapper[4910]: namespace: default Feb 26 21:56:56 crc kubenswrapper[4910]: user: default-auth Feb 26 21:56:56 crc kubenswrapper[4910]: name: default-context Feb 26 21:56:56 crc kubenswrapper[4910]: current-context: default-context Feb 26 21:56:56 crc kubenswrapper[4910]: kind: Config Feb 26 21:56:56 crc kubenswrapper[4910]: preferences: {} Feb 26 21:56:56 crc kubenswrapper[4910]: users: Feb 26 21:56:56 crc kubenswrapper[4910]: - name: default-auth Feb 26 21:56:56 crc kubenswrapper[4910]: user: Feb 26 21:56:56 crc kubenswrapper[4910]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 26 21:56:56 crc 
kubenswrapper[4910]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 26 21:56:56 crc kubenswrapper[4910]: EOF Feb 26 21:56:56 crc kubenswrapper[4910]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-txf8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-xrq4q_openshift-ovn-kubernetes(41cb54c7-260b-42d4-8ae9-cf2a195721be): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 21:56:56 crc kubenswrapper[4910]: > logger="UnhandledError" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.324585 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-795gt" event={"ID":"d78660ec-f27f-43be-add6-8fab38329537","Type":"ContainerStarted","Data":"ea03201e93e78bbb1191427bcb534487a362b2747ec555f660b68f520e51ce37"} Feb 26 21:56:56 crc kubenswrapper[4910]: E0226 21:56:56.325321 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" 
Feb 26 21:56:56 crc kubenswrapper[4910]: E0226 21:56:56.325821 4910 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 21:56:56 crc kubenswrapper[4910]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 26 21:56:56 crc kubenswrapper[4910]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 26 21:56:56 crc kubenswrapper[4910]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jkcjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-795gt_openshift-multus(d78660ec-f27f-43be-add6-8fab38329537): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 21:56:56 crc kubenswrapper[4910]: > logger="UnhandledError" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.325912 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerStarted","Data":"71dedabe07c8468bdde889ad248325d109d741927677f24b00cd695829bdcba8"} Feb 26 21:56:56 crc kubenswrapper[4910]: E0226 21:56:56.326951 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with 
CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-795gt" podUID="d78660ec-f27f-43be-add6-8fab38329537" Feb 26 21:56:56 crc kubenswrapper[4910]: E0226 21:56:56.327074 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glfzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.327553 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" event={"ID":"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c","Type":"ContainerStarted","Data":"dcdcf87362128c170ac9200d4804c5fb0e9d5a1f681254148c24911675876229"} Feb 26 21:56:56 crc kubenswrapper[4910]: E0226 21:56:56.328997 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml 
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glfzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.329067 4910 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-m5cf2" event={"ID":"5680be55-6cf7-4a72-a5b8-4b49efe4a020","Type":"ContainerStarted","Data":"bdc0e3974c9d881b21e811625ca0bb354a91628b60037d1a80ad3c361a2092c7"} Feb 26 21:56:56 crc kubenswrapper[4910]: E0226 21:56:56.329059 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fhj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,
} start failed in pod multus-additional-cni-plugins-ht47v_openshift-multus(a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 26 21:56:56 crc kubenswrapper[4910]: E0226 21:56:56.330193 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 21:56:56 crc kubenswrapper[4910]: E0226 21:56:56.330280 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-ht47v" podUID="a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c" Feb 26 21:56:56 crc kubenswrapper[4910]: E0226 21:56:56.330780 4910 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 21:56:56 crc kubenswrapper[4910]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Feb 26 21:56:56 crc kubenswrapper[4910]: set -uo pipefail Feb 26 21:56:56 crc kubenswrapper[4910]: Feb 26 21:56:56 crc kubenswrapper[4910]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 26 21:56:56 crc kubenswrapper[4910]: Feb 26 21:56:56 crc kubenswrapper[4910]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 26 21:56:56 crc kubenswrapper[4910]: HOSTS_FILE="/etc/hosts" Feb 26 21:56:56 crc 
kubenswrapper[4910]: TEMP_FILE="/etc/hosts.tmp" Feb 26 21:56:56 crc kubenswrapper[4910]: Feb 26 21:56:56 crc kubenswrapper[4910]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 26 21:56:56 crc kubenswrapper[4910]: Feb 26 21:56:56 crc kubenswrapper[4910]: # Make a temporary file with the old hosts file's attributes. Feb 26 21:56:56 crc kubenswrapper[4910]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 26 21:56:56 crc kubenswrapper[4910]: echo "Failed to preserve hosts file. Exiting." Feb 26 21:56:56 crc kubenswrapper[4910]: exit 1 Feb 26 21:56:56 crc kubenswrapper[4910]: fi Feb 26 21:56:56 crc kubenswrapper[4910]: Feb 26 21:56:56 crc kubenswrapper[4910]: while true; do Feb 26 21:56:56 crc kubenswrapper[4910]: declare -A svc_ips Feb 26 21:56:56 crc kubenswrapper[4910]: for svc in "${services[@]}"; do Feb 26 21:56:56 crc kubenswrapper[4910]: # Fetch service IP from cluster dns if present. We make several tries Feb 26 21:56:56 crc kubenswrapper[4910]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 26 21:56:56 crc kubenswrapper[4910]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 26 21:56:56 crc kubenswrapper[4910]: # support UDP loadbalancers and require reaching DNS through TCP. 
Feb 26 21:56:56 crc kubenswrapper[4910]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 26 21:56:56 crc kubenswrapper[4910]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 26 21:56:56 crc kubenswrapper[4910]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 26 21:56:56 crc kubenswrapper[4910]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 26 21:56:56 crc kubenswrapper[4910]: for i in ${!cmds[*]} Feb 26 21:56:56 crc kubenswrapper[4910]: do Feb 26 21:56:56 crc kubenswrapper[4910]: ips=($(eval "${cmds[i]}")) Feb 26 21:56:56 crc kubenswrapper[4910]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 26 21:56:56 crc kubenswrapper[4910]: svc_ips["${svc}"]="${ips[@]}" Feb 26 21:56:56 crc kubenswrapper[4910]: break Feb 26 21:56:56 crc kubenswrapper[4910]: fi Feb 26 21:56:56 crc kubenswrapper[4910]: done Feb 26 21:56:56 crc kubenswrapper[4910]: done Feb 26 21:56:56 crc kubenswrapper[4910]: Feb 26 21:56:56 crc kubenswrapper[4910]: # Update /etc/hosts only if we get valid service IPs Feb 26 21:56:56 crc kubenswrapper[4910]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 26 21:56:56 crc kubenswrapper[4910]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 26 21:56:56 crc kubenswrapper[4910]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 26 21:56:56 crc kubenswrapper[4910]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 26 21:56:56 crc kubenswrapper[4910]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 26 21:56:56 crc kubenswrapper[4910]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 26 21:56:56 crc kubenswrapper[4910]: sleep 60 & wait Feb 26 21:56:56 crc kubenswrapper[4910]: continue Feb 26 21:56:56 crc kubenswrapper[4910]: fi Feb 26 21:56:56 crc kubenswrapper[4910]: Feb 26 21:56:56 crc kubenswrapper[4910]: # Append resolver entries for services Feb 26 21:56:56 crc kubenswrapper[4910]: rc=0 Feb 26 21:56:56 crc kubenswrapper[4910]: for svc in "${!svc_ips[@]}"; do Feb 26 21:56:56 crc kubenswrapper[4910]: for ip in ${svc_ips[${svc}]}; do Feb 26 21:56:56 crc kubenswrapper[4910]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Feb 26 21:56:56 crc kubenswrapper[4910]: done Feb 26 21:56:56 crc kubenswrapper[4910]: done Feb 26 21:56:56 crc kubenswrapper[4910]: if [[ $rc -ne 0 ]]; then Feb 26 21:56:56 crc kubenswrapper[4910]: sleep 60 & wait Feb 26 21:56:56 crc kubenswrapper[4910]: continue Feb 26 21:56:56 crc kubenswrapper[4910]: fi Feb 26 21:56:56 crc kubenswrapper[4910]: Feb 26 21:56:56 crc kubenswrapper[4910]: Feb 26 21:56:56 crc kubenswrapper[4910]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 26 21:56:56 crc kubenswrapper[4910]: # Replace /etc/hosts with our modified version if needed Feb 26 21:56:56 crc kubenswrapper[4910]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 26 21:56:56 crc kubenswrapper[4910]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 26 21:56:56 crc kubenswrapper[4910]: fi Feb 26 21:56:56 crc kubenswrapper[4910]: sleep 60 & wait Feb 26 21:56:56 crc kubenswrapper[4910]: unset svc_ips Feb 26 21:56:56 crc kubenswrapper[4910]: done Feb 26 21:56:56 crc kubenswrapper[4910]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8555,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-m5cf2_openshift-dns(5680be55-6cf7-4a72-a5b8-4b49efe4a020): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 21:56:56 crc kubenswrapper[4910]: > logger="UnhandledError" Feb 26 21:56:56 crc kubenswrapper[4910]: E0226 21:56:56.332520 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-m5cf2" 
podUID="5680be55-6cf7-4a72-a5b8-4b49efe4a020" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.336194 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.349413 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.353505 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.353554 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.353571 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.353594 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.353665 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:56Z","lastTransitionTime":"2026-02-26T21:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.362881 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.391370 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.404362 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.420415 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.433957 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.450764 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.456138 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.456217 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.456237 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.456262 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.456280 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:56Z","lastTransitionTime":"2026-02-26T21:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.466491 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.482876 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.493285 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.511395 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.525447 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.547286 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.559273 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.559328 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.559346 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.559370 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.559388 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:56Z","lastTransitionTime":"2026-02-26T21:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.561268 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.575184 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.592381 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.605003 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.621867 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.640678 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.652645 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.662709 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.662764 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.662785 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.662820 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.662843 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:56Z","lastTransitionTime":"2026-02-26T21:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.665047 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.680712 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.694265 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.765437 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.765525 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.765544 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.765570 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.765588 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:56Z","lastTransitionTime":"2026-02-26T21:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.868603 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.868651 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.868666 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.868683 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.868698 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:56Z","lastTransitionTime":"2026-02-26T21:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.901471 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.901527 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.901527 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:56:56 crc kubenswrapper[4910]: E0226 21:56:56.901700 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:56:56 crc kubenswrapper[4910]: E0226 21:56:56.901817 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:56:56 crc kubenswrapper[4910]: E0226 21:56:56.901957 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.971876 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.971960 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.971984 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.972016 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:56 crc kubenswrapper[4910]: I0226 21:56:56.972046 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:56Z","lastTransitionTime":"2026-02-26T21:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.075200 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.075266 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.075284 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.075319 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.075338 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:57Z","lastTransitionTime":"2026-02-26T21:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.178581 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.178676 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.178695 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.178720 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.178737 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:57Z","lastTransitionTime":"2026-02-26T21:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.282292 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.282349 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.282366 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.282388 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.282408 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:57Z","lastTransitionTime":"2026-02-26T21:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.385435 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.385492 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.385509 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.385533 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.385549 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:57Z","lastTransitionTime":"2026-02-26T21:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.488349 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.488442 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.488463 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.488484 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.488499 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:57Z","lastTransitionTime":"2026-02-26T21:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.592083 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.592194 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.592223 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.592256 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.592280 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:57Z","lastTransitionTime":"2026-02-26T21:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.695497 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.695597 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.695615 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.695639 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.695657 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:57Z","lastTransitionTime":"2026-02-26T21:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.799310 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.799359 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.799376 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.799399 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.799418 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:57Z","lastTransitionTime":"2026-02-26T21:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.903134 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.903280 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.903309 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.903337 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:57 crc kubenswrapper[4910]: I0226 21:56:57.903357 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:57Z","lastTransitionTime":"2026-02-26T21:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.006906 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.007011 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.007025 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.007050 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.007065 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:58Z","lastTransitionTime":"2026-02-26T21:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.111266 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.111360 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.111378 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.111403 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.111420 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:58Z","lastTransitionTime":"2026-02-26T21:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.214551 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.214609 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.214631 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.214660 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.214681 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:58Z","lastTransitionTime":"2026-02-26T21:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.318075 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.318128 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.318148 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.318206 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.318228 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:58Z","lastTransitionTime":"2026-02-26T21:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.421761 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.421861 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.421873 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.421902 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.421919 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:58Z","lastTransitionTime":"2026-02-26T21:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.524684 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.524740 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.524757 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.524785 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.524803 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:58Z","lastTransitionTime":"2026-02-26T21:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.537920 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.538106 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.538144 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:56:58 crc kubenswrapper[4910]: E0226 21:56:58.538278 4910 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 21:56:58 crc kubenswrapper[4910]: E0226 21:56:58.538341 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 21:57:06.538322292 +0000 UTC m=+111.617812833 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 21:56:58 crc kubenswrapper[4910]: E0226 21:56:58.538631 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:57:06.538582139 +0000 UTC m=+111.618072720 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:56:58 crc kubenswrapper[4910]: E0226 21:56:58.538757 4910 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 21:56:58 crc kubenswrapper[4910]: E0226 21:56:58.538887 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 21:57:06.538845657 +0000 UTC m=+111.618336348 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.627843 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.627911 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.627930 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.627958 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.627976 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:58Z","lastTransitionTime":"2026-02-26T21:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.639594 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.639852 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:56:58 crc kubenswrapper[4910]: E0226 21:56:58.639879 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 21:56:58 crc kubenswrapper[4910]: E0226 21:56:58.640295 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 21:56:58 crc kubenswrapper[4910]: E0226 21:56:58.640469 4910 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:56:58 crc kubenswrapper[4910]: E0226 21:56:58.640671 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 21:57:06.640646364 +0000 UTC m=+111.720136945 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:56:58 crc kubenswrapper[4910]: E0226 21:56:58.640030 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 21:56:58 crc kubenswrapper[4910]: E0226 21:56:58.640979 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 21:56:58 crc kubenswrapper[4910]: E0226 21:56:58.641152 4910 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:56:58 crc kubenswrapper[4910]: E0226 21:56:58.641389 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 21:57:06.641370095 +0000 UTC m=+111.720860666 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.730995 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.731075 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.731087 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.731110 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.731122 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:58Z","lastTransitionTime":"2026-02-26T21:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.833844 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.833957 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.833976 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.834002 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.834021 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:58Z","lastTransitionTime":"2026-02-26T21:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.900903 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.900929 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.901076 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:56:58 crc kubenswrapper[4910]: E0226 21:56:58.901331 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:56:58 crc kubenswrapper[4910]: E0226 21:56:58.901646 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:56:58 crc kubenswrapper[4910]: E0226 21:56:58.901832 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.937716 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.937785 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.937806 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.937834 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:58 crc kubenswrapper[4910]: I0226 21:56:58.937852 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:58Z","lastTransitionTime":"2026-02-26T21:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.041258 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.041327 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.041345 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.041369 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.041388 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:59Z","lastTransitionTime":"2026-02-26T21:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.143816 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.144212 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.144229 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.144248 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.144260 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:59Z","lastTransitionTime":"2026-02-26T21:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.247770 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.247843 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.247865 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.247895 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.247918 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:59Z","lastTransitionTime":"2026-02-26T21:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.350243 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.350307 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.350324 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.350348 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.350365 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:59Z","lastTransitionTime":"2026-02-26T21:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.453974 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.454137 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.454206 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.454242 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.454266 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:59Z","lastTransitionTime":"2026-02-26T21:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.556862 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.556944 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.556970 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.557001 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.557023 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:59Z","lastTransitionTime":"2026-02-26T21:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.659967 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.660035 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.660056 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.660084 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.660105 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:59Z","lastTransitionTime":"2026-02-26T21:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.763453 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.763780 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.763962 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.764102 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.764273 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:59Z","lastTransitionTime":"2026-02-26T21:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.867737 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.867801 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.867885 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.868097 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.868158 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:59Z","lastTransitionTime":"2026-02-26T21:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.971954 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.972020 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.972037 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.972062 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:56:59 crc kubenswrapper[4910]: I0226 21:56:59.972081 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:56:59Z","lastTransitionTime":"2026-02-26T21:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.074660 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.074734 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.074752 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.074778 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.074797 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:00Z","lastTransitionTime":"2026-02-26T21:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.178140 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.178252 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.178274 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.178300 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.178318 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:00Z","lastTransitionTime":"2026-02-26T21:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.281582 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.281643 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.281661 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.281684 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.281702 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:00Z","lastTransitionTime":"2026-02-26T21:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.384775 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.384836 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.384852 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.384877 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.384894 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:00Z","lastTransitionTime":"2026-02-26T21:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.487493 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.487561 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.487588 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.487618 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.487636 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:00Z","lastTransitionTime":"2026-02-26T21:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.591214 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.591272 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.591288 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.591312 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.591334 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:00Z","lastTransitionTime":"2026-02-26T21:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.694286 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.694353 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.694379 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.694407 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.694424 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:00Z","lastTransitionTime":"2026-02-26T21:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.797858 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.797929 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.797953 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.797981 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.798003 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:00Z","lastTransitionTime":"2026-02-26T21:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.900766 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.900860 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:00 crc kubenswrapper[4910]: E0226 21:57:00.900924 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.901094 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:00 crc kubenswrapper[4910]: E0226 21:57:00.901070 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:00 crc kubenswrapper[4910]: E0226 21:57:00.901444 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.901795 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.901865 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.901890 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.901917 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:00 crc kubenswrapper[4910]: I0226 21:57:00.901935 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:00Z","lastTransitionTime":"2026-02-26T21:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.004894 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.004948 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.004968 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.004991 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.005008 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:01Z","lastTransitionTime":"2026-02-26T21:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.107906 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.107959 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.107977 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.108036 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.108063 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:01Z","lastTransitionTime":"2026-02-26T21:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.212572 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.212641 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.212667 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.212701 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.212728 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:01Z","lastTransitionTime":"2026-02-26T21:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.316231 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.316315 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.316337 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.316371 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.316398 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:01Z","lastTransitionTime":"2026-02-26T21:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.419972 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.420125 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.420204 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.420281 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.420322 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:01Z","lastTransitionTime":"2026-02-26T21:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.524113 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.524230 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.524253 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.524282 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.524301 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:01Z","lastTransitionTime":"2026-02-26T21:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.627609 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.627690 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.627705 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.627723 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.627740 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:01Z","lastTransitionTime":"2026-02-26T21:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.648086 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-zbq6c"] Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.648686 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-zbq6c" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.651610 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.651708 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.652132 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.653051 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.662416 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.678020 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.688903 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.712941 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.730393 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.730896 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.730929 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.730944 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.730969 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.730987 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:01Z","lastTransitionTime":"2026-02-26T21:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.746562 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.759936 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.774855 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/02ab3935-85f7-493a-b88e-205f5018e5d6-host\") pod \"node-ca-zbq6c\" (UID: \"02ab3935-85f7-493a-b88e-205f5018e5d6\") " pod="openshift-image-registry/node-ca-zbq6c" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.775050 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zbqd\" (UniqueName: \"kubernetes.io/projected/02ab3935-85f7-493a-b88e-205f5018e5d6-kube-api-access-2zbqd\") pod \"node-ca-zbq6c\" (UID: \"02ab3935-85f7-493a-b88e-205f5018e5d6\") " pod="openshift-image-registry/node-ca-zbq6c" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.775129 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/02ab3935-85f7-493a-b88e-205f5018e5d6-serviceca\") pod \"node-ca-zbq6c\" (UID: \"02ab3935-85f7-493a-b88e-205f5018e5d6\") " pod="openshift-image-registry/node-ca-zbq6c" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.780136 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.793898 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready 
status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.809644 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.820641 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.833973 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.834044 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.834070 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.834101 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.834124 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:01Z","lastTransitionTime":"2026-02-26T21:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.839765 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.851664 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.876349 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/02ab3935-85f7-493a-b88e-205f5018e5d6-serviceca\") pod \"node-ca-zbq6c\" (UID: \"02ab3935-85f7-493a-b88e-205f5018e5d6\") " pod="openshift-image-registry/node-ca-zbq6c" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.876411 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/02ab3935-85f7-493a-b88e-205f5018e5d6-host\") pod \"node-ca-zbq6c\" (UID: 
\"02ab3935-85f7-493a-b88e-205f5018e5d6\") " pod="openshift-image-registry/node-ca-zbq6c" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.876514 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zbqd\" (UniqueName: \"kubernetes.io/projected/02ab3935-85f7-493a-b88e-205f5018e5d6-kube-api-access-2zbqd\") pod \"node-ca-zbq6c\" (UID: \"02ab3935-85f7-493a-b88e-205f5018e5d6\") " pod="openshift-image-registry/node-ca-zbq6c" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.876594 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/02ab3935-85f7-493a-b88e-205f5018e5d6-host\") pod \"node-ca-zbq6c\" (UID: \"02ab3935-85f7-493a-b88e-205f5018e5d6\") " pod="openshift-image-registry/node-ca-zbq6c" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.877336 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/02ab3935-85f7-493a-b88e-205f5018e5d6-serviceca\") pod \"node-ca-zbq6c\" (UID: \"02ab3935-85f7-493a-b88e-205f5018e5d6\") " pod="openshift-image-registry/node-ca-zbq6c" Feb 26 21:57:01 crc kubenswrapper[4910]: E0226 21:57:01.904497 4910 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 21:57:01 crc kubenswrapper[4910]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 26 21:57:01 crc kubenswrapper[4910]: if [[ -f "/env/_master" ]]; then Feb 26 21:57:01 crc kubenswrapper[4910]: set -o allexport Feb 26 21:57:01 crc kubenswrapper[4910]: source "/env/_master" Feb 26 21:57:01 crc kubenswrapper[4910]: set +o allexport Feb 26 21:57:01 crc kubenswrapper[4910]: fi Feb 26 21:57:01 crc kubenswrapper[4910]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 26 21:57:01 crc kubenswrapper[4910]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 26 21:57:01 crc kubenswrapper[4910]: ho_enable="--enable-hybrid-overlay" Feb 26 21:57:01 crc kubenswrapper[4910]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 26 21:57:01 crc kubenswrapper[4910]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 26 21:57:01 crc kubenswrapper[4910]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 26 21:57:01 crc kubenswrapper[4910]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 26 21:57:01 crc kubenswrapper[4910]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 26 21:57:01 crc kubenswrapper[4910]: --webhook-host=127.0.0.1 \ Feb 26 21:57:01 crc kubenswrapper[4910]: --webhook-port=9743 \ Feb 26 21:57:01 crc kubenswrapper[4910]: ${ho_enable} \ Feb 26 21:57:01 crc kubenswrapper[4910]: --enable-interconnect \ Feb 26 21:57:01 crc kubenswrapper[4910]: --disable-approver \ Feb 26 21:57:01 crc kubenswrapper[4910]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 26 21:57:01 crc kubenswrapper[4910]: --wait-for-kubernetes-api=200s \ Feb 26 21:57:01 crc kubenswrapper[4910]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 26 21:57:01 crc kubenswrapper[4910]: --loglevel="${LOGLEVEL}" Feb 26 21:57:01 crc kubenswrapper[4910]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 21:57:01 crc kubenswrapper[4910]: > logger="UnhandledError" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.904922 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zbqd\" (UniqueName: \"kubernetes.io/projected/02ab3935-85f7-493a-b88e-205f5018e5d6-kube-api-access-2zbqd\") pod \"node-ca-zbq6c\" (UID: \"02ab3935-85f7-493a-b88e-205f5018e5d6\") " 
pod="openshift-image-registry/node-ca-zbq6c" Feb 26 21:57:01 crc kubenswrapper[4910]: E0226 21:57:01.908348 4910 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 21:57:01 crc kubenswrapper[4910]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 26 21:57:01 crc kubenswrapper[4910]: if [[ -f "/env/_master" ]]; then Feb 26 21:57:01 crc kubenswrapper[4910]: set -o allexport Feb 26 21:57:01 crc kubenswrapper[4910]: source "/env/_master" Feb 26 21:57:01 crc kubenswrapper[4910]: set +o allexport Feb 26 21:57:01 crc kubenswrapper[4910]: fi Feb 26 21:57:01 crc kubenswrapper[4910]: Feb 26 21:57:01 crc kubenswrapper[4910]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 26 21:57:01 crc kubenswrapper[4910]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 26 21:57:01 crc kubenswrapper[4910]: --disable-webhook \ Feb 26 21:57:01 crc kubenswrapper[4910]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 26 21:57:01 crc kubenswrapper[4910]: --loglevel="${LOGLEVEL}" Feb 26 21:57:01 crc kubenswrapper[4910]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 21:57:01 crc kubenswrapper[4910]: > logger="UnhandledError" Feb 26 21:57:01 crc kubenswrapper[4910]: E0226 21:57:01.910254 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.937552 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.937611 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.937634 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.937658 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.937675 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:01Z","lastTransitionTime":"2026-02-26T21:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:57:01 crc kubenswrapper[4910]: I0226 21:57:01.962187 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-zbq6c" Feb 26 21:57:01 crc kubenswrapper[4910]: W0226 21:57:01.981139 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02ab3935_85f7_493a_b88e_205f5018e5d6.slice/crio-b4f855b854d969147c23197a6f89529fdc0d74c677740b906a0946106269dbbc WatchSource:0}: Error finding container b4f855b854d969147c23197a6f89529fdc0d74c677740b906a0946106269dbbc: Status 404 returned error can't find the container with id b4f855b854d969147c23197a6f89529fdc0d74c677740b906a0946106269dbbc Feb 26 21:57:01 crc kubenswrapper[4910]: E0226 21:57:01.984348 4910 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 21:57:01 crc kubenswrapper[4910]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 26 21:57:01 crc kubenswrapper[4910]: while [ true ]; Feb 26 21:57:01 crc kubenswrapper[4910]: do Feb 26 21:57:01 crc kubenswrapper[4910]: for f in $(ls /tmp/serviceca); do Feb 26 21:57:01 crc kubenswrapper[4910]: echo $f Feb 26 21:57:01 crc kubenswrapper[4910]: ca_file_path="/tmp/serviceca/${f}" Feb 26 21:57:01 crc kubenswrapper[4910]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 26 21:57:01 crc kubenswrapper[4910]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 26 21:57:01 crc kubenswrapper[4910]: if [ -e "${reg_dir_path}" ]; then Feb 26 21:57:01 crc kubenswrapper[4910]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 26 21:57:01 crc kubenswrapper[4910]: else Feb 26 21:57:01 crc kubenswrapper[4910]: mkdir $reg_dir_path Feb 26 21:57:01 crc kubenswrapper[4910]: cp $ca_file_path $reg_dir_path/ca.crt Feb 26 21:57:01 crc kubenswrapper[4910]: fi Feb 26 21:57:01 crc kubenswrapper[4910]: done Feb 26 21:57:01 crc kubenswrapper[4910]: for d in $(ls /etc/docker/certs.d); do 
Feb 26 21:57:01 crc kubenswrapper[4910]: echo $d Feb 26 21:57:01 crc kubenswrapper[4910]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 26 21:57:01 crc kubenswrapper[4910]: reg_conf_path="/tmp/serviceca/${dp}" Feb 26 21:57:01 crc kubenswrapper[4910]: if [ ! -e "${reg_conf_path}" ]; then Feb 26 21:57:01 crc kubenswrapper[4910]: rm -rf /etc/docker/certs.d/$d Feb 26 21:57:01 crc kubenswrapper[4910]: fi Feb 26 21:57:01 crc kubenswrapper[4910]: done Feb 26 21:57:01 crc kubenswrapper[4910]: sleep 60 & wait ${!} Feb 26 21:57:01 crc kubenswrapper[4910]: done Feb 26 21:57:01 crc kubenswrapper[4910]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zbqd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
node-ca-zbq6c_openshift-image-registry(02ab3935-85f7-493a-b88e-205f5018e5d6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 21:57:01 crc kubenswrapper[4910]: > logger="UnhandledError" Feb 26 21:57:01 crc kubenswrapper[4910]: E0226 21:57:01.985577 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-zbq6c" podUID="02ab3935-85f7-493a-b88e-205f5018e5d6" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.041219 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.041729 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.041914 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.042100 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.042344 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:02Z","lastTransitionTime":"2026-02-26T21:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.145559 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.145617 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.145634 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.145659 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.145676 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:02Z","lastTransitionTime":"2026-02-26T21:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.249193 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.249263 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.249293 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.249327 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.249349 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:02Z","lastTransitionTime":"2026-02-26T21:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.348423 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-zbq6c" event={"ID":"02ab3935-85f7-493a-b88e-205f5018e5d6","Type":"ContainerStarted","Data":"b4f855b854d969147c23197a6f89529fdc0d74c677740b906a0946106269dbbc"} Feb 26 21:57:02 crc kubenswrapper[4910]: E0226 21:57:02.350625 4910 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 21:57:02 crc kubenswrapper[4910]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 26 21:57:02 crc kubenswrapper[4910]: while [ true ]; Feb 26 21:57:02 crc kubenswrapper[4910]: do Feb 26 21:57:02 crc kubenswrapper[4910]: for f in $(ls /tmp/serviceca); do Feb 26 21:57:02 crc kubenswrapper[4910]: echo $f Feb 26 21:57:02 crc kubenswrapper[4910]: ca_file_path="/tmp/serviceca/${f}" Feb 26 21:57:02 crc kubenswrapper[4910]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 26 21:57:02 crc kubenswrapper[4910]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 26 21:57:02 crc kubenswrapper[4910]: if [ -e "${reg_dir_path}" ]; then Feb 26 21:57:02 crc kubenswrapper[4910]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 26 21:57:02 crc kubenswrapper[4910]: else Feb 26 21:57:02 crc kubenswrapper[4910]: mkdir $reg_dir_path Feb 26 21:57:02 crc kubenswrapper[4910]: cp $ca_file_path $reg_dir_path/ca.crt Feb 26 21:57:02 crc kubenswrapper[4910]: fi Feb 26 21:57:02 crc kubenswrapper[4910]: done Feb 26 21:57:02 crc kubenswrapper[4910]: for d in $(ls /etc/docker/certs.d); do Feb 26 21:57:02 crc kubenswrapper[4910]: echo $d Feb 26 21:57:02 crc kubenswrapper[4910]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 26 21:57:02 crc kubenswrapper[4910]: reg_conf_path="/tmp/serviceca/${dp}" Feb 26 21:57:02 crc kubenswrapper[4910]: if [ ! 
-e "${reg_conf_path}" ]; then Feb 26 21:57:02 crc kubenswrapper[4910]: rm -rf /etc/docker/certs.d/$d Feb 26 21:57:02 crc kubenswrapper[4910]: fi Feb 26 21:57:02 crc kubenswrapper[4910]: done Feb 26 21:57:02 crc kubenswrapper[4910]: sleep 60 & wait ${!} Feb 26 21:57:02 crc kubenswrapper[4910]: done Feb 26 21:57:02 crc kubenswrapper[4910]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zbqd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-zbq6c_openshift-image-registry(02ab3935-85f7-493a-b88e-205f5018e5d6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 21:57:02 crc kubenswrapper[4910]: > logger="UnhandledError" Feb 26 21:57:02 crc kubenswrapper[4910]: E0226 21:57:02.351796 4910 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-zbq6c" podUID="02ab3935-85f7-493a-b88e-205f5018e5d6" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.352045 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.352189 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.352209 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.352235 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.352253 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:02Z","lastTransitionTime":"2026-02-26T21:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.369364 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.392182 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.407578 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.425385 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.442089 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.454930 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.454983 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.454999 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.455025 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.455041 4910 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:02Z","lastTransitionTime":"2026-02-26T21:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.458539 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.476206 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.493882 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.506080 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.524867 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.533197 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.547826 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.558216 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.558247 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:02 crc 
kubenswrapper[4910]: I0226 21:57:02.558259 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.558286 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.558298 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:02Z","lastTransitionTime":"2026-02-26T21:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.563324 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.662718 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.662785 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.662803 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 
21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.662827 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.662844 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:02Z","lastTransitionTime":"2026-02-26T21:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.765361 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.765436 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.765460 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.765490 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.765510 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:02Z","lastTransitionTime":"2026-02-26T21:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.868633 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.868672 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.868683 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.868700 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.868712 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:02Z","lastTransitionTime":"2026-02-26T21:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.901073 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.901096 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:02 crc kubenswrapper[4910]: E0226 21:57:02.901525 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.901102 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:02 crc kubenswrapper[4910]: E0226 21:57:02.901624 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:02 crc kubenswrapper[4910]: E0226 21:57:02.901709 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.971380 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.971414 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.971441 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.971458 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:02 crc kubenswrapper[4910]: I0226 21:57:02.971468 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:02Z","lastTransitionTime":"2026-02-26T21:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.074737 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.074803 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.074821 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.074845 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.074863 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:03Z","lastTransitionTime":"2026-02-26T21:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.178078 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.178130 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.178148 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.178212 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.178239 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:03Z","lastTransitionTime":"2026-02-26T21:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.281520 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.281614 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.281632 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.281657 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.281673 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:03Z","lastTransitionTime":"2026-02-26T21:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.384593 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.384630 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.384641 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.384687 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.384699 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:03Z","lastTransitionTime":"2026-02-26T21:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.487912 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.487977 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.487994 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.488021 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.488038 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:03Z","lastTransitionTime":"2026-02-26T21:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.591013 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.591082 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.591107 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.591134 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.591155 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:03Z","lastTransitionTime":"2026-02-26T21:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.693797 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.693847 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.693867 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.693890 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.693907 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:03Z","lastTransitionTime":"2026-02-26T21:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.796913 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.796978 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.796998 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.797029 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.797052 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:03Z","lastTransitionTime":"2026-02-26T21:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.900444 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.900643 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.900683 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.900716 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.900740 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:03Z","lastTransitionTime":"2026-02-26T21:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:03 crc kubenswrapper[4910]: I0226 21:57:03.901352 4910 scope.go:117] "RemoveContainer" containerID="549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a" Feb 26 21:57:03 crc kubenswrapper[4910]: E0226 21:57:03.901620 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.003987 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.004115 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.004140 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.004207 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.004234 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:04Z","lastTransitionTime":"2026-02-26T21:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.076781 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.076851 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.076871 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.076898 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.076915 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:04Z","lastTransitionTime":"2026-02-26T21:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:04 crc kubenswrapper[4910]: E0226 21:57:04.092724 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.098522 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.098594 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.098617 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.098647 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.098670 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:04Z","lastTransitionTime":"2026-02-26T21:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:04 crc kubenswrapper[4910]: E0226 21:57:04.115102 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.120845 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.120901 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.120919 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.120944 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.120964 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:04Z","lastTransitionTime":"2026-02-26T21:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:04 crc kubenswrapper[4910]: E0226 21:57:04.137030 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.142518 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.142597 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.142620 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.142650 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.142673 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:04Z","lastTransitionTime":"2026-02-26T21:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:04 crc kubenswrapper[4910]: E0226 21:57:04.159260 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.164116 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.164205 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.164229 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.164254 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.164272 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:04Z","lastTransitionTime":"2026-02-26T21:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:04 crc kubenswrapper[4910]: E0226 21:57:04.181374 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:04 crc kubenswrapper[4910]: E0226 21:57:04.181538 4910 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.183520 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.183586 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.183612 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.183643 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.183667 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:04Z","lastTransitionTime":"2026-02-26T21:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.286724 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.286804 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.286864 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.286896 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.286920 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:04Z","lastTransitionTime":"2026-02-26T21:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.390343 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.390386 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.390397 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.390415 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.390427 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:04Z","lastTransitionTime":"2026-02-26T21:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.493586 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.493641 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.493656 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.493679 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.493696 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:04Z","lastTransitionTime":"2026-02-26T21:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.596969 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.597034 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.597056 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.597082 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.597102 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:04Z","lastTransitionTime":"2026-02-26T21:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.699496 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.699552 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.699575 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.699600 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.699621 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:04Z","lastTransitionTime":"2026-02-26T21:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.803528 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.803584 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.803599 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.803620 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.803637 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:04Z","lastTransitionTime":"2026-02-26T21:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.901109 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.901314 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.901335 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:04 crc kubenswrapper[4910]: E0226 21:57:04.901659 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:04 crc kubenswrapper[4910]: E0226 21:57:04.901741 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:04 crc kubenswrapper[4910]: E0226 21:57:04.901877 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:04 crc kubenswrapper[4910]: E0226 21:57:04.903474 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,Volume
Devices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 26 21:57:04 crc kubenswrapper[4910]: E0226 21:57:04.904851 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.906835 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.906885 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.906905 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.906932 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:04 crc kubenswrapper[4910]: I0226 21:57:04.906958 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:04Z","lastTransitionTime":"2026-02-26T21:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.010478 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.010529 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.010542 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.010562 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.010575 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:05Z","lastTransitionTime":"2026-02-26T21:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.113802 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.113842 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.113855 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.113873 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.113885 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:05Z","lastTransitionTime":"2026-02-26T21:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.217461 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.217538 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.217564 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.217597 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.217621 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:05Z","lastTransitionTime":"2026-02-26T21:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.321152 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.321240 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.321262 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.321291 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.321312 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:05Z","lastTransitionTime":"2026-02-26T21:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.424521 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.424834 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.424988 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.425130 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.425346 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:05Z","lastTransitionTime":"2026-02-26T21:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.528114 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.528495 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.528633 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.528785 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.528919 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:05Z","lastTransitionTime":"2026-02-26T21:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.632037 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.632105 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.632128 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.632209 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.632241 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:05Z","lastTransitionTime":"2026-02-26T21:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.735452 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.735512 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.735541 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.735582 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.735605 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:05Z","lastTransitionTime":"2026-02-26T21:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.838743 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.838817 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.838840 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.838872 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.838894 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:05Z","lastTransitionTime":"2026-02-26T21:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:05 crc kubenswrapper[4910]: E0226 21:57:05.904341 4910 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 21:57:05 crc kubenswrapper[4910]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 26 21:57:05 crc kubenswrapper[4910]: set -o allexport Feb 26 21:57:05 crc kubenswrapper[4910]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 26 21:57:05 crc kubenswrapper[4910]: source /etc/kubernetes/apiserver-url.env Feb 26 21:57:05 crc kubenswrapper[4910]: else Feb 26 21:57:05 crc kubenswrapper[4910]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 26 21:57:05 crc kubenswrapper[4910]: exit 1 Feb 26 21:57:05 crc kubenswrapper[4910]: fi Feb 26 21:57:05 crc kubenswrapper[4910]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 26 21:57:05 crc kubenswrapper[4910]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 21:57:05 crc kubenswrapper[4910]: > logger="UnhandledError" Feb 26 21:57:05 crc kubenswrapper[4910]: E0226 21:57:05.905780 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.918030 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.932904 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.941462 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.941541 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.941563 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.941588 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.941606 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:05Z","lastTransitionTime":"2026-02-26T21:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.949953 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.978084 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:05 crc kubenswrapper[4910]: I0226 21:57:05.991531 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.005578 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.021719 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.044641 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:06 crc 
kubenswrapper[4910]: I0226 21:57:06.044650 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.044698 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.044909 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.044953 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.044983 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:06Z","lastTransitionTime":"2026-02-26T21:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.058576 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.071965 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.079523 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.091192 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.103011 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.148383 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.148447 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.148465 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.148490 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.148508 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:06Z","lastTransitionTime":"2026-02-26T21:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.251283 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.251504 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.251644 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.251820 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.251947 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:06Z","lastTransitionTime":"2026-02-26T21:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.355465 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.355514 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.355526 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.355546 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.355560 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:06Z","lastTransitionTime":"2026-02-26T21:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.458984 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.459046 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.459065 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.459094 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.459112 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:06Z","lastTransitionTime":"2026-02-26T21:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.562277 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.562358 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.562383 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.562418 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.562441 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:06Z","lastTransitionTime":"2026-02-26T21:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.629252 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.629464 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:06 crc kubenswrapper[4910]: E0226 21:57:06.629516 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:57:22.6294748 +0000 UTC m=+127.708965381 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:57:06 crc kubenswrapper[4910]: E0226 21:57:06.629613 4910 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.629619 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:06 crc kubenswrapper[4910]: E0226 21:57:06.629688 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 21:57:22.629665245 +0000 UTC m=+127.709155816 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 21:57:06 crc kubenswrapper[4910]: E0226 21:57:06.629727 4910 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 21:57:06 crc kubenswrapper[4910]: E0226 21:57:06.629833 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 21:57:22.629810959 +0000 UTC m=+127.709301500 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.665427 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.665498 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.665510 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.665528 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:06 crc 
kubenswrapper[4910]: I0226 21:57:06.665540 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:06Z","lastTransitionTime":"2026-02-26T21:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.731271 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.731356 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:06 crc kubenswrapper[4910]: E0226 21:57:06.731526 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 21:57:06 crc kubenswrapper[4910]: E0226 21:57:06.731571 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 21:57:06 crc kubenswrapper[4910]: E0226 21:57:06.731570 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 21:57:06 crc kubenswrapper[4910]: E0226 21:57:06.731594 4910 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:57:06 crc kubenswrapper[4910]: E0226 21:57:06.731614 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 21:57:06 crc kubenswrapper[4910]: E0226 21:57:06.731635 4910 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:57:06 crc kubenswrapper[4910]: E0226 21:57:06.731678 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 21:57:22.731653058 +0000 UTC m=+127.811143639 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:57:06 crc kubenswrapper[4910]: E0226 21:57:06.731742 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 21:57:22.73172101 +0000 UTC m=+127.811211581 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.767859 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.767941 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.767965 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.767994 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.768013 4910 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:06Z","lastTransitionTime":"2026-02-26T21:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.871405 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.871469 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.871488 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.871514 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.871532 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:06Z","lastTransitionTime":"2026-02-26T21:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.901287 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.901369 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.901318 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:06 crc kubenswrapper[4910]: E0226 21:57:06.901487 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:06 crc kubenswrapper[4910]: E0226 21:57:06.901827 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:06 crc kubenswrapper[4910]: E0226 21:57:06.902676 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:06 crc kubenswrapper[4910]: E0226 21:57:06.905054 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fhj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-ht47v_openshift-multus(a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c): 
CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 26 21:57:06 crc kubenswrapper[4910]: E0226 21:57:06.906321 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-ht47v" podUID="a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.974372 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.974444 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.974461 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.974488 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:06 crc kubenswrapper[4910]: I0226 21:57:06.974509 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:06Z","lastTransitionTime":"2026-02-26T21:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.077333 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.077388 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.077405 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.077430 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.077447 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:07Z","lastTransitionTime":"2026-02-26T21:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.180025 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.180090 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.180107 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.180131 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.180148 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:07Z","lastTransitionTime":"2026-02-26T21:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.283361 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.283469 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.283482 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.283506 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.283522 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:07Z","lastTransitionTime":"2026-02-26T21:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.386635 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.386695 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.386709 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.386740 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.386756 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:07Z","lastTransitionTime":"2026-02-26T21:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.464512 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx"] Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.465404 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.468582 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.468636 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.485527 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.489943 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.490012 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.490030 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.490056 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.490073 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:07Z","lastTransitionTime":"2026-02-26T21:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.507453 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.518966 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.533412 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.550030 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.561456 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.577326 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.592304 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.593866 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.593928 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.593946 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.593971 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.593989 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:07Z","lastTransitionTime":"2026-02-26T21:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.605930 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.626906 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.639940 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.642535 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b8l5\" (UniqueName: \"kubernetes.io/projected/50dce6a7-297f-49b9-8994-bc73b6fb33a2-kube-api-access-6b8l5\") pod \"ovnkube-control-plane-749d76644c-mnrdx\" (UID: \"50dce6a7-297f-49b9-8994-bc73b6fb33a2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.642578 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50dce6a7-297f-49b9-8994-bc73b6fb33a2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-mnrdx\" (UID: \"50dce6a7-297f-49b9-8994-bc73b6fb33a2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.642616 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50dce6a7-297f-49b9-8994-bc73b6fb33a2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-mnrdx\" (UID: \"50dce6a7-297f-49b9-8994-bc73b6fb33a2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.642649 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/50dce6a7-297f-49b9-8994-bc73b6fb33a2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-mnrdx\" (UID: \"50dce6a7-297f-49b9-8994-bc73b6fb33a2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.652502 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.668695 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.683008 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.697998 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.698064 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.698082 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.698107 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.698124 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:07Z","lastTransitionTime":"2026-02-26T21:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.743629 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50dce6a7-297f-49b9-8994-bc73b6fb33a2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-mnrdx\" (UID: \"50dce6a7-297f-49b9-8994-bc73b6fb33a2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.743705 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50dce6a7-297f-49b9-8994-bc73b6fb33a2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-mnrdx\" (UID: \"50dce6a7-297f-49b9-8994-bc73b6fb33a2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.743742 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/50dce6a7-297f-49b9-8994-bc73b6fb33a2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-mnrdx\" (UID: \"50dce6a7-297f-49b9-8994-bc73b6fb33a2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.743838 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b8l5\" (UniqueName: \"kubernetes.io/projected/50dce6a7-297f-49b9-8994-bc73b6fb33a2-kube-api-access-6b8l5\") pod \"ovnkube-control-plane-749d76644c-mnrdx\" (UID: \"50dce6a7-297f-49b9-8994-bc73b6fb33a2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.744710 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/50dce6a7-297f-49b9-8994-bc73b6fb33a2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-mnrdx\" (UID: \"50dce6a7-297f-49b9-8994-bc73b6fb33a2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.744843 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50dce6a7-297f-49b9-8994-bc73b6fb33a2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-mnrdx\" (UID: \"50dce6a7-297f-49b9-8994-bc73b6fb33a2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.749721 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50dce6a7-297f-49b9-8994-bc73b6fb33a2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-mnrdx\" (UID: \"50dce6a7-297f-49b9-8994-bc73b6fb33a2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.773354 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b8l5\" (UniqueName: \"kubernetes.io/projected/50dce6a7-297f-49b9-8994-bc73b6fb33a2-kube-api-access-6b8l5\") pod \"ovnkube-control-plane-749d76644c-mnrdx\" (UID: \"50dce6a7-297f-49b9-8994-bc73b6fb33a2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.786047 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.801571 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.801623 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.801644 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.801669 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.801687 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:07Z","lastTransitionTime":"2026-02-26T21:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:07 crc kubenswrapper[4910]: W0226 21:57:07.804629 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50dce6a7_297f_49b9_8994_bc73b6fb33a2.slice/crio-b928bfba17136f64ab791e121f611889b746b555ab498f354be582e058de6b81 WatchSource:0}: Error finding container b928bfba17136f64ab791e121f611889b746b555ab498f354be582e058de6b81: Status 404 returned error can't find the container with id b928bfba17136f64ab791e121f611889b746b555ab498f354be582e058de6b81 Feb 26 21:57:07 crc kubenswrapper[4910]: E0226 21:57:07.809012 4910 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 21:57:07 crc kubenswrapper[4910]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Feb 26 21:57:07 crc kubenswrapper[4910]: set -euo pipefail Feb 26 21:57:07 crc kubenswrapper[4910]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Feb 26 21:57:07 crc kubenswrapper[4910]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Feb 26 21:57:07 crc kubenswrapper[4910]: # As the secret mount is optional we must wait for the files to be present. Feb 26 21:57:07 crc kubenswrapper[4910]: # The service is created in monitor.yaml and this is created in sdn.yaml. Feb 26 21:57:07 crc kubenswrapper[4910]: TS=$(date +%s) Feb 26 21:57:07 crc kubenswrapper[4910]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Feb 26 21:57:07 crc kubenswrapper[4910]: HAS_LOGGED_INFO=0 Feb 26 21:57:07 crc kubenswrapper[4910]: Feb 26 21:57:07 crc kubenswrapper[4910]: log_missing_certs(){ Feb 26 21:57:07 crc kubenswrapper[4910]: CUR_TS=$(date +%s) Feb 26 21:57:07 crc kubenswrapper[4910]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Feb 26 21:57:07 crc kubenswrapper[4910]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. 
Feb 26 21:57:07 crc kubenswrapper[4910]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 26 21:57:07 crc kubenswrapper[4910]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Feb 26 21:57:07 crc kubenswrapper[4910]: HAS_LOGGED_INFO=1 Feb 26 21:57:07 crc kubenswrapper[4910]: fi Feb 26 21:57:07 crc kubenswrapper[4910]: } Feb 26 21:57:07 crc kubenswrapper[4910]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Feb 26 21:57:07 crc kubenswrapper[4910]: log_missing_certs Feb 26 21:57:07 crc kubenswrapper[4910]: sleep 5 Feb 26 21:57:07 crc kubenswrapper[4910]: done Feb 26 21:57:07 crc kubenswrapper[4910]: Feb 26 21:57:07 crc kubenswrapper[4910]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 26 21:57:07 crc kubenswrapper[4910]: exec /usr/bin/kube-rbac-proxy \ Feb 26 21:57:07 crc kubenswrapper[4910]: --logtostderr \ Feb 26 21:57:07 crc kubenswrapper[4910]: --secure-listen-address=:9108 \ Feb 26 21:57:07 crc kubenswrapper[4910]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 26 21:57:07 crc kubenswrapper[4910]: --upstream=http://127.0.0.1:29108/ \ Feb 26 21:57:07 crc kubenswrapper[4910]: --tls-private-key-file=${TLS_PK} \ Feb 26 21:57:07 crc kubenswrapper[4910]: --tls-cert-file=${TLS_CERT} Feb 26 21:57:07 crc kubenswrapper[4910]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6b8l5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-mnrdx_openshift-ovn-kubernetes(50dce6a7-297f-49b9-8994-bc73b6fb33a2): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 21:57:07 crc kubenswrapper[4910]: > logger="UnhandledError" Feb 26 21:57:07 crc kubenswrapper[4910]: E0226 21:57:07.812765 4910 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 21:57:07 crc kubenswrapper[4910]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 26 21:57:07 crc kubenswrapper[4910]: if [[ -f "/env/_master" ]]; then Feb 26 21:57:07 crc kubenswrapper[4910]: set -o allexport Feb 26 21:57:07 crc kubenswrapper[4910]: source "/env/_master" Feb 26 21:57:07 crc kubenswrapper[4910]: set +o allexport Feb 26 21:57:07 crc kubenswrapper[4910]: fi Feb 26 21:57:07 crc kubenswrapper[4910]: Feb 26 21:57:07 crc kubenswrapper[4910]: ovn_v4_join_subnet_opt= Feb 26 21:57:07 crc kubenswrapper[4910]: if [[ "" != "" ]]; then Feb 26 21:57:07 crc kubenswrapper[4910]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Feb 26 
21:57:07 crc kubenswrapper[4910]: fi Feb 26 21:57:07 crc kubenswrapper[4910]: ovn_v6_join_subnet_opt= Feb 26 21:57:07 crc kubenswrapper[4910]: if [[ "" != "" ]]; then Feb 26 21:57:07 crc kubenswrapper[4910]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Feb 26 21:57:07 crc kubenswrapper[4910]: fi Feb 26 21:57:07 crc kubenswrapper[4910]: Feb 26 21:57:07 crc kubenswrapper[4910]: ovn_v4_transit_switch_subnet_opt= Feb 26 21:57:07 crc kubenswrapper[4910]: if [[ "" != "" ]]; then Feb 26 21:57:07 crc kubenswrapper[4910]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Feb 26 21:57:07 crc kubenswrapper[4910]: fi Feb 26 21:57:07 crc kubenswrapper[4910]: ovn_v6_transit_switch_subnet_opt= Feb 26 21:57:07 crc kubenswrapper[4910]: if [[ "" != "" ]]; then Feb 26 21:57:07 crc kubenswrapper[4910]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Feb 26 21:57:07 crc kubenswrapper[4910]: fi Feb 26 21:57:07 crc kubenswrapper[4910]: Feb 26 21:57:07 crc kubenswrapper[4910]: dns_name_resolver_enabled_flag= Feb 26 21:57:07 crc kubenswrapper[4910]: if [[ "false" == "true" ]]; then Feb 26 21:57:07 crc kubenswrapper[4910]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Feb 26 21:57:07 crc kubenswrapper[4910]: fi Feb 26 21:57:07 crc kubenswrapper[4910]: Feb 26 21:57:07 crc kubenswrapper[4910]: persistent_ips_enabled_flag= Feb 26 21:57:07 crc kubenswrapper[4910]: if [[ "true" == "true" ]]; then Feb 26 21:57:07 crc kubenswrapper[4910]: persistent_ips_enabled_flag="--enable-persistent-ips" Feb 26 21:57:07 crc kubenswrapper[4910]: fi Feb 26 21:57:07 crc kubenswrapper[4910]: Feb 26 21:57:07 crc kubenswrapper[4910]: # This is needed so that converting clusters from GA to TP Feb 26 21:57:07 crc kubenswrapper[4910]: # will rollout control plane pods as well Feb 26 21:57:07 crc kubenswrapper[4910]: network_segmentation_enabled_flag= Feb 26 21:57:07 crc kubenswrapper[4910]: multi_network_enabled_flag= Feb 26 21:57:07 crc 
kubenswrapper[4910]: if [[ "true" == "true" ]]; then Feb 26 21:57:07 crc kubenswrapper[4910]: multi_network_enabled_flag="--enable-multi-network" Feb 26 21:57:07 crc kubenswrapper[4910]: network_segmentation_enabled_flag="--enable-network-segmentation" Feb 26 21:57:07 crc kubenswrapper[4910]: fi Feb 26 21:57:07 crc kubenswrapper[4910]: Feb 26 21:57:07 crc kubenswrapper[4910]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Feb 26 21:57:07 crc kubenswrapper[4910]: exec /usr/bin/ovnkube \ Feb 26 21:57:07 crc kubenswrapper[4910]: --enable-interconnect \ Feb 26 21:57:07 crc kubenswrapper[4910]: --init-cluster-manager "${K8S_NODE}" \ Feb 26 21:57:07 crc kubenswrapper[4910]: --config-file=/run/ovnkube-config/ovnkube.conf \ Feb 26 21:57:07 crc kubenswrapper[4910]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Feb 26 21:57:07 crc kubenswrapper[4910]: --metrics-bind-address "127.0.0.1:29108" \ Feb 26 21:57:07 crc kubenswrapper[4910]: --metrics-enable-pprof \ Feb 26 21:57:07 crc kubenswrapper[4910]: --metrics-enable-config-duration \ Feb 26 21:57:07 crc kubenswrapper[4910]: ${ovn_v4_join_subnet_opt} \ Feb 26 21:57:07 crc kubenswrapper[4910]: ${ovn_v6_join_subnet_opt} \ Feb 26 21:57:07 crc kubenswrapper[4910]: ${ovn_v4_transit_switch_subnet_opt} \ Feb 26 21:57:07 crc kubenswrapper[4910]: ${ovn_v6_transit_switch_subnet_opt} \ Feb 26 21:57:07 crc kubenswrapper[4910]: ${dns_name_resolver_enabled_flag} \ Feb 26 21:57:07 crc kubenswrapper[4910]: ${persistent_ips_enabled_flag} \ Feb 26 21:57:07 crc kubenswrapper[4910]: ${multi_network_enabled_flag} \ Feb 26 21:57:07 crc kubenswrapper[4910]: ${network_segmentation_enabled_flag} Feb 26 21:57:07 crc kubenswrapper[4910]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6b8l5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-mnrdx_openshift-ovn-kubernetes(50dce6a7-297f-49b9-8994-bc73b6fb33a2): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 21:57:07 crc kubenswrapper[4910]: > logger="UnhandledError" Feb 26 21:57:07 crc kubenswrapper[4910]: E0226 21:57:07.814012 4910 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" podUID="50dce6a7-297f-49b9-8994-bc73b6fb33a2" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.904845 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.904907 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.904921 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.904945 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:07 crc kubenswrapper[4910]: I0226 21:57:07.904963 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:07Z","lastTransitionTime":"2026-02-26T21:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.008228 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.008287 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.008304 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.008329 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.008346 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:08Z","lastTransitionTime":"2026-02-26T21:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.112317 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.112376 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.112395 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.112421 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.112439 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:08Z","lastTransitionTime":"2026-02-26T21:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.188382 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-mhdkf"] Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.189017 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:08 crc kubenswrapper[4910]: E0226 21:57:08.189114 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.206822 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.215438 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.215689 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.215839 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.215999 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.216142 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:08Z","lastTransitionTime":"2026-02-26T21:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.233875 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.252322 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.268353 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.279129 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.294497 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.307121 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.319823 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.319896 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.319918 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.319944 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.319963 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:08Z","lastTransitionTime":"2026-02-26T21:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.322848 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.334633 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.352268 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-metrics-certs\") pod \"network-metrics-daemon-mhdkf\" (UID: 
\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\") " pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.352351 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qfz5\" (UniqueName: \"kubernetes.io/projected/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-kube-api-access-7qfz5\") pod \"network-metrics-daemon-mhdkf\" (UID: \"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\") " pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.355143 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.367488 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.368209 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" event={"ID":"50dce6a7-297f-49b9-8994-bc73b6fb33a2","Type":"ContainerStarted","Data":"b928bfba17136f64ab791e121f611889b746b555ab498f354be582e058de6b81"} Feb 26 21:57:08 crc kubenswrapper[4910]: E0226 21:57:08.370264 4910 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 21:57:08 crc kubenswrapper[4910]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Feb 26 21:57:08 crc kubenswrapper[4910]: set -euo pipefail Feb 26 21:57:08 crc kubenswrapper[4910]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Feb 26 21:57:08 crc kubenswrapper[4910]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Feb 26 21:57:08 crc kubenswrapper[4910]: # As the secret mount is optional we must wait for the files to be present. 
Feb 26 21:57:08 crc kubenswrapper[4910]: # The service is created in monitor.yaml and this is created in sdn.yaml. Feb 26 21:57:08 crc kubenswrapper[4910]: TS=$(date +%s) Feb 26 21:57:08 crc kubenswrapper[4910]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Feb 26 21:57:08 crc kubenswrapper[4910]: HAS_LOGGED_INFO=0 Feb 26 21:57:08 crc kubenswrapper[4910]: Feb 26 21:57:08 crc kubenswrapper[4910]: log_missing_certs(){ Feb 26 21:57:08 crc kubenswrapper[4910]: CUR_TS=$(date +%s) Feb 26 21:57:08 crc kubenswrapper[4910]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Feb 26 21:57:08 crc kubenswrapper[4910]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Feb 26 21:57:08 crc kubenswrapper[4910]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 26 21:57:08 crc kubenswrapper[4910]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Feb 26 21:57:08 crc kubenswrapper[4910]: HAS_LOGGED_INFO=1 Feb 26 21:57:08 crc kubenswrapper[4910]: fi Feb 26 21:57:08 crc kubenswrapper[4910]: } Feb 26 21:57:08 crc kubenswrapper[4910]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Feb 26 21:57:08 crc kubenswrapper[4910]: log_missing_certs Feb 26 21:57:08 crc kubenswrapper[4910]: sleep 5 Feb 26 21:57:08 crc kubenswrapper[4910]: done Feb 26 21:57:08 crc kubenswrapper[4910]: Feb 26 21:57:08 crc kubenswrapper[4910]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 26 21:57:08 crc kubenswrapper[4910]: exec /usr/bin/kube-rbac-proxy \ Feb 26 21:57:08 crc kubenswrapper[4910]: --logtostderr \ Feb 26 21:57:08 crc kubenswrapper[4910]: --secure-listen-address=:9108 \ Feb 26 21:57:08 crc kubenswrapper[4910]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 26 21:57:08 crc kubenswrapper[4910]: --upstream=http://127.0.0.1:29108/ \ Feb 26 21:57:08 crc kubenswrapper[4910]: --tls-private-key-file=${TLS_PK} \ Feb 26 21:57:08 crc kubenswrapper[4910]: --tls-cert-file=${TLS_CERT} Feb 26 21:57:08 crc kubenswrapper[4910]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6b8l5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-mnrdx_openshift-ovn-kubernetes(50dce6a7-297f-49b9-8994-bc73b6fb33a2): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 21:57:08 crc kubenswrapper[4910]: > logger="UnhandledError" Feb 26 21:57:08 crc kubenswrapper[4910]: E0226 21:57:08.373485 4910 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 21:57:08 crc kubenswrapper[4910]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 26 21:57:08 crc kubenswrapper[4910]: if [[ -f "/env/_master" ]]; then Feb 26 21:57:08 crc kubenswrapper[4910]: set -o allexport Feb 26 21:57:08 crc kubenswrapper[4910]: source "/env/_master" Feb 26 21:57:08 crc kubenswrapper[4910]: set +o allexport Feb 26 21:57:08 crc kubenswrapper[4910]: fi Feb 26 21:57:08 crc kubenswrapper[4910]: Feb 26 21:57:08 crc kubenswrapper[4910]: ovn_v4_join_subnet_opt= Feb 26 21:57:08 crc kubenswrapper[4910]: if [[ "" != "" ]]; then Feb 26 21:57:08 crc kubenswrapper[4910]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Feb 26 
21:57:08 crc kubenswrapper[4910]: fi Feb 26 21:57:08 crc kubenswrapper[4910]: ovn_v6_join_subnet_opt= Feb 26 21:57:08 crc kubenswrapper[4910]: if [[ "" != "" ]]; then Feb 26 21:57:08 crc kubenswrapper[4910]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Feb 26 21:57:08 crc kubenswrapper[4910]: fi Feb 26 21:57:08 crc kubenswrapper[4910]: Feb 26 21:57:08 crc kubenswrapper[4910]: ovn_v4_transit_switch_subnet_opt= Feb 26 21:57:08 crc kubenswrapper[4910]: if [[ "" != "" ]]; then Feb 26 21:57:08 crc kubenswrapper[4910]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Feb 26 21:57:08 crc kubenswrapper[4910]: fi Feb 26 21:57:08 crc kubenswrapper[4910]: ovn_v6_transit_switch_subnet_opt= Feb 26 21:57:08 crc kubenswrapper[4910]: if [[ "" != "" ]]; then Feb 26 21:57:08 crc kubenswrapper[4910]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Feb 26 21:57:08 crc kubenswrapper[4910]: fi Feb 26 21:57:08 crc kubenswrapper[4910]: Feb 26 21:57:08 crc kubenswrapper[4910]: dns_name_resolver_enabled_flag= Feb 26 21:57:08 crc kubenswrapper[4910]: if [[ "false" == "true" ]]; then Feb 26 21:57:08 crc kubenswrapper[4910]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Feb 26 21:57:08 crc kubenswrapper[4910]: fi Feb 26 21:57:08 crc kubenswrapper[4910]: Feb 26 21:57:08 crc kubenswrapper[4910]: persistent_ips_enabled_flag= Feb 26 21:57:08 crc kubenswrapper[4910]: if [[ "true" == "true" ]]; then Feb 26 21:57:08 crc kubenswrapper[4910]: persistent_ips_enabled_flag="--enable-persistent-ips" Feb 26 21:57:08 crc kubenswrapper[4910]: fi Feb 26 21:57:08 crc kubenswrapper[4910]: Feb 26 21:57:08 crc kubenswrapper[4910]: # This is needed so that converting clusters from GA to TP Feb 26 21:57:08 crc kubenswrapper[4910]: # will rollout control plane pods as well Feb 26 21:57:08 crc kubenswrapper[4910]: network_segmentation_enabled_flag= Feb 26 21:57:08 crc kubenswrapper[4910]: multi_network_enabled_flag= Feb 26 21:57:08 crc 
kubenswrapper[4910]: if [[ "true" == "true" ]]; then Feb 26 21:57:08 crc kubenswrapper[4910]: multi_network_enabled_flag="--enable-multi-network" Feb 26 21:57:08 crc kubenswrapper[4910]: network_segmentation_enabled_flag="--enable-network-segmentation" Feb 26 21:57:08 crc kubenswrapper[4910]: fi Feb 26 21:57:08 crc kubenswrapper[4910]: Feb 26 21:57:08 crc kubenswrapper[4910]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Feb 26 21:57:08 crc kubenswrapper[4910]: exec /usr/bin/ovnkube \ Feb 26 21:57:08 crc kubenswrapper[4910]: --enable-interconnect \ Feb 26 21:57:08 crc kubenswrapper[4910]: --init-cluster-manager "${K8S_NODE}" \ Feb 26 21:57:08 crc kubenswrapper[4910]: --config-file=/run/ovnkube-config/ovnkube.conf \ Feb 26 21:57:08 crc kubenswrapper[4910]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Feb 26 21:57:08 crc kubenswrapper[4910]: --metrics-bind-address "127.0.0.1:29108" \ Feb 26 21:57:08 crc kubenswrapper[4910]: --metrics-enable-pprof \ Feb 26 21:57:08 crc kubenswrapper[4910]: --metrics-enable-config-duration \ Feb 26 21:57:08 crc kubenswrapper[4910]: ${ovn_v4_join_subnet_opt} \ Feb 26 21:57:08 crc kubenswrapper[4910]: ${ovn_v6_join_subnet_opt} \ Feb 26 21:57:08 crc kubenswrapper[4910]: ${ovn_v4_transit_switch_subnet_opt} \ Feb 26 21:57:08 crc kubenswrapper[4910]: ${ovn_v6_transit_switch_subnet_opt} \ Feb 26 21:57:08 crc kubenswrapper[4910]: ${dns_name_resolver_enabled_flag} \ Feb 26 21:57:08 crc kubenswrapper[4910]: ${persistent_ips_enabled_flag} \ Feb 26 21:57:08 crc kubenswrapper[4910]: ${multi_network_enabled_flag} \ Feb 26 21:57:08 crc kubenswrapper[4910]: ${network_segmentation_enabled_flag} Feb 26 21:57:08 crc kubenswrapper[4910]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6b8l5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-mnrdx_openshift-ovn-kubernetes(50dce6a7-297f-49b9-8994-bc73b6fb33a2): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 21:57:08 crc kubenswrapper[4910]: > logger="UnhandledError" Feb 26 21:57:08 crc kubenswrapper[4910]: E0226 21:57:08.374709 4910 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" podUID="50dce6a7-297f-49b9-8994-bc73b6fb33a2" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.380333 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.390486 4910 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.403641 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.415406 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.422439 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.422502 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.422527 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.422556 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.422580 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:08Z","lastTransitionTime":"2026-02-26T21:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.432141 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.449613 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.452886 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-metrics-certs\") pod \"network-metrics-daemon-mhdkf\" (UID: \"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\") " pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.452930 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qfz5\" (UniqueName: \"kubernetes.io/projected/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-kube-api-access-7qfz5\") pod \"network-metrics-daemon-mhdkf\" (UID: \"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\") " pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:08 crc kubenswrapper[4910]: E0226 
21:57:08.453219 4910 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 21:57:08 crc kubenswrapper[4910]: E0226 21:57:08.453351 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-metrics-certs podName:9bd0ab20-beab-4d8b-90d0-ef5bd1c10526 nodeName:}" failed. No retries permitted until 2026-02-26 21:57:08.953318399 +0000 UTC m=+114.032808980 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-metrics-certs") pod "network-metrics-daemon-mhdkf" (UID: "9bd0ab20-beab-4d8b-90d0-ef5bd1c10526") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.464670 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.477697 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qfz5\" (UniqueName: \"kubernetes.io/projected/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-kube-api-access-7qfz5\") pod \"network-metrics-daemon-mhdkf\" (UID: \"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\") " 
pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.482594 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.494083 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.504136 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.512902 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.525803 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.525887 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.525913 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.525944 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.525978 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:08Z","lastTransitionTime":"2026-02-26T21:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.527658 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.537948 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.547289 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.565251 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.582074 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.602176 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.620152 4910 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.631622 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.631667 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.631679 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.631698 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.631711 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:08Z","lastTransitionTime":"2026-02-26T21:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.645083 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.734522 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.734591 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.734614 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.734645 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.734668 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:08Z","lastTransitionTime":"2026-02-26T21:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.836807 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.836877 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.836890 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.836907 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.836916 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:08Z","lastTransitionTime":"2026-02-26T21:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.901287 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.901373 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.901393 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:08 crc kubenswrapper[4910]: E0226 21:57:08.901768 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:08 crc kubenswrapper[4910]: E0226 21:57:08.901911 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:08 crc kubenswrapper[4910]: E0226 21:57:08.901998 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:08 crc kubenswrapper[4910]: E0226 21:57:08.903375 4910 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 21:57:08 crc kubenswrapper[4910]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 26 21:57:08 crc kubenswrapper[4910]: apiVersion: v1 Feb 26 21:57:08 crc kubenswrapper[4910]: clusters: Feb 26 21:57:08 crc kubenswrapper[4910]: - cluster: Feb 26 21:57:08 crc kubenswrapper[4910]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 26 21:57:08 crc kubenswrapper[4910]: server: https://api-int.crc.testing:6443 Feb 26 21:57:08 crc kubenswrapper[4910]: name: default-cluster Feb 26 21:57:08 crc kubenswrapper[4910]: contexts: Feb 26 21:57:08 crc kubenswrapper[4910]: - context: Feb 26 21:57:08 crc kubenswrapper[4910]: cluster: default-cluster Feb 26 21:57:08 crc kubenswrapper[4910]: namespace: default Feb 26 21:57:08 crc kubenswrapper[4910]: user: default-auth Feb 26 21:57:08 crc kubenswrapper[4910]: name: default-context Feb 26 21:57:08 crc kubenswrapper[4910]: current-context: default-context Feb 26 21:57:08 crc kubenswrapper[4910]: kind: Config Feb 26 21:57:08 crc kubenswrapper[4910]: preferences: {} Feb 26 21:57:08 crc kubenswrapper[4910]: users: Feb 26 21:57:08 crc kubenswrapper[4910]: - name: default-auth Feb 26 21:57:08 crc kubenswrapper[4910]: user: Feb 26 21:57:08 crc kubenswrapper[4910]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 26 21:57:08 crc kubenswrapper[4910]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 26 21:57:08 crc kubenswrapper[4910]: EOF Feb 26 21:57:08 crc kubenswrapper[4910]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-txf8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-xrq4q_openshift-ovn-kubernetes(41cb54c7-260b-42d4-8ae9-cf2a195721be): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 21:57:08 crc kubenswrapper[4910]: > logger="UnhandledError" Feb 26 21:57:08 crc kubenswrapper[4910]: E0226 21:57:08.904505 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.938819 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.938869 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.938878 4910 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.938892 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.938901 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:08Z","lastTransitionTime":"2026-02-26T21:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:57:08 crc kubenswrapper[4910]: I0226 21:57:08.959511 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-metrics-certs\") pod \"network-metrics-daemon-mhdkf\" (UID: \"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\") " pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:08 crc kubenswrapper[4910]: E0226 21:57:08.959656 4910 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 21:57:08 crc kubenswrapper[4910]: E0226 21:57:08.959714 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-metrics-certs podName:9bd0ab20-beab-4d8b-90d0-ef5bd1c10526 nodeName:}" failed. No retries permitted until 2026-02-26 21:57:09.959697145 +0000 UTC m=+115.039187696 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-metrics-certs") pod "network-metrics-daemon-mhdkf" (UID: "9bd0ab20-beab-4d8b-90d0-ef5bd1c10526") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.041358 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.041414 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.041429 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.041451 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.041467 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:09Z","lastTransitionTime":"2026-02-26T21:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.143570 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.143611 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.143623 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.143638 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.143650 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:09Z","lastTransitionTime":"2026-02-26T21:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.246311 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.246388 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.246444 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.246479 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.246503 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:09Z","lastTransitionTime":"2026-02-26T21:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.349151 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.349254 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.349272 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.349296 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.349313 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:09Z","lastTransitionTime":"2026-02-26T21:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.452211 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.452303 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.452393 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.452428 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.452453 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:09Z","lastTransitionTime":"2026-02-26T21:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.555281 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.555340 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.555360 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.555385 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.555407 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:09Z","lastTransitionTime":"2026-02-26T21:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.658461 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.658502 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.658512 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.658528 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.658538 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:09Z","lastTransitionTime":"2026-02-26T21:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.761576 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.761616 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.761630 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.761673 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.761688 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:09Z","lastTransitionTime":"2026-02-26T21:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.864518 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.864547 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.864557 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.864571 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.864580 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:09Z","lastTransitionTime":"2026-02-26T21:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.901491 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:09 crc kubenswrapper[4910]: E0226 21:57:09.901845 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:57:09 crc kubenswrapper[4910]: E0226 21:57:09.903546 4910 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 21:57:09 crc kubenswrapper[4910]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 26 21:57:09 crc kubenswrapper[4910]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 26 21:57:09 crc kubenswrapper[4910]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jkcjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-795gt_openshift-multus(d78660ec-f27f-43be-add6-8fab38329537): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 21:57:09 crc kubenswrapper[4910]: > logger="UnhandledError" Feb 26 21:57:09 crc kubenswrapper[4910]: E0226 21:57:09.904689 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-795gt" podUID="d78660ec-f27f-43be-add6-8fab38329537" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.967435 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.967496 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.967506 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.967520 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.967529 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:09Z","lastTransitionTime":"2026-02-26T21:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:57:09 crc kubenswrapper[4910]: I0226 21:57:09.969066 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-metrics-certs\") pod \"network-metrics-daemon-mhdkf\" (UID: \"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\") " pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:09 crc kubenswrapper[4910]: E0226 21:57:09.969261 4910 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 21:57:09 crc kubenswrapper[4910]: E0226 21:57:09.969313 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-metrics-certs podName:9bd0ab20-beab-4d8b-90d0-ef5bd1c10526 nodeName:}" failed. 
No retries permitted until 2026-02-26 21:57:11.969296678 +0000 UTC m=+117.048787219 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-metrics-certs") pod "network-metrics-daemon-mhdkf" (UID: "9bd0ab20-beab-4d8b-90d0-ef5bd1c10526") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.070075 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.070116 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.070124 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.070138 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.070148 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:10Z","lastTransitionTime":"2026-02-26T21:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.172820 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.172902 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.172915 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.172935 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.172948 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:10Z","lastTransitionTime":"2026-02-26T21:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.275585 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.275663 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.275682 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.275714 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.275740 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:10Z","lastTransitionTime":"2026-02-26T21:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.377746 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.377929 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.377960 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.377991 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.378014 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:10Z","lastTransitionTime":"2026-02-26T21:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.482741 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.482799 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.482809 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.482826 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.482837 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:10Z","lastTransitionTime":"2026-02-26T21:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.585728 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.585767 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.585778 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.585790 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.585798 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:10Z","lastTransitionTime":"2026-02-26T21:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.688276 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.688350 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.688383 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.688412 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.688432 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:10Z","lastTransitionTime":"2026-02-26T21:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.791039 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.791114 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.791132 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.791187 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.791208 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:10Z","lastTransitionTime":"2026-02-26T21:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.894764 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.894858 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.894875 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.894901 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.894919 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:10Z","lastTransitionTime":"2026-02-26T21:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.901145 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.901146 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.901274 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:10 crc kubenswrapper[4910]: E0226 21:57:10.901336 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:10 crc kubenswrapper[4910]: E0226 21:57:10.901494 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:10 crc kubenswrapper[4910]: E0226 21:57:10.901920 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:10 crc kubenswrapper[4910]: E0226 21:57:10.904021 4910 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 21:57:10 crc kubenswrapper[4910]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Feb 26 21:57:10 crc kubenswrapper[4910]: set -uo pipefail Feb 26 21:57:10 crc kubenswrapper[4910]: Feb 26 21:57:10 crc kubenswrapper[4910]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 26 21:57:10 crc kubenswrapper[4910]: Feb 26 21:57:10 crc kubenswrapper[4910]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 26 21:57:10 crc kubenswrapper[4910]: HOSTS_FILE="/etc/hosts" Feb 26 21:57:10 crc kubenswrapper[4910]: TEMP_FILE="/etc/hosts.tmp" Feb 26 21:57:10 crc kubenswrapper[4910]: Feb 26 21:57:10 crc kubenswrapper[4910]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 26 21:57:10 crc kubenswrapper[4910]: Feb 26 21:57:10 crc kubenswrapper[4910]: # Make a temporary file with the old hosts file's attributes. Feb 26 21:57:10 crc kubenswrapper[4910]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 26 21:57:10 crc kubenswrapper[4910]: echo "Failed to preserve hosts file. Exiting." Feb 26 21:57:10 crc kubenswrapper[4910]: exit 1 Feb 26 21:57:10 crc kubenswrapper[4910]: fi Feb 26 21:57:10 crc kubenswrapper[4910]: Feb 26 21:57:10 crc kubenswrapper[4910]: while true; do Feb 26 21:57:10 crc kubenswrapper[4910]: declare -A svc_ips Feb 26 21:57:10 crc kubenswrapper[4910]: for svc in "${services[@]}"; do Feb 26 21:57:10 crc kubenswrapper[4910]: # Fetch service IP from cluster dns if present. We make several tries Feb 26 21:57:10 crc kubenswrapper[4910]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. 
The two last ones Feb 26 21:57:10 crc kubenswrapper[4910]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 26 21:57:10 crc kubenswrapper[4910]: # support UDP loadbalancers and require reaching DNS through TCP. Feb 26 21:57:10 crc kubenswrapper[4910]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 26 21:57:10 crc kubenswrapper[4910]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 26 21:57:10 crc kubenswrapper[4910]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 26 21:57:10 crc kubenswrapper[4910]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 26 21:57:10 crc kubenswrapper[4910]: for i in ${!cmds[*]} Feb 26 21:57:10 crc kubenswrapper[4910]: do Feb 26 21:57:10 crc kubenswrapper[4910]: ips=($(eval "${cmds[i]}")) Feb 26 21:57:10 crc kubenswrapper[4910]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 26 21:57:10 crc kubenswrapper[4910]: svc_ips["${svc}"]="${ips[@]}" Feb 26 21:57:10 crc kubenswrapper[4910]: break Feb 26 21:57:10 crc kubenswrapper[4910]: fi Feb 26 21:57:10 crc kubenswrapper[4910]: done Feb 26 21:57:10 crc kubenswrapper[4910]: done Feb 26 21:57:10 crc kubenswrapper[4910]: Feb 26 21:57:10 crc kubenswrapper[4910]: # Update /etc/hosts only if we get valid service IPs Feb 26 21:57:10 crc kubenswrapper[4910]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 26 21:57:10 crc kubenswrapper[4910]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 26 21:57:10 crc kubenswrapper[4910]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 26 21:57:10 crc kubenswrapper[4910]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 26 21:57:10 crc kubenswrapper[4910]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 26 21:57:10 crc kubenswrapper[4910]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 26 21:57:10 crc kubenswrapper[4910]: sleep 60 & wait Feb 26 21:57:10 crc kubenswrapper[4910]: continue Feb 26 21:57:10 crc kubenswrapper[4910]: fi Feb 26 21:57:10 crc kubenswrapper[4910]: Feb 26 21:57:10 crc kubenswrapper[4910]: # Append resolver entries for services Feb 26 21:57:10 crc kubenswrapper[4910]: rc=0 Feb 26 21:57:10 crc kubenswrapper[4910]: for svc in "${!svc_ips[@]}"; do Feb 26 21:57:10 crc kubenswrapper[4910]: for ip in ${svc_ips[${svc}]}; do Feb 26 21:57:10 crc kubenswrapper[4910]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Feb 26 21:57:10 crc kubenswrapper[4910]: done Feb 26 21:57:10 crc kubenswrapper[4910]: done Feb 26 21:57:10 crc kubenswrapper[4910]: if [[ $rc -ne 0 ]]; then Feb 26 21:57:10 crc kubenswrapper[4910]: sleep 60 & wait Feb 26 21:57:10 crc kubenswrapper[4910]: continue Feb 26 21:57:10 crc kubenswrapper[4910]: fi Feb 26 21:57:10 crc kubenswrapper[4910]: Feb 26 21:57:10 crc kubenswrapper[4910]: Feb 26 21:57:10 crc kubenswrapper[4910]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 26 21:57:10 crc kubenswrapper[4910]: # Replace /etc/hosts with our modified version if needed Feb 26 21:57:10 crc kubenswrapper[4910]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 26 21:57:10 crc kubenswrapper[4910]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 26 21:57:10 crc kubenswrapper[4910]: fi Feb 26 21:57:10 crc kubenswrapper[4910]: sleep 60 & wait Feb 26 21:57:10 crc kubenswrapper[4910]: unset svc_ips Feb 26 21:57:10 crc kubenswrapper[4910]: done Feb 26 21:57:10 crc kubenswrapper[4910]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8555,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-m5cf2_openshift-dns(5680be55-6cf7-4a72-a5b8-4b49efe4a020): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 21:57:10 crc kubenswrapper[4910]: > logger="UnhandledError" Feb 26 21:57:10 crc kubenswrapper[4910]: E0226 21:57:10.905302 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-m5cf2" 
podUID="5680be55-6cf7-4a72-a5b8-4b49efe4a020" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.997933 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.997988 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.998006 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.998029 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:10 crc kubenswrapper[4910]: I0226 21:57:10.998049 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:10Z","lastTransitionTime":"2026-02-26T21:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.100872 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.100912 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.100922 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.100939 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.100949 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:11Z","lastTransitionTime":"2026-02-26T21:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.204243 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.204297 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.204310 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.204330 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.204344 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:11Z","lastTransitionTime":"2026-02-26T21:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.306791 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.306852 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.306869 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.306894 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.306910 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:11Z","lastTransitionTime":"2026-02-26T21:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.409934 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.410005 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.410028 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.410058 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.410081 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:11Z","lastTransitionTime":"2026-02-26T21:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.513020 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.513068 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.513080 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.513100 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.513113 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:11Z","lastTransitionTime":"2026-02-26T21:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.615749 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.615826 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.615843 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.615868 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.615883 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:11Z","lastTransitionTime":"2026-02-26T21:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.718679 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.718733 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.718749 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.718769 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.718783 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:11Z","lastTransitionTime":"2026-02-26T21:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.820582 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.820643 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.820660 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.820686 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.820728 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:11Z","lastTransitionTime":"2026-02-26T21:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.900938 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:11 crc kubenswrapper[4910]: E0226 21:57:11.901426 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:57:11 crc kubenswrapper[4910]: E0226 21:57:11.903358 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glfzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 26 21:57:11 crc kubenswrapper[4910]: E0226 21:57:11.905706 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt 
--tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glfzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 26 21:57:11 crc kubenswrapper[4910]: E0226 21:57:11.906884 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" 
podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.923152 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.923196 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.923205 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.923220 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.923230 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:11Z","lastTransitionTime":"2026-02-26T21:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:11 crc kubenswrapper[4910]: I0226 21:57:11.993720 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-metrics-certs\") pod \"network-metrics-daemon-mhdkf\" (UID: \"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\") " pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:11 crc kubenswrapper[4910]: E0226 21:57:11.994090 4910 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 21:57:11 crc kubenswrapper[4910]: E0226 21:57:11.994257 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-metrics-certs podName:9bd0ab20-beab-4d8b-90d0-ef5bd1c10526 nodeName:}" failed. No retries permitted until 2026-02-26 21:57:15.994231314 +0000 UTC m=+121.073721895 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-metrics-certs") pod "network-metrics-daemon-mhdkf" (UID: "9bd0ab20-beab-4d8b-90d0-ef5bd1c10526") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.026779 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.026842 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.026861 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.026889 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.026907 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:12Z","lastTransitionTime":"2026-02-26T21:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.130455 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.130511 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.130521 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.130542 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.130556 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:12Z","lastTransitionTime":"2026-02-26T21:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.233689 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.233728 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.233741 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.233758 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.233769 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:12Z","lastTransitionTime":"2026-02-26T21:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.336203 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.336262 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.336281 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.336304 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.336321 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:12Z","lastTransitionTime":"2026-02-26T21:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.439604 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.439681 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.439701 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.439725 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.439744 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:12Z","lastTransitionTime":"2026-02-26T21:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.542731 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.542793 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.542818 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.542861 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.542891 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:12Z","lastTransitionTime":"2026-02-26T21:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.646506 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.646577 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.646595 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.646622 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.646646 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:12Z","lastTransitionTime":"2026-02-26T21:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.750336 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.750391 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.750400 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.750415 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.750423 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:12Z","lastTransitionTime":"2026-02-26T21:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.853753 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.853795 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.853824 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.853845 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.853855 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:12Z","lastTransitionTime":"2026-02-26T21:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.900726 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.900822 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:12 crc kubenswrapper[4910]: E0226 21:57:12.900967 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.901002 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:12 crc kubenswrapper[4910]: E0226 21:57:12.901129 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:12 crc kubenswrapper[4910]: E0226 21:57:12.901284 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.955925 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.955979 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.955990 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.956008 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:12 crc kubenswrapper[4910]: I0226 21:57:12.956020 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:12Z","lastTransitionTime":"2026-02-26T21:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.058760 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.058817 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.058837 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.058866 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.058883 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:13Z","lastTransitionTime":"2026-02-26T21:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.161833 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.161887 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.161901 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.161923 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.161965 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:13Z","lastTransitionTime":"2026-02-26T21:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.265211 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.265269 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.265285 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.265307 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.265324 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:13Z","lastTransitionTime":"2026-02-26T21:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.368824 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.368893 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.368930 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.368960 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.368981 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:13Z","lastTransitionTime":"2026-02-26T21:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.471762 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.471814 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.471826 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.471846 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.471860 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:13Z","lastTransitionTime":"2026-02-26T21:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.574504 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.574612 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.574639 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.574670 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.574692 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:13Z","lastTransitionTime":"2026-02-26T21:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.677807 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.677865 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.677881 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.677905 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.677922 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:13Z","lastTransitionTime":"2026-02-26T21:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.781458 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.781499 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.781515 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.781538 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.781554 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:13Z","lastTransitionTime":"2026-02-26T21:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.798337 4910 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.884558 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.884612 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.884629 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.884651 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.884668 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:13Z","lastTransitionTime":"2026-02-26T21:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.901223 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:13 crc kubenswrapper[4910]: E0226 21:57:13.901396 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.987370 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.987419 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.987435 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.987459 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:13 crc kubenswrapper[4910]: I0226 21:57:13.987475 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:13Z","lastTransitionTime":"2026-02-26T21:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.090097 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.090202 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.090226 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.090253 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.090273 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:14Z","lastTransitionTime":"2026-02-26T21:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.192572 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.192656 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.192677 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.192702 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.192721 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:14Z","lastTransitionTime":"2026-02-26T21:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.295755 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.295838 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.295867 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.295902 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.295924 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:14Z","lastTransitionTime":"2026-02-26T21:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.326245 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.326310 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.326333 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.326358 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.326376 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:14Z","lastTransitionTime":"2026-02-26T21:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:14 crc kubenswrapper[4910]: E0226 21:57:14.345215 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.351255 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.351293 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.351306 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.351328 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.351342 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:14Z","lastTransitionTime":"2026-02-26T21:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:14 crc kubenswrapper[4910]: E0226 21:57:14.368211 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.374014 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.374205 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.374251 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.374288 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.374313 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:14Z","lastTransitionTime":"2026-02-26T21:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:14 crc kubenswrapper[4910]: E0226 21:57:14.392091 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.397309 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.397356 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.397374 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.397401 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.397421 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:14Z","lastTransitionTime":"2026-02-26T21:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:14 crc kubenswrapper[4910]: E0226 21:57:14.409478 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.415058 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.415132 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.415199 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.415230 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.415251 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:14Z","lastTransitionTime":"2026-02-26T21:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:14 crc kubenswrapper[4910]: E0226 21:57:14.431539 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:14 crc kubenswrapper[4910]: E0226 21:57:14.431727 4910 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.433672 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.433795 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.433820 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.433878 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.433895 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:14Z","lastTransitionTime":"2026-02-26T21:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.536110 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.536220 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.536246 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.536276 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.536301 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:14Z","lastTransitionTime":"2026-02-26T21:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.638812 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.638876 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.638895 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.638923 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.638947 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:14Z","lastTransitionTime":"2026-02-26T21:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.742349 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.742423 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.742441 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.742467 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.742484 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:14Z","lastTransitionTime":"2026-02-26T21:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.845643 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.845711 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.845733 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.845762 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.845787 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:14Z","lastTransitionTime":"2026-02-26T21:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.901316 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.901410 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:14 crc kubenswrapper[4910]: E0226 21:57:14.901508 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:14 crc kubenswrapper[4910]: E0226 21:57:14.901626 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.901751 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:14 crc kubenswrapper[4910]: E0226 21:57:14.901912 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.947995 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.948074 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.948093 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.948121 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:14 crc kubenswrapper[4910]: I0226 21:57:14.948143 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:14Z","lastTransitionTime":"2026-02-26T21:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.051675 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.051749 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.051771 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.051801 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.051822 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:15Z","lastTransitionTime":"2026-02-26T21:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.155388 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.155437 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.155456 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.155481 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.155499 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:15Z","lastTransitionTime":"2026-02-26T21:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.257566 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.257626 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.257648 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.257675 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.257696 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:15Z","lastTransitionTime":"2026-02-26T21:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.360298 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.360424 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.360453 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.360481 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.360504 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:15Z","lastTransitionTime":"2026-02-26T21:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.463910 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.463974 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.463992 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.464016 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.464033 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:15Z","lastTransitionTime":"2026-02-26T21:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.566601 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.566701 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.566720 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.566746 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.566765 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:15Z","lastTransitionTime":"2026-02-26T21:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.669446 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.669549 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.669565 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.669591 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.669611 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:15Z","lastTransitionTime":"2026-02-26T21:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.772866 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.772954 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.772978 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.773007 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.773025 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:15Z","lastTransitionTime":"2026-02-26T21:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:57:15 crc kubenswrapper[4910]: E0226 21:57:15.874205 4910 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.900748 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:15 crc kubenswrapper[4910]: E0226 21:57:15.900939 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.922153 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.956843 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.974617 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:15 crc kubenswrapper[4910]: I0226 21:57:15.990448 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.006983 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.024849 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:16 crc kubenswrapper[4910]: E0226 21:57:16.034879 4910 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.042240 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.042738 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-metrics-certs\") pod \"network-metrics-daemon-mhdkf\" (UID: \"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\") " pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:16 crc kubenswrapper[4910]: E0226 21:57:16.043028 4910 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 21:57:16 crc kubenswrapper[4910]: E0226 21:57:16.043148 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-metrics-certs podName:9bd0ab20-beab-4d8b-90d0-ef5bd1c10526 nodeName:}" failed. No retries permitted until 2026-02-26 21:57:24.043112887 +0000 UTC m=+129.122603468 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-metrics-certs") pod "network-metrics-daemon-mhdkf" (UID: "9bd0ab20-beab-4d8b-90d0-ef5bd1c10526") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.055979 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.069878 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.087673 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.097347 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.109088 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.122133 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.134435 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.151341 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.394306 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-zbq6c" event={"ID":"02ab3935-85f7-493a-b88e-205f5018e5d6","Type":"ContainerStarted","Data":"816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6"} Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 
21:57:16.409049 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.436128 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.453827 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.467050 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.484450 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.500540 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.526370 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.539234 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.553239 4910 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.564550 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.583222 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.596685 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.615156 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.629568 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.643287 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.901094 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.901346 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:16 crc kubenswrapper[4910]: I0226 21:57:16.901421 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:16 crc kubenswrapper[4910]: E0226 21:57:16.901631 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:16 crc kubenswrapper[4910]: E0226 21:57:16.901944 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:16 crc kubenswrapper[4910]: E0226 21:57:16.902054 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:17 crc kubenswrapper[4910]: I0226 21:57:17.398776 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514"} Feb 26 21:57:17 crc kubenswrapper[4910]: I0226 21:57:17.400265 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f"} Feb 26 21:57:17 crc kubenswrapper[4910]: I0226 21:57:17.415654 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:17 crc kubenswrapper[4910]: I0226 21:57:17.435362 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:17 crc kubenswrapper[4910]: I0226 21:57:17.450508 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:17 crc kubenswrapper[4910]: I0226 21:57:17.482686 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:17 crc kubenswrapper[4910]: I0226 21:57:17.499602 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:17 crc kubenswrapper[4910]: I0226 21:57:17.517267 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 
21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:17 crc kubenswrapper[4910]: I0226 21:57:17.533022 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:17 crc kubenswrapper[4910]: I0226 21:57:17.544927 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:17 crc kubenswrapper[4910]: I0226 21:57:17.557856 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:17 crc kubenswrapper[4910]: I0226 21:57:17.567129 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:17 crc kubenswrapper[4910]: I0226 21:57:17.576918 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:17 crc kubenswrapper[4910]: I0226 21:57:17.593544 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:17 crc kubenswrapper[4910]: I0226 21:57:17.603936 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 21:57:17 crc kubenswrapper[4910]: I0226 21:57:17.620012 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:17Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:17 crc 
kubenswrapper[4910]: I0226 21:57:17.640304 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:17Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:17 crc kubenswrapper[4910]: I0226 21:57:17.900612 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:17 crc kubenswrapper[4910]: E0226 21:57:17.900797 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:57:18 crc kubenswrapper[4910]: I0226 21:57:18.900632 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:18 crc kubenswrapper[4910]: I0226 21:57:18.900631 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:18 crc kubenswrapper[4910]: I0226 21:57:18.900757 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:18 crc kubenswrapper[4910]: E0226 21:57:18.900925 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:18 crc kubenswrapper[4910]: E0226 21:57:18.901112 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:18 crc kubenswrapper[4910]: E0226 21:57:18.901631 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:18 crc kubenswrapper[4910]: I0226 21:57:18.902004 4910 scope.go:117] "RemoveContainer" containerID="549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a" Feb 26 21:57:19 crc kubenswrapper[4910]: I0226 21:57:19.409686 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 21:57:19 crc kubenswrapper[4910]: I0226 21:57:19.411924 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f"} Feb 26 21:57:19 crc kubenswrapper[4910]: I0226 21:57:19.412235 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 21:57:19 crc kubenswrapper[4910]: I0226 21:57:19.433786 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:19Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:19 crc kubenswrapper[4910]: I0226 21:57:19.464772 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:19Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:19 crc kubenswrapper[4910]: I0226 21:57:19.490422 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:19Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:19 crc kubenswrapper[4910]: I0226 21:57:19.510025 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:19Z is after 2025-08-24T17:21:41Z" Feb 26 
21:57:19 crc kubenswrapper[4910]: I0226 21:57:19.527803 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:19Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:19 crc kubenswrapper[4910]: I0226 21:57:19.547243 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:19Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:19 crc kubenswrapper[4910]: I0226 21:57:19.562614 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:19Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:19 crc kubenswrapper[4910]: I0226 21:57:19.578626 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:19Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:19 crc kubenswrapper[4910]: I0226 21:57:19.593628 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:19Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:19 crc kubenswrapper[4910]: I0226 21:57:19.616941 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:19Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:19 crc kubenswrapper[4910]: I0226 21:57:19.633599 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:19Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:19 crc kubenswrapper[4910]: I0226 21:57:19.651211 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:19Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:19 crc 
kubenswrapper[4910]: I0226 21:57:19.671022 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:19Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:19 crc kubenswrapper[4910]: I0226 21:57:19.698050 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:19Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:19 crc kubenswrapper[4910]: I0226 21:57:19.709530 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:19Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:19 crc kubenswrapper[4910]: I0226 21:57:19.900744 4910 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:19 crc kubenswrapper[4910]: E0226 21:57:19.901198 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.418248 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659"} Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.421604 4910 generic.go:334] "Generic (PLEG): container finished" podID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerID="4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50" exitCode=0 Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.421704 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" event={"ID":"41cb54c7-260b-42d4-8ae9-cf2a195721be","Type":"ContainerDied","Data":"4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50"} Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.424024 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" event={"ID":"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c","Type":"ContainerStarted","Data":"b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4"} Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.438352 4910 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.469858 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.493419 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.511718 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 
21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.525722 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.539630 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.551705 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.566965 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.579049 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.594138 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.606961 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.618501 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:20 crc 
kubenswrapper[4910]: I0226 21:57:20.630712 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.646651 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.659637 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.674085 4910 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e
6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25
97126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.687978 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.700934 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.715512 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.727846 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.741020 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.751816 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.769425 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.780283 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.796072 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:20 crc 
kubenswrapper[4910]: I0226 21:57:20.814281 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.835102 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.846702 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.863749 4910 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.884835 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:20Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.902087 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:20 crc kubenswrapper[4910]: E0226 21:57:20.902227 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.902656 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:20 crc kubenswrapper[4910]: E0226 21:57:20.902745 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.902814 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:20 crc kubenswrapper[4910]: E0226 21:57:20.902918 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:20 crc kubenswrapper[4910]: I0226 21:57:20.912859 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 26 21:57:21 crc kubenswrapper[4910]: E0226 21:57:21.036247 4910 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 21:57:21 crc kubenswrapper[4910]: I0226 21:57:21.430986 4910 generic.go:334] "Generic (PLEG): container finished" podID="a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c" containerID="b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4" exitCode=0 Feb 26 21:57:21 crc kubenswrapper[4910]: I0226 21:57:21.431032 4910 generic.go:334] "Generic (PLEG): container finished" podID="a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c" containerID="597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1" exitCode=0 Feb 26 21:57:21 crc kubenswrapper[4910]: I0226 21:57:21.431033 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" event={"ID":"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c","Type":"ContainerDied","Data":"b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4"} Feb 26 21:57:21 crc kubenswrapper[4910]: I0226 21:57:21.431114 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" event={"ID":"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c","Type":"ContainerDied","Data":"597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1"} Feb 26 21:57:21 crc kubenswrapper[4910]: I0226 21:57:21.438102 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" event={"ID":"41cb54c7-260b-42d4-8ae9-cf2a195721be","Type":"ContainerStarted","Data":"492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a"} Feb 26 21:57:21 crc kubenswrapper[4910]: I0226 21:57:21.438321 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" event={"ID":"41cb54c7-260b-42d4-8ae9-cf2a195721be","Type":"ContainerStarted","Data":"c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401"} Feb 26 21:57:21 crc kubenswrapper[4910]: I0226 21:57:21.438471 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" 
event={"ID":"41cb54c7-260b-42d4-8ae9-cf2a195721be","Type":"ContainerStarted","Data":"454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c"} Feb 26 21:57:21 crc kubenswrapper[4910]: I0226 21:57:21.438620 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" event={"ID":"41cb54c7-260b-42d4-8ae9-cf2a195721be","Type":"ContainerStarted","Data":"e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2"} Feb 26 21:57:21 crc kubenswrapper[4910]: I0226 21:57:21.438800 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" event={"ID":"41cb54c7-260b-42d4-8ae9-cf2a195721be","Type":"ContainerStarted","Data":"c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad"} Feb 26 21:57:21 crc kubenswrapper[4910]: I0226 21:57:21.438933 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" event={"ID":"41cb54c7-260b-42d4-8ae9-cf2a195721be","Type":"ContainerStarted","Data":"b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca"} Feb 26 21:57:21 crc kubenswrapper[4910]: I0226 21:57:21.452874 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"n
ame\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-02-26T21:57:21Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:21 crc kubenswrapper[4910]: I0226 21:57:21.470336 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:21Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:21 crc kubenswrapper[4910]: I0226 21:57:21.486119 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:21Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:21 crc kubenswrapper[4910]: I0226 21:57:21.499917 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:21Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:21 crc kubenswrapper[4910]: I0226 21:57:21.511666 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:21Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:21 crc kubenswrapper[4910]: I0226 21:57:21.526949 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:21Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:21 crc kubenswrapper[4910]: I0226 21:57:21.545138 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:21Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:21 crc kubenswrapper[4910]: I0226 21:57:21.560013 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:21Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:21 crc kubenswrapper[4910]: I0226 21:57:21.576312 4910 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6
b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-
host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:21Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:21 crc kubenswrapper[4910]: I0226 21:57:21.597749 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:21Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:21 crc kubenswrapper[4910]: I0226 21:57:21.628651 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:21Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:21 crc kubenswrapper[4910]: I0226 21:57:21.646813 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:21Z is after 2025-08-24T17:21:41Z" Feb 26 
21:57:21 crc kubenswrapper[4910]: I0226 21:57:21.666658 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:21Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:21 crc kubenswrapper[4910]: I0226 21:57:21.687000 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:21Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:21 crc kubenswrapper[4910]: I0226 21:57:21.709773 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:21Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:21 crc kubenswrapper[4910]: I0226 21:57:21.730719 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:21Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:21 crc kubenswrapper[4910]: I0226 21:57:21.901481 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:21 crc kubenswrapper[4910]: E0226 21:57:21.902054 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.443757 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-795gt" event={"ID":"d78660ec-f27f-43be-add6-8fab38329537","Type":"ContainerStarted","Data":"3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5"} Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.445890 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" event={"ID":"50dce6a7-297f-49b9-8994-bc73b6fb33a2","Type":"ContainerStarted","Data":"3f9080180911f7a61dc6aa2c6aecf77ead390da5209d135c2eb133b0e9f95df4"} Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.445976 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" event={"ID":"50dce6a7-297f-49b9-8994-bc73b6fb33a2","Type":"ContainerStarted","Data":"928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0"} Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.448869 4910 generic.go:334] "Generic (PLEG): container finished" podID="a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c" containerID="593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b" exitCode=0 Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.448925 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" event={"ID":"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c","Type":"ContainerDied","Data":"593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b"} Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.471422 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.491297 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.504784 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc 
kubenswrapper[4910]: I0226 21:57:22.523964 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.537927 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.558141 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.577564 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.589498 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.606751 4910 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6
b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-
host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.624763 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.642512 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.655345 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 
21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.671112 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.688567 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.707592 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.715809 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.716034 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.716086 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:22 crc kubenswrapper[4910]: E0226 21:57:22.716247 4910 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 21:57:22 crc kubenswrapper[4910]: E0226 21:57:22.716327 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:57:54.716297647 +0000 UTC m=+159.795788198 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:57:22 crc kubenswrapper[4910]: E0226 21:57:22.716387 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 21:57:54.716375479 +0000 UTC m=+159.795866030 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 21:57:22 crc kubenswrapper[4910]: E0226 21:57:22.716407 4910 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 21:57:22 crc kubenswrapper[4910]: E0226 21:57:22.716488 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 21:57:54.716467102 +0000 UTC m=+159.795957683 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.729351 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.745695 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc 
kubenswrapper[4910]: I0226 21:57:22.765514 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.779518 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.792785 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 
21:57:22.803380 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.814152 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.816572 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.816640 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:22 crc kubenswrapper[4910]: E0226 21:57:22.816756 4910 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 21:57:22 crc kubenswrapper[4910]: E0226 21:57:22.816807 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 21:57:22 crc kubenswrapper[4910]: E0226 21:57:22.816821 4910 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:57:22 crc kubenswrapper[4910]: E0226 21:57:22.816879 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 21:57:54.816860201 +0000 UTC m=+159.896350752 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:57:22 crc kubenswrapper[4910]: E0226 21:57:22.816777 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 21:57:22 crc kubenswrapper[4910]: E0226 21:57:22.816918 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 21:57:22 crc kubenswrapper[4910]: E0226 21:57:22.816933 4910 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:57:22 crc kubenswrapper[4910]: E0226 21:57:22.816984 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 21:57:54.816969104 +0000 UTC m=+159.896459725 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.824724 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.836822 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9080180911f7a61dc6aa2c6aecf77ead390
da5209d135c2eb133b0e9f95df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.851108 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.864376 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.894467 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.900596 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.900678 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.900729 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:22 crc kubenswrapper[4910]: E0226 21:57:22.900780 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:22 crc kubenswrapper[4910]: E0226 21:57:22.900915 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:22 crc kubenswrapper[4910]: E0226 21:57:22.900991 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.911962 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.930339 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.945119 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kub
e-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha2
56:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.960788 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:22 crc kubenswrapper[4910]: I0226 21:57:22.974488 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:22Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.455363 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a156c2f1a9999424ad02c589efd48c3a40329c524f8d6a19578b1f367bf0e964"} Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.462851 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" event={"ID":"41cb54c7-260b-42d4-8ae9-cf2a195721be","Type":"ContainerStarted","Data":"3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19"} Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.467016 4910 generic.go:334] "Generic (PLEG): container finished" podID="a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c" containerID="e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f" exitCode=0 Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.467088 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" event={"ID":"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c","Type":"ContainerDied","Data":"e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f"} Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.469920 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.472835 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerStarted","Data":"b77ce2f229a2f211483de5951d54a264f42c151c94f4d868107cb052402ba905"} Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.472910 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" 
event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerStarted","Data":"22d075543a397b11a63e25912605cb14bee4deda66939088572c64d019de782b"} Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.490663 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.517049 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.535286 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e9
6f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.550359 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc 
kubenswrapper[4910]: I0226 21:57:23.572634 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.592040 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.606830 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9080180911f7a61dc6aa2c6aecf77ead390
da5209d135c2eb133b0e9f95df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.623570 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.640486 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.663867 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.677656 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.689370 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 
21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.703595 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a156c2f1a9999424ad02c589efd48c3a40329c524f8d6a19578b1f367bf0e964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.718022 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\
\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.730245 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.741902 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.752782 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.772358 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.785908 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77ce2f229a2f211483de5951d54a264f42c151c94f4d868107cb052402ba905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22d075543a397b11a63e25912605cb14bee4deda
66939088572c64d019de782b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.800088 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 
21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.815999 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.830978 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a156c2f1a9999424ad02c589efd48c3a40329c524f8d6a19578b1f367bf0e964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.846753 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.861706 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.876585 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.892310 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.901318 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:23 crc kubenswrapper[4910]: E0226 21:57:23.901572 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.904034 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.914092 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc 
kubenswrapper[4910]: I0226 21:57:23.925521 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.938314 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:23 crc kubenswrapper[4910]: I0226 21:57:23.951805 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9080180911f7a61dc6aa2c6aecf77ead390
da5209d135c2eb133b0e9f95df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:23Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.130088 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-metrics-certs\") pod \"network-metrics-daemon-mhdkf\" (UID: \"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\") " pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:24 crc kubenswrapper[4910]: E0226 21:57:24.130254 
4910 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 21:57:24 crc kubenswrapper[4910]: E0226 21:57:24.130330 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-metrics-certs podName:9bd0ab20-beab-4d8b-90d0-ef5bd1c10526 nodeName:}" failed. No retries permitted until 2026-02-26 21:57:40.130313465 +0000 UTC m=+145.209804016 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-metrics-certs") pod "network-metrics-daemon-mhdkf" (UID: "9bd0ab20-beab-4d8b-90d0-ef5bd1c10526") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.482612 4910 generic.go:334] "Generic (PLEG): container finished" podID="a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c" containerID="47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e" exitCode=0 Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.482654 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" event={"ID":"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c","Type":"ContainerDied","Data":"47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e"} Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.509731 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 
21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:24Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.535756 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:24Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.552773 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a156c2f1a9999424ad02c589efd48c3a40329c524f8d6a19578b1f367bf0e964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:57:24Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.569762 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:57:24Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.583466 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77ce2f229a2f211483de5951d54a264f42c151c94f4d868107cb052402ba905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22d075543a397b11a63e25912605cb14bee4deda66939088572c64d019de782b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:24Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.595126 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:24Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.604926 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:24Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.619963 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 
21:57:24.620010 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.620020 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.620037 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.620049 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:24Z","lastTransitionTime":"2026-02-26T21:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.620640 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:24Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.631573 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:24Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:24 crc kubenswrapper[4910]: E0226 21:57:24.635535 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:24Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.639056 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.639099 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.639112 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.639132 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.639145 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:24Z","lastTransitionTime":"2026-02-26T21:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.644135 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:24Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:24 crc 
kubenswrapper[4910]: E0226 21:57:24.654328 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:24Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.656555 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\
\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:24Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.658815 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.658842 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.658851 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.658865 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.658876 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:24Z","lastTransitionTime":"2026-02-26T21:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.668595 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:24Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:24 crc kubenswrapper[4910]: E0226 21:57:24.672071 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:24Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.675311 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.675350 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.675362 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.675378 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.675390 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:24Z","lastTransitionTime":"2026-02-26T21:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.681405 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9080180911f7a61dc6aa2c6aecf77ead390da5209d135c2eb133b0e9f95df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:24Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:24 crc kubenswrapper[4910]: E0226 21:57:24.686673 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:24Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.690738 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.690772 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.690782 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.690797 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.690808 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:24Z","lastTransitionTime":"2026-02-26T21:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.693122 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4b
fef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:24Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:24 crc kubenswrapper[4910]: E0226 21:57:24.704664 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:24Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:24 crc kubenswrapper[4910]: E0226 21:57:24.704870 4910 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.708244 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:24Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.727633 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:24Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.900484 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:24 crc kubenswrapper[4910]: E0226 21:57:24.900685 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.900738 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:24 crc kubenswrapper[4910]: I0226 21:57:24.900769 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:24 crc kubenswrapper[4910]: E0226 21:57:24.900850 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:24 crc kubenswrapper[4910]: E0226 21:57:24.900955 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.490989 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-m5cf2" event={"ID":"5680be55-6cf7-4a72-a5b8-4b49efe4a020","Type":"ContainerStarted","Data":"62690a5e9fe2ce5d23ac823646261163a1c898472ebd1c7f139144ac39ce4e85"} Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.498088 4910 generic.go:334] "Generic (PLEG): container finished" podID="a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c" containerID="a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd" exitCode=0 Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.498139 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" event={"ID":"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c","Type":"ContainerDied","Data":"a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd"} Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.516673 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.540689 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.559781 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9080180911f7a61dc6aa2c6aecf77ead390
da5209d135c2eb133b0e9f95df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.581817 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.604973 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.630747 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.648083 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77ce2f229a2f211483de5951d54a264f42c151c94f4d868107cb052402ba905\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22d075543a397b11a63e25912605cb14bee4deda66939088572c64d019de782b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.666606 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.681151 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 
21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.696920 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a156c2f1a9999424ad02c589efd48c3a40329c524f8d6a19578b1f367bf0e964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.716716 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\
\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.731899 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.743311 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62690a5e9fe2ce5d23ac823646261163a1c898472ebd1c7f139144ac39ce4e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.762770 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.774142 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.784734 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:25 crc 
kubenswrapper[4910]: I0226 21:57:25.797852 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.808467 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.819950 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9080180911f7a61dc6aa2c6aecf77ead390
da5209d135c2eb133b0e9f95df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.829694 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.842299 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.865708 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.881178 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.893539 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 
21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.901201 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:25 crc kubenswrapper[4910]: E0226 21:57:25.901500 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.909648 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a156c2f1a9999424ad02c589efd48c3a40329c524f8d6a19578b1f367bf0e964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.928380 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.947561 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77ce2f229a2f211483de5951d54a264f42c151c94f4d868107cb052402ba905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22d075543a39
7b11a63e25912605cb14bee4deda66939088572c64d019de782b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.967858 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.979844 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62690a5e9fe2ce5d23ac823646261163a1c898472ebd1c7f139144ac39ce4e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:25 crc kubenswrapper[4910]: I0226 21:57:25.997994 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.011531 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.025659 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc 
kubenswrapper[4910]: E0226 21:57:26.037219 4910 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.043213 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\
\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.056588 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77ce2f229a2f211483de5951d54a264f42c151c94f4d868107cb052402ba905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22d075543a397b11a63e25912605cb14bee4deda66939088572c64d019de782b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc 
kubenswrapper[4910]: I0226 21:57:26.072293 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.087532 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 
21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.102376 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a156c2f1a9999424ad02c589efd48c3a40329c524f8d6a19578b1f367bf0e964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.141356 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc 
kubenswrapper[4910]: I0226 21:57:26.162916 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.181611 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62690a5e9fe2ce5d23ac823646261163a1c898472ebd1c7f139144ac39ce4e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.199589 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.208816 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.221313 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.232640 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.243204 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9080180911f7a61dc6aa2c6aecf77ead390
da5209d135c2eb133b0e9f95df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.255746 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.267934 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.283847 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.509449 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" event={"ID":"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c","Type":"ContainerStarted","Data":"993862b5f932440fc94110cb4a30c95bef39e1a4f56cef640f2c66a60238e9a6"} Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.519073 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" event={"ID":"41cb54c7-260b-42d4-8ae9-cf2a195721be","Type":"ContainerStarted","Data":"e83cebd594e10aea432fade0417f781fd888b5874ba9be8d401c39280293afa5"} Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.519436 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.532904 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc 
kubenswrapper[4910]: I0226 21:57:26.551798 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.554438 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.573203 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62690a5e9fe2ce5d23ac823646261163a1c898472ebd1c7f139144ac39ce4e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.595753 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://993862b5f932440fc94110cb4a30c95bef39e1a4f56cef640f2c66a60238e9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1
412f896f94d94b30caf63d639748b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-26T21:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.610814 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.637078 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.657483 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.667541 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9080180911f7a61dc6aa2c6aecf77ead390
da5209d135c2eb133b0e9f95df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.677701 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.688380 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.706698 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.721283 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.733346 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77ce2f229a2f211483de5951d54a264f42c151c94f4d868107cb052402ba905\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22d075543a397b11a63e25912605cb14bee4deda66939088572c64d019de782b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.750384 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.765988 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 
21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.780755 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a156c2f1a9999424ad02c589efd48c3a40329c524f8d6a19578b1f367bf0e964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.794739 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert
\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.806090 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.817062 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9080180911f7a61dc6aa2c6aecf77ead390
da5209d135c2eb133b0e9f95df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.827424 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.839767 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.863812 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83cebd594e10aea432fade0417f781fd888b5874ba9be8d401c39280293afa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.876344 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a156c2f1a9999424ad02c589efd48c3a40329c524f8d6a19578b1f367bf0e964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.891486 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.901194 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:26 crc kubenswrapper[4910]: E0226 21:57:26.901376 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.901203 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:26 crc kubenswrapper[4910]: E0226 21:57:26.901580 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.901199 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:26 crc kubenswrapper[4910]: E0226 21:57:26.901746 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.905123 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77ce2f229a2f211483de5951d54a264f42c151c94f4d868107cb052402ba905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22d075543a397b11a63e25912605cb14bee4deda66939088572c64d019de782b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.919009 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 
21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.932992 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.942066 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.955274 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc 
kubenswrapper[4910]: I0226 21:57:26.968132 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.978463 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62690a5e9fe2ce5d23ac823646261163a1c898472ebd1c7f139144ac39ce4e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:26 crc kubenswrapper[4910]: I0226 21:57:26.995853 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://993862b5f932440fc94110cb4a30c95bef39e1a4f56cef640f2c66a60238e9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1
412f896f94d94b30caf63d639748b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-26T21:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:27 crc kubenswrapper[4910]: I0226 21:57:27.522964 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:57:27 crc kubenswrapper[4910]: I0226 21:57:27.523594 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:57:27 crc kubenswrapper[4910]: I0226 21:57:27.558448 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:57:27 crc kubenswrapper[4910]: I0226 21:57:27.579211 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:27Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:27 crc kubenswrapper[4910]: I0226 21:57:27.603514 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:27Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:27 crc kubenswrapper[4910]: I0226 21:57:27.635412 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83cebd594e10aea432fade0417f781fd888b5874ba9be8d401c39280293afa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:27Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:27 crc kubenswrapper[4910]: I0226 21:57:27.655374 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77ce2f229a2f211483de5951d54a264f42c151c94f4d868107cb052402ba905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22d075543a397b11a63e25912605cb14bee4deda66939088572c64d019de782b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:27Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:27 crc kubenswrapper[4910]: I0226 21:57:27.689251 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:27Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:27 crc kubenswrapper[4910]: I0226 21:57:27.705690 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:27Z is after 2025-08-24T17:21:41Z" Feb 26 
21:57:27 crc kubenswrapper[4910]: I0226 21:57:27.726731 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a156c2f1a9999424ad02c589efd48c3a40329c524f8d6a19578b1f367bf0e964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:27Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:27 crc kubenswrapper[4910]: I0226 21:57:27.752697 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\
\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:27Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:27 crc kubenswrapper[4910]: I0226 21:57:27.776292 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:27Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:27 crc kubenswrapper[4910]: I0226 21:57:27.787131 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62690a5e9fe2ce5d23ac823646261163a1c898472ebd1c7f139144ac39ce4e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:27Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:27 crc kubenswrapper[4910]: I0226 21:57:27.801093 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://993862b5f932440fc94110cb4a30c95bef39e1a4f56cef640f2c66a60238e9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1
412f896f94d94b30caf63d639748b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-26T21:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:27Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:27 crc kubenswrapper[4910]: I0226 21:57:27.817717 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:27Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:27 crc kubenswrapper[4910]: I0226 21:57:27.827692 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:27Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:27 crc 
kubenswrapper[4910]: I0226 21:57:27.838609 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:27Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:27 crc kubenswrapper[4910]: I0226 21:57:27.849075 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:27Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:27 crc kubenswrapper[4910]: I0226 21:57:27.859441 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9080180911f7a61dc6aa2c6aecf77ead390
da5209d135c2eb133b0e9f95df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:27Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:27 crc kubenswrapper[4910]: I0226 21:57:27.901400 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:27 crc kubenswrapper[4910]: E0226 21:57:27.901558 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:57:28 crc kubenswrapper[4910]: I0226 21:57:28.900966 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:28 crc kubenswrapper[4910]: I0226 21:57:28.901026 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:28 crc kubenswrapper[4910]: I0226 21:57:28.901134 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:28 crc kubenswrapper[4910]: E0226 21:57:28.901129 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:28 crc kubenswrapper[4910]: E0226 21:57:28.901288 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:28 crc kubenswrapper[4910]: E0226 21:57:28.901440 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:29 crc kubenswrapper[4910]: I0226 21:57:29.532506 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrq4q_41cb54c7-260b-42d4-8ae9-cf2a195721be/ovnkube-controller/0.log" Feb 26 21:57:29 crc kubenswrapper[4910]: I0226 21:57:29.536318 4910 generic.go:334] "Generic (PLEG): container finished" podID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerID="e83cebd594e10aea432fade0417f781fd888b5874ba9be8d401c39280293afa5" exitCode=1 Feb 26 21:57:29 crc kubenswrapper[4910]: I0226 21:57:29.536548 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" event={"ID":"41cb54c7-260b-42d4-8ae9-cf2a195721be","Type":"ContainerDied","Data":"e83cebd594e10aea432fade0417f781fd888b5874ba9be8d401c39280293afa5"} Feb 26 21:57:29 crc kubenswrapper[4910]: I0226 21:57:29.537592 4910 scope.go:117] "RemoveContainer" containerID="e83cebd594e10aea432fade0417f781fd888b5874ba9be8d401c39280293afa5" Feb 26 21:57:29 crc kubenswrapper[4910]: I0226 21:57:29.560112 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:29Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:29 crc kubenswrapper[4910]: I0226 21:57:29.581754 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:29Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:29 crc kubenswrapper[4910]: I0226 21:57:29.601444 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9080180911f7a61dc6aa2c6aecf77ead390
da5209d135c2eb133b0e9f95df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:29Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:29 crc kubenswrapper[4910]: I0226 21:57:29.621141 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:29Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:29 crc kubenswrapper[4910]: I0226 21:57:29.641800 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:29Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:29 crc kubenswrapper[4910]: I0226 21:57:29.671138 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83cebd594e10aea432fade0417f781fd888b5874ba9be8d401c39280293afa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83cebd594e10aea432fade0417f781fd888b5874ba9be8d401c39280293afa5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:57:29Z\\\",\\\"message\\\":\\\" removal\\\\nI0226 21:57:28.970440 6836 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 21:57:28.970459 6836 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 
21:57:28.970453 6836 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0226 21:57:28.970489 6836 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0226 21:57:28.970462 6836 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 21:57:28.970462 6836 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0226 21:57:28.970510 6836 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0226 21:57:28.970514 6836 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 21:57:28.970494 6836 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0226 21:57:28.970567 6836 handler.go:208] Removed *v1.Node event handler 7\\\\nI0226 21:57:28.970588 6836 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0226 21:57:28.970604 6836 handler.go:208] Removed *v1.Node event handler 2\\\\nI0226 21:57:28.970612 6836 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0226 21:57:28.970660 6836 factory.go:656] Stopping watch factory\\\\nI0226 21:57:28.970665 6836 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 21:57:28.970689 6836 ovnkube.go:599] Stopped 
ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf2
76243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:29Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:29 crc kubenswrapper[4910]: I0226 21:57:29.692799 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:29Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:29 crc kubenswrapper[4910]: I0226 21:57:29.710235 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a156c2f1a9999424ad02c589efd48c3a40329c524f8d6a19578b1f367bf0e964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:57:29Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:29 crc kubenswrapper[4910]: I0226 21:57:29.730250 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:57:29Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:29 crc kubenswrapper[4910]: I0226 21:57:29.747230 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77ce2f229a2f211483de5951d54a264f42c151c94f4d868107cb052402ba905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22d075543a397b11a63e25912605cb14bee4deda66939088572c64d019de782b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:29Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:29 crc kubenswrapper[4910]: I0226 21:57:29.770649 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 
21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:29Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:29 crc kubenswrapper[4910]: I0226 21:57:29.796740 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://993862b5f932440fc94110cb4a30c95bef39e1a4f56cef640f2c66a60238e9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e8b
daeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:29Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:29 crc kubenswrapper[4910]: I0226 21:57:29.814781 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:29Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:29 crc kubenswrapper[4910]: I0226 21:57:29.829857 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:29Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:29 crc kubenswrapper[4910]: I0226 21:57:29.847911 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:29Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:29 crc kubenswrapper[4910]: I0226 21:57:29.859966 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62690a5e9fe2ce5d23ac823646261163a1c898472ebd1c7f139144ac39ce4e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:29Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:29 crc kubenswrapper[4910]: I0226 21:57:29.900604 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:29 crc kubenswrapper[4910]: E0226 21:57:29.900765 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:57:30 crc kubenswrapper[4910]: I0226 21:57:30.542409 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrq4q_41cb54c7-260b-42d4-8ae9-cf2a195721be/ovnkube-controller/0.log" Feb 26 21:57:30 crc kubenswrapper[4910]: I0226 21:57:30.546021 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" event={"ID":"41cb54c7-260b-42d4-8ae9-cf2a195721be","Type":"ContainerStarted","Data":"eb5c159d14e4c02da80e08a18a0206ffbea665925ac000fac1026505cf74df1b"} Feb 26 21:57:30 crc kubenswrapper[4910]: I0226 21:57:30.546787 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:57:30 crc kubenswrapper[4910]: I0226 21:57:30.561460 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77ce2f229a2f211483de5951d54a264f42c151c94f4d868107cb052402ba905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22d075543a397b11a63e25912605cb14bee4deda
66939088572c64d019de782b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:30Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:30 crc kubenswrapper[4910]: I0226 21:57:30.582310 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 
21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:30Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:30 crc kubenswrapper[4910]: I0226 21:57:30.598879 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:30Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:30 crc kubenswrapper[4910]: I0226 21:57:30.613280 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a156c2f1a9999424ad02c589efd48c3a40329c524f8d6a19578b1f367bf0e964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:57:30Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:30 crc kubenswrapper[4910]: I0226 21:57:30.635351 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:57:30Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:30 crc kubenswrapper[4910]: I0226 21:57:30.654110 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:30Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:30 crc kubenswrapper[4910]: I0226 21:57:30.667583 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62690a5e9fe2ce5d23ac823646261163a1c898472ebd1c7f139144ac39ce4e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:30Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:30 crc kubenswrapper[4910]: I0226 21:57:30.684028 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://993862b5f932440fc94110cb4a30c95bef39e1a4f56cef640f2c66a60238e9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1
412f896f94d94b30caf63d639748b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-26T21:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:30Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:30 crc kubenswrapper[4910]: I0226 21:57:30.696810 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:30Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:30 crc kubenswrapper[4910]: I0226 21:57:30.712074 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:30Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:30 crc 
kubenswrapper[4910]: I0226 21:57:30.729559 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:30Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:30 crc kubenswrapper[4910]: I0226 21:57:30.742232 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:30Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:30 crc kubenswrapper[4910]: I0226 21:57:30.753206 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9080180911f7a61dc6aa2c6aecf77ead390
da5209d135c2eb133b0e9f95df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:30Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:30 crc kubenswrapper[4910]: I0226 21:57:30.766339 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:30Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:30 crc kubenswrapper[4910]: I0226 21:57:30.779730 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:30Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:30 crc kubenswrapper[4910]: I0226 21:57:30.797551 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb5c159d14e4c02da80e08a18a0206ffbea665925ac000fac1026505cf74df1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83cebd594e10aea432fade0417f781fd888b5874ba9be8d401c39280293afa5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:57:29Z\\\",\\\"message\\\":\\\" removal\\\\nI0226 21:57:28.970440 6836 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 21:57:28.970459 6836 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 21:57:28.970453 6836 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0226 21:57:28.970489 6836 handler.go:190] 
Sending *v1.Node event handler 2 for removal\\\\nI0226 21:57:28.970462 6836 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 21:57:28.970462 6836 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0226 21:57:28.970510 6836 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0226 21:57:28.970514 6836 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 21:57:28.970494 6836 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0226 21:57:28.970567 6836 handler.go:208] Removed *v1.Node event handler 7\\\\nI0226 21:57:28.970588 6836 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0226 21:57:28.970604 6836 handler.go:208] Removed *v1.Node event handler 2\\\\nI0226 21:57:28.970612 6836 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0226 21:57:28.970660 6836 factory.go:656] Stopping watch factory\\\\nI0226 21:57:28.970665 6836 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 21:57:28.970689 6836 ovnkube.go:599] Stopped 
ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\"
,\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:30Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:30 crc kubenswrapper[4910]: I0226 21:57:30.900510 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:30 crc kubenswrapper[4910]: I0226 21:57:30.900546 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:30 crc kubenswrapper[4910]: I0226 21:57:30.900600 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:30 crc kubenswrapper[4910]: E0226 21:57:30.900665 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:30 crc kubenswrapper[4910]: E0226 21:57:30.900834 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:30 crc kubenswrapper[4910]: E0226 21:57:30.900904 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:31 crc kubenswrapper[4910]: E0226 21:57:31.038019 4910 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 21:57:31 crc kubenswrapper[4910]: I0226 21:57:31.553516 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrq4q_41cb54c7-260b-42d4-8ae9-cf2a195721be/ovnkube-controller/1.log" Feb 26 21:57:31 crc kubenswrapper[4910]: I0226 21:57:31.554599 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrq4q_41cb54c7-260b-42d4-8ae9-cf2a195721be/ovnkube-controller/0.log" Feb 26 21:57:31 crc kubenswrapper[4910]: I0226 21:57:31.559770 4910 generic.go:334] "Generic (PLEG): container finished" podID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerID="eb5c159d14e4c02da80e08a18a0206ffbea665925ac000fac1026505cf74df1b" exitCode=1 Feb 26 21:57:31 crc kubenswrapper[4910]: I0226 21:57:31.559831 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" event={"ID":"41cb54c7-260b-42d4-8ae9-cf2a195721be","Type":"ContainerDied","Data":"eb5c159d14e4c02da80e08a18a0206ffbea665925ac000fac1026505cf74df1b"} Feb 26 21:57:31 crc kubenswrapper[4910]: I0226 21:57:31.559897 4910 scope.go:117] "RemoveContainer" containerID="e83cebd594e10aea432fade0417f781fd888b5874ba9be8d401c39280293afa5" Feb 26 21:57:31 crc kubenswrapper[4910]: I0226 21:57:31.560934 4910 scope.go:117] "RemoveContainer" containerID="eb5c159d14e4c02da80e08a18a0206ffbea665925ac000fac1026505cf74df1b" Feb 26 21:57:31 crc kubenswrapper[4910]: E0226 21:57:31.561333 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-xrq4q_openshift-ovn-kubernetes(41cb54c7-260b-42d4-8ae9-cf2a195721be)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" Feb 26 21:57:31 crc kubenswrapper[4910]: I0226 21:57:31.594694 4910 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb5c159d14e4c02da80e08a18a0206ffbea665925ac000fac1026505cf74df1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83cebd594e10aea432fade0417f781fd888b5874ba9be8d401c39280293afa5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:57:29Z\\\",\\\"message\\\":\\\" removal\\\\nI0226 21:57:28.970440 6836 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 21:57:28.970459 6836 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 21:57:28.970453 6836 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0226 21:57:28.970489 6836 handler.go:190] 
Sending *v1.Node event handler 2 for removal\\\\nI0226 21:57:28.970462 6836 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 21:57:28.970462 6836 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0226 21:57:28.970510 6836 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0226 21:57:28.970514 6836 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 21:57:28.970494 6836 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0226 21:57:28.970567 6836 handler.go:208] Removed *v1.Node event handler 7\\\\nI0226 21:57:28.970588 6836 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0226 21:57:28.970604 6836 handler.go:208] Removed *v1.Node event handler 2\\\\nI0226 21:57:28.970612 6836 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0226 21:57:28.970660 6836 factory.go:656] Stopping watch factory\\\\nI0226 21:57:28.970665 6836 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 21:57:28.970689 6836 ovnkube.go:599] Stopped ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb5c159d14e4c02da80e08a18a0206ffbea665925ac000fac1026505cf74df1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:57:30Z\\\",\\\"message\\\":\\\") from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 21:57:30.716344 6997 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0226 21:57:30.716353 6997 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0226 21:57:30.716386 6997 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0226 21:57:30.716557 6997 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 21:57:30.716568 6997 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0226 21:57:30.716713 6997 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 21:57:30.717226 6997 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 21:57:30.717720 6997 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0226 21:57:30.717742 6997 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0226 21:57:30.717782 6997 handler.go:208] Removed *v1.Node event handler 2\\\\nI0226 21:57:30.717826 6997 factory.go:656] Stopping watch factory\\\\nI0226 21:57:30.717848 6997 ovnkube.go:599] Stopped ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"
/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:31Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:31 crc kubenswrapper[4910]: I0226 21:57:31.614238 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:31Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:31 crc kubenswrapper[4910]: I0226 21:57:31.635990 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:31Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:31 crc kubenswrapper[4910]: I0226 21:57:31.653075 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 
21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:31Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:31 crc kubenswrapper[4910]: I0226 21:57:31.671150 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:31Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:31 crc kubenswrapper[4910]: I0226 21:57:31.690130 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a156c2f1a9999424ad02c589efd48c3a40329c524f8d6a19578b1f367bf0e964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:57:31Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:31 crc kubenswrapper[4910]: I0226 21:57:31.697111 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 21:57:31 crc kubenswrapper[4910]: I0226 21:57:31.712059 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:31Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:31 crc kubenswrapper[4910]: I0226 21:57:31.729660 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77ce2f229a2f211483de5951d54a264f42c151c94f4d868107cb052402ba905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22d075543a397b11a63e25912605cb14bee4deda66939088572c64d019de782b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:31Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:31 crc kubenswrapper[4910]: I0226 
21:57:31.746044 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62690a5e9fe2ce5d23ac823646261163a1c898472ebd1c7f139144ac39ce4e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:31Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:31 crc kubenswrapper[4910]: I0226 21:57:31.770976 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://993862b5f932440fc94110cb4a30c95bef39e1a4f56cef640f2c66a60238e9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0d
d97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc851
4e4a4da57626adaeae98d6dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:31Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:31 crc kubenswrapper[4910]: I0226 21:57:31.786488 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:31Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:31 crc kubenswrapper[4910]: I0226 21:57:31.801667 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:31Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:31 crc 
kubenswrapper[4910]: I0226 21:57:31.820915 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:31Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:31 crc kubenswrapper[4910]: I0226 21:57:31.838504 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9080180911f7a61dc6aa2c6aecf77ead390
da5209d135c2eb133b0e9f95df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:31Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:31 crc kubenswrapper[4910]: I0226 21:57:31.857482 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:31Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:31 crc kubenswrapper[4910]: I0226 21:57:31.876724 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:31Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:31 crc kubenswrapper[4910]: I0226 21:57:31.896959 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:31Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:31 crc kubenswrapper[4910]: I0226 21:57:31.900698 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:31 crc kubenswrapper[4910]: E0226 21:57:31.900887 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:57:31 crc kubenswrapper[4910]: I0226 21:57:31.934391 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb5c159d14e4c02da80e08a18a0206ffbea665925ac000fac1026505cf74df1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e83cebd594e10aea432fade0417f781fd888b5874ba9be8d401c39280293afa5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:57:29Z\\\",\\\"message\\\":\\\" removal\\\\nI0226 21:57:28.970440 6836 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 21:57:28.970459 6836 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 21:57:28.970453 6836 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0226 21:57:28.970489 6836 handler.go:190] 
Sending *v1.Node event handler 2 for removal\\\\nI0226 21:57:28.970462 6836 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 21:57:28.970462 6836 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0226 21:57:28.970510 6836 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0226 21:57:28.970514 6836 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 21:57:28.970494 6836 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0226 21:57:28.970567 6836 handler.go:208] Removed *v1.Node event handler 7\\\\nI0226 21:57:28.970588 6836 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0226 21:57:28.970604 6836 handler.go:208] Removed *v1.Node event handler 2\\\\nI0226 21:57:28.970612 6836 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0226 21:57:28.970660 6836 factory.go:656] Stopping watch factory\\\\nI0226 21:57:28.970665 6836 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 21:57:28.970689 6836 ovnkube.go:599] Stopped ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb5c159d14e4c02da80e08a18a0206ffbea665925ac000fac1026505cf74df1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:57:30Z\\\",\\\"message\\\":\\\") from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 21:57:30.716344 6997 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0226 21:57:30.716353 6997 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0226 21:57:30.716386 6997 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0226 21:57:30.716557 6997 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 21:57:30.716568 6997 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0226 21:57:30.716713 6997 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 21:57:30.717226 6997 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 21:57:30.717720 6997 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0226 21:57:30.717742 6997 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0226 21:57:30.717782 6997 handler.go:208] Removed *v1.Node event handler 2\\\\nI0226 21:57:30.717826 6997 factory.go:656] Stopping watch factory\\\\nI0226 21:57:30.717848 6997 ovnkube.go:599] Stopped ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"
/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:31Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:31 crc kubenswrapper[4910]: I0226 21:57:31.959054 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:31Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:31 crc kubenswrapper[4910]: I0226 21:57:31.985534 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28
a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:31Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.008736 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:32Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.026609 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a156c2f1a9999424ad02c589efd48c3a40329c524f8d6a19578b1f367bf0e964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:57:32Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.046950 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:57:32Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.062559 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77ce2f229a2f211483de5951d54a264f42c151c94f4d868107cb052402ba905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22d075543a397b11a63e25912605cb14bee4deda66939088572c64d019de782b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:32Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.080224 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:32Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.095389 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62690a5e9fe2ce5d23ac823646261163a1c898472ebd1c7f139144ac39ce4e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:32Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.118286 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://993862b5f932440fc94110cb4a30c95bef39e1a4f56cef640f2c66a60238e9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1
412f896f94d94b30caf63d639748b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-26T21:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:32Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.134070 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:32Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.149692 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:32Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:32 crc 
kubenswrapper[4910]: I0226 21:57:32.164997 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:32Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.180708 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9080180911f7a61dc6aa2c6aecf77ead390
da5209d135c2eb133b0e9f95df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:32Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.201525 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:32Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.566634 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrq4q_41cb54c7-260b-42d4-8ae9-cf2a195721be/ovnkube-controller/1.log" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.572469 4910 scope.go:117] "RemoveContainer" containerID="eb5c159d14e4c02da80e08a18a0206ffbea665925ac000fac1026505cf74df1b" Feb 26 21:57:32 crc kubenswrapper[4910]: E0226 21:57:32.572730 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-xrq4q_openshift-ovn-kubernetes(41cb54c7-260b-42d4-8ae9-cf2a195721be)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.592316 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:32Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.627075 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb5c159d14e4c02da80e08a18a0206ffbea665925ac000fac1026505cf74df1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb5c159d14e4c02da80e08a18a0206ffbea665925ac000fac1026505cf74df1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:57:30Z\\\",\\\"message\\\":\\\") from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 21:57:30.716344 6997 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0226 21:57:30.716353 6997 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0226 21:57:30.716386 6997 
handler.go:208] Removed *v1.Pod event handler 6\\\\nI0226 21:57:30.716557 6997 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 21:57:30.716568 6997 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 21:57:30.716713 6997 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 21:57:30.717226 6997 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 21:57:30.717720 6997 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0226 21:57:30.717742 6997 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0226 21:57:30.717782 6997 handler.go:208] Removed *v1.Node event handler 2\\\\nI0226 21:57:30.717826 6997 factory.go:656] Stopping watch factory\\\\nI0226 21:57:30.717848 6997 ovnkube.go:599] Stopped ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrq4q_openshift-ovn-kubernetes(41cb54c7-260b-42d4-8ae9-cf2a195721be)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0
b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:32Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.641506 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:32Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.662865 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28
a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:32Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.679372 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:32Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.694579 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a156c2f1a9999424ad02c589efd48c3a40329c524f8d6a19578b1f367bf0e964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:57:32Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.707445 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:57:32Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.719466 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77ce2f229a2f211483de5951d54a264f42c151c94f4d868107cb052402ba905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22d075543a397b11a63e25912605cb14bee4deda66939088572c64d019de782b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:32Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.733031 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:32Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.749153 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62690a5e9fe2ce5d23ac823646261163a1c898472ebd1c7f139144ac39ce4e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:32Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.774432 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://993862b5f932440fc94110cb4a30c95bef39e1a4f56cef640f2c66a60238e9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1
412f896f94d94b30caf63d639748b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-26T21:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:32Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.789742 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:32Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.805389 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:32Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:32 crc 
kubenswrapper[4910]: I0226 21:57:32.825710 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:32Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.842810 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9080180911f7a61dc6aa2c6aecf77ead390
da5209d135c2eb133b0e9f95df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:32Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.859784 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:32Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.900568 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.900636 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:32 crc kubenswrapper[4910]: I0226 21:57:32.900748 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:32 crc kubenswrapper[4910]: E0226 21:57:32.900890 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:32 crc kubenswrapper[4910]: E0226 21:57:32.901019 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:32 crc kubenswrapper[4910]: E0226 21:57:32.901219 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:33 crc kubenswrapper[4910]: I0226 21:57:33.901093 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:33 crc kubenswrapper[4910]: E0226 21:57:33.901332 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:57:34 crc kubenswrapper[4910]: I0226 21:57:34.844533 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:34 crc kubenswrapper[4910]: I0226 21:57:34.844598 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:34 crc kubenswrapper[4910]: I0226 21:57:34.844620 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:34 crc kubenswrapper[4910]: I0226 21:57:34.844645 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:34 crc kubenswrapper[4910]: I0226 21:57:34.844664 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:34Z","lastTransitionTime":"2026-02-26T21:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:34 crc kubenswrapper[4910]: E0226 21:57:34.865798 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:34Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:34 crc kubenswrapper[4910]: I0226 21:57:34.870477 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:34 crc kubenswrapper[4910]: I0226 21:57:34.870518 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:34 crc kubenswrapper[4910]: I0226 21:57:34.870535 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:34 crc kubenswrapper[4910]: I0226 21:57:34.870554 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:34 crc kubenswrapper[4910]: I0226 21:57:34.870566 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:34Z","lastTransitionTime":"2026-02-26T21:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:34 crc kubenswrapper[4910]: E0226 21:57:34.887729 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:34Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:34 crc kubenswrapper[4910]: I0226 21:57:34.891604 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:34 crc kubenswrapper[4910]: I0226 21:57:34.891645 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:34 crc kubenswrapper[4910]: I0226 21:57:34.891658 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:34 crc kubenswrapper[4910]: I0226 21:57:34.891679 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:34 crc kubenswrapper[4910]: I0226 21:57:34.891692 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:34Z","lastTransitionTime":"2026-02-26T21:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:57:34 crc kubenswrapper[4910]: I0226 21:57:34.900439 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:34 crc kubenswrapper[4910]: I0226 21:57:34.900509 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:34 crc kubenswrapper[4910]: E0226 21:57:34.900553 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:34 crc kubenswrapper[4910]: I0226 21:57:34.900443 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:34 crc kubenswrapper[4910]: E0226 21:57:34.900884 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:34 crc kubenswrapper[4910]: E0226 21:57:34.900974 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:34 crc kubenswrapper[4910]: E0226 21:57:34.903376 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:34Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:34 crc kubenswrapper[4910]: I0226 21:57:34.907009 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:34 crc kubenswrapper[4910]: I0226 21:57:34.907044 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:34 crc kubenswrapper[4910]: I0226 21:57:34.907052 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:34 crc kubenswrapper[4910]: I0226 21:57:34.907069 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:34 crc kubenswrapper[4910]: I0226 21:57:34.907099 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:34Z","lastTransitionTime":"2026-02-26T21:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:34 crc kubenswrapper[4910]: E0226 21:57:34.924192 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:34Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:34 crc kubenswrapper[4910]: I0226 21:57:34.927969 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:34 crc kubenswrapper[4910]: I0226 21:57:34.928000 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:34 crc kubenswrapper[4910]: I0226 21:57:34.928009 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:34 crc kubenswrapper[4910]: I0226 21:57:34.928022 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:34 crc kubenswrapper[4910]: I0226 21:57:34.928031 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:34Z","lastTransitionTime":"2026-02-26T21:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:34 crc kubenswrapper[4910]: E0226 21:57:34.938614 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:34Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:34 crc kubenswrapper[4910]: E0226 21:57:34.938729 4910 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 21:57:35 crc kubenswrapper[4910]: I0226 21:57:35.900865 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:35 crc kubenswrapper[4910]: E0226 21:57:35.901075 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:57:35 crc kubenswrapper[4910]: I0226 21:57:35.927007 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:35Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:35 crc kubenswrapper[4910]: I0226 21:57:35.948887 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:35Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:35 crc kubenswrapper[4910]: I0226 21:57:35.969086 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9080180911f7a61dc6aa2c6aecf77ead390
da5209d135c2eb133b0e9f95df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:35Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:35 crc kubenswrapper[4910]: I0226 21:57:35.990673 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:35Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:36 crc kubenswrapper[4910]: I0226 21:57:36.010069 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:36Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:36 crc kubenswrapper[4910]: E0226 21:57:36.038850 4910 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 21:57:36 crc kubenswrapper[4910]: I0226 21:57:36.045922 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb5c159d14e4c02da80e08a18a0206ffbea665925ac000fac1026505cf74df1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb5c159d14e4c02da80e08a18a0206ffbea665925ac000fac1026505cf74df1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:57:30Z\\\",\\\"message\\\":\\\") from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 21:57:30.716344 6997 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0226 21:57:30.716353 6997 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0226 21:57:30.716386 6997 
handler.go:208] Removed *v1.Pod event handler 6\\\\nI0226 21:57:30.716557 6997 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 21:57:30.716568 6997 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 21:57:30.716713 6997 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 21:57:30.717226 6997 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 21:57:30.717720 6997 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0226 21:57:30.717742 6997 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0226 21:57:30.717782 6997 handler.go:208] Removed *v1.Node event handler 2\\\\nI0226 21:57:30.717826 6997 factory.go:656] Stopping watch factory\\\\nI0226 21:57:30.717848 6997 ovnkube.go:599] Stopped ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrq4q_openshift-ovn-kubernetes(41cb54c7-260b-42d4-8ae9-cf2a195721be)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0
b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:36Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:36 crc kubenswrapper[4910]: I0226 21:57:36.068727 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:36Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:36 crc kubenswrapper[4910]: I0226 21:57:36.086787 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77ce2f229a2f211483de5951d54a264f42c151c94f4d868107cb052402ba905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22d075543a39
7b11a63e25912605cb14bee4deda66939088572c64d019de782b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:36Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:36 crc kubenswrapper[4910]: I0226 21:57:36.112348 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28
a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:36Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:36 crc kubenswrapper[4910]: I0226 21:57:36.132729 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:36Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:36 crc kubenswrapper[4910]: I0226 21:57:36.150120 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a156c2f1a9999424ad02c589efd48c3a40329c524f8d6a19578b1f367bf0e964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:57:36Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:36 crc kubenswrapper[4910]: I0226 21:57:36.167482 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:36Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:36 crc 
kubenswrapper[4910]: I0226 21:57:36.191114 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:36Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:36 crc kubenswrapper[4910]: I0226 21:57:36.210281 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62690a5e9fe2ce5d23ac823646261163a1c898472ebd1c7f139144ac39ce4e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:36Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:36 crc kubenswrapper[4910]: I0226 21:57:36.238365 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://993862b5f932440fc94110cb4a30c95bef39e1a4f56cef640f2c66a60238e9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1
412f896f94d94b30caf63d639748b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-26T21:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:36Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:36 crc kubenswrapper[4910]: I0226 21:57:36.257302 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:36Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:36 crc kubenswrapper[4910]: I0226 21:57:36.901425 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:36 crc kubenswrapper[4910]: I0226 21:57:36.901480 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:36 crc kubenswrapper[4910]: I0226 21:57:36.901608 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:36 crc kubenswrapper[4910]: E0226 21:57:36.901603 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:36 crc kubenswrapper[4910]: E0226 21:57:36.901753 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:36 crc kubenswrapper[4910]: E0226 21:57:36.901864 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:37 crc kubenswrapper[4910]: I0226 21:57:37.901391 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:37 crc kubenswrapper[4910]: E0226 21:57:37.901862 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:57:37 crc kubenswrapper[4910]: I0226 21:57:37.917483 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 26 21:57:38 crc kubenswrapper[4910]: I0226 21:57:38.900890 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:38 crc kubenswrapper[4910]: I0226 21:57:38.901038 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:38 crc kubenswrapper[4910]: E0226 21:57:38.901125 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:38 crc kubenswrapper[4910]: I0226 21:57:38.901142 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:38 crc kubenswrapper[4910]: E0226 21:57:38.901362 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:38 crc kubenswrapper[4910]: E0226 21:57:38.901518 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:39 crc kubenswrapper[4910]: I0226 21:57:39.900953 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:39 crc kubenswrapper[4910]: E0226 21:57:39.901321 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:57:40 crc kubenswrapper[4910]: I0226 21:57:40.215407 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-metrics-certs\") pod \"network-metrics-daemon-mhdkf\" (UID: \"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\") " pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:40 crc kubenswrapper[4910]: E0226 21:57:40.215639 4910 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 21:57:40 crc kubenswrapper[4910]: E0226 21:57:40.215760 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-metrics-certs podName:9bd0ab20-beab-4d8b-90d0-ef5bd1c10526 nodeName:}" failed. No retries permitted until 2026-02-26 21:58:12.215729346 +0000 UTC m=+177.295219937 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-metrics-certs") pod "network-metrics-daemon-mhdkf" (UID: "9bd0ab20-beab-4d8b-90d0-ef5bd1c10526") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 21:57:40 crc kubenswrapper[4910]: I0226 21:57:40.901430 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:40 crc kubenswrapper[4910]: I0226 21:57:40.901428 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:40 crc kubenswrapper[4910]: I0226 21:57:40.901462 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:40 crc kubenswrapper[4910]: E0226 21:57:40.901859 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:40 crc kubenswrapper[4910]: E0226 21:57:40.901621 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:40 crc kubenswrapper[4910]: E0226 21:57:40.902105 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:41 crc kubenswrapper[4910]: E0226 21:57:41.040079 4910 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 21:57:41 crc kubenswrapper[4910]: I0226 21:57:41.901391 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:41 crc kubenswrapper[4910]: E0226 21:57:41.901655 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:57:41 crc kubenswrapper[4910]: I0226 21:57:41.917928 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 26 21:57:42 crc kubenswrapper[4910]: I0226 21:57:42.900868 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:42 crc kubenswrapper[4910]: I0226 21:57:42.900945 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:42 crc kubenswrapper[4910]: I0226 21:57:42.900998 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:42 crc kubenswrapper[4910]: E0226 21:57:42.901061 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:42 crc kubenswrapper[4910]: E0226 21:57:42.901207 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:42 crc kubenswrapper[4910]: E0226 21:57:42.901304 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:43 crc kubenswrapper[4910]: I0226 21:57:43.900779 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:43 crc kubenswrapper[4910]: E0226 21:57:43.900933 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:57:43 crc kubenswrapper[4910]: I0226 21:57:43.902412 4910 scope.go:117] "RemoveContainer" containerID="eb5c159d14e4c02da80e08a18a0206ffbea665925ac000fac1026505cf74df1b" Feb 26 21:57:44 crc kubenswrapper[4910]: I0226 21:57:44.617680 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrq4q_41cb54c7-260b-42d4-8ae9-cf2a195721be/ovnkube-controller/1.log" Feb 26 21:57:44 crc kubenswrapper[4910]: I0226 21:57:44.620800 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" event={"ID":"41cb54c7-260b-42d4-8ae9-cf2a195721be","Type":"ContainerStarted","Data":"cac2d6aaa5106136a9d33f4186500bbb2e5edeeed8dce782cbe0ea4b1fed69d2"} Feb 26 21:57:44 crc kubenswrapper[4910]: I0226 21:57:44.621332 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:57:44 crc kubenswrapper[4910]: I0226 21:57:44.643670 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:44Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:44 crc kubenswrapper[4910]: I0226 21:57:44.658928 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:44Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:44 crc kubenswrapper[4910]: I0226 21:57:44.673344 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9080180911f7a61dc6aa2c6aecf77ead390
da5209d135c2eb133b0e9f95df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:44Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:44 crc kubenswrapper[4910]: I0226 21:57:44.688822 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:44Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:44 crc kubenswrapper[4910]: I0226 21:57:44.706036 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:44Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:44 crc kubenswrapper[4910]: I0226 21:57:44.723081 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac2d6aaa5106136a9d33f4186500bbb2e5edeeed8dce782cbe0ea4b1fed69d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb5c159d14e4c02da80e08a18a0206ffbea665925ac000fac1026505cf74df1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:57:30Z\\\",\\\"message\\\":\\\") from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 21:57:30.716344 6997 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0226 21:57:30.716353 6997 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0226 21:57:30.716386 6997 
handler.go:208] Removed *v1.Pod event handler 6\\\\nI0226 21:57:30.716557 6997 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 21:57:30.716568 6997 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 21:57:30.716713 6997 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 21:57:30.717226 6997 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 21:57:30.717720 6997 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0226 21:57:30.717742 6997 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0226 21:57:30.717782 6997 handler.go:208] Removed *v1.Node event handler 2\\\\nI0226 21:57:30.717826 6997 factory.go:656] Stopping watch factory\\\\nI0226 21:57:30.717848 6997 ovnkube.go:599] Stopped 
ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\
",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:44Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:44 crc kubenswrapper[4910]: I0226 21:57:44.736302 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77ce2f229a2f211483de5951d54a264f42c151c94f4d868107cb052402ba905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22d075543a397b11a63e25912605cb14bee4deda
66939088572c64d019de782b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:44Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:44 crc kubenswrapper[4910]: I0226 21:57:44.751841 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82f712df-7f8f-4304-a47a-9ffdfa591bb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98e73558dfd4fa356a62499de610e53d1b16c3ab9d402622b419e79b56d17f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://947d23e15eb07388a7f15fba6b894f5fd0f55e31ff3f4120cb161e2ff8bff246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d23e15eb07388a7f15fba6b894f5fd0f55e31ff3f4120cb161e2ff8bff246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:44Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:44 crc kubenswrapper[4910]: I0226 21:57:44.765039 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a13781-4e1b-4adb-9cc1-13429c1271d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e48c9b7f7bf5d94ac47531e1fe19bb941e3fe6f8021659885fc524fef9df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a40ccdd9349631fd981b22379e818c212d9c104da690ac6546fd45b33b1f5ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:19Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0226 21:55:48.966417 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 21:55:48.967781 1 observer_polling.go:159] Starting file observer\\\\nI0226 21:55:48.969194 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 21:55:48.970106 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 21:56:16.298126 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nF0226 21:56:19.287453 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:48Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72b52b910bf5e9a00497ed002d962476646d86358d4316303a2442593e14b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e83b5f52098333208ceae9cb71f1600f0b28e2567f791320e835a5611d83ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f404d374a3a5c9d1fca5b21888d2af718a36d5c02ba8bf2590209a401e879840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:44Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:44 crc kubenswrapper[4910]: I0226 21:57:44.781085 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28
a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:44Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:44 crc kubenswrapper[4910]: I0226 21:57:44.797682 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:44Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:44 crc kubenswrapper[4910]: I0226 21:57:44.810480 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a156c2f1a9999424ad02c589efd48c3a40329c524f8d6a19578b1f367bf0e964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:57:44Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:44 crc kubenswrapper[4910]: I0226 21:57:44.830984 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:57:44Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:44 crc kubenswrapper[4910]: I0226 21:57:44.849878 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:44Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:44 crc kubenswrapper[4910]: I0226 21:57:44.863090 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62690a5e9fe2ce5d23ac823646261163a1c898472ebd1c7f139144ac39ce4e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:44Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:44 crc kubenswrapper[4910]: I0226 21:57:44.879268 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://993862b5f932440fc94110cb4a30c95bef39e1a4f56cef640f2c66a60238e9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1
412f896f94d94b30caf63d639748b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-26T21:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:44Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:44 crc kubenswrapper[4910]: I0226 21:57:44.892019 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:44Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:44 crc kubenswrapper[4910]: I0226 21:57:44.900420 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:44 crc kubenswrapper[4910]: I0226 21:57:44.900456 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:44 crc kubenswrapper[4910]: I0226 21:57:44.900493 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:44 crc kubenswrapper[4910]: E0226 21:57:44.900547 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:44 crc kubenswrapper[4910]: E0226 21:57:44.900719 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:44 crc kubenswrapper[4910]: E0226 21:57:44.900765 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:44 crc kubenswrapper[4910]: I0226 21:57:44.907654 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:44Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:45 crc 
kubenswrapper[4910]: I0226 21:57:45.047198 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.047260 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.047283 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.047315 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.047337 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:45Z","lastTransitionTime":"2026-02-26T21:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:45 crc kubenswrapper[4910]: E0226 21:57:45.067704 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:45Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.072803 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.072857 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.072875 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.072900 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.072921 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:45Z","lastTransitionTime":"2026-02-26T21:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:45 crc kubenswrapper[4910]: E0226 21:57:45.094112 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:45Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.098836 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.098897 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.098920 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.098944 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.098962 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:45Z","lastTransitionTime":"2026-02-26T21:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:45 crc kubenswrapper[4910]: E0226 21:57:45.118933 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:45Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.124297 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.124363 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.124382 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.124410 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.124431 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:45Z","lastTransitionTime":"2026-02-26T21:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:45 crc kubenswrapper[4910]: E0226 21:57:45.143883 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:45Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.148688 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.148741 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.148760 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.148788 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.148811 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:45Z","lastTransitionTime":"2026-02-26T21:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:45 crc kubenswrapper[4910]: E0226 21:57:45.169325 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:45Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:45 crc kubenswrapper[4910]: E0226 21:57:45.169566 4910 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.627014 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrq4q_41cb54c7-260b-42d4-8ae9-cf2a195721be/ovnkube-controller/2.log" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.627729 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrq4q_41cb54c7-260b-42d4-8ae9-cf2a195721be/ovnkube-controller/1.log" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.630833 4910 generic.go:334] "Generic (PLEG): container finished" podID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerID="cac2d6aaa5106136a9d33f4186500bbb2e5edeeed8dce782cbe0ea4b1fed69d2" exitCode=1 Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.630874 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" event={"ID":"41cb54c7-260b-42d4-8ae9-cf2a195721be","Type":"ContainerDied","Data":"cac2d6aaa5106136a9d33f4186500bbb2e5edeeed8dce782cbe0ea4b1fed69d2"} Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.630911 4910 scope.go:117] "RemoveContainer" containerID="eb5c159d14e4c02da80e08a18a0206ffbea665925ac000fac1026505cf74df1b" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.631643 4910 scope.go:117] "RemoveContainer" containerID="cac2d6aaa5106136a9d33f4186500bbb2e5edeeed8dce782cbe0ea4b1fed69d2" Feb 26 21:57:45 crc kubenswrapper[4910]: E0226 21:57:45.631849 4910 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xrq4q_openshift-ovn-kubernetes(41cb54c7-260b-42d4-8ae9-cf2a195721be)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.648316 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSt
atuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:45Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.666966 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:45Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.701406 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac2d6aaa5106136a9d33f4186500bbb2e5edeeed8dce782cbe0ea4b1fed69d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb5c159d14e4c02da80e08a18a0206ffbea665925ac000fac1026505cf74df1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:57:30Z\\\",\\\"message\\\":\\\") from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 21:57:30.716344 6997 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0226 21:57:30.716353 6997 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0226 21:57:30.716386 6997 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0226 21:57:30.716557 6997 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 21:57:30.716568 6997 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 21:57:30.716713 6997 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 21:57:30.717226 6997 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 21:57:30.717720 6997 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0226 21:57:30.717742 6997 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0226 21:57:30.717782 6997 handler.go:208] Removed *v1.Node event handler 2\\\\nI0226 21:57:30.717826 6997 factory.go:656] Stopping watch factory\\\\nI0226 21:57:30.717848 6997 ovnkube.go:599] Stopped ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cac2d6aaa5106136a9d33f4186500bbb2e5edeeed8dce782cbe0ea4b1fed69d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:57:44Z\\\",\\\"message\\\":\\\"{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0226 21:57:44.823077 7169 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0226 21:57:44.823090 7169 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0226 21:57:44.823104 7169 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0226 21:57:44.823150 7169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:45Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.720799 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a156c2f1a9999424ad02c589efd48c3a40329c524f8d6a19578b1f367bf0e964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:57:45Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.740918 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:57:45Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.758470 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77ce2f229a2f211483de5951d54a264f42c151c94f4d868107cb052402ba905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22d075543a397b11a63e25912605cb14bee4deda66939088572c64d019de782b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:45Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.774901 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82f712df-7f8f-4304-a47a-9ffdfa591bb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98e73558dfd4fa356a62499de610e53d1b16c3ab9d402622b419e79b56d17f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://947d23e15eb07388a7f15fba6b894f5fd0f55e31ff3f4120cb161e2ff8bff246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d23e15eb07388a7f15fba6b894f5fd0f55e31ff3f4120cb161e2ff8bff246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:45Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.795684 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a13781-4e1b-4adb-9cc1-13429c1271d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e48c9b7f7bf5d94ac47531e1fe19bb941e3fe6f8021659885fc524fef9df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a40ccdd9349631fd981b22379e818c212d9c104da690ac6546fd45b33b1f5ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:19Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0226 21:55:48.966417 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 21:55:48.967781 1 observer_polling.go:159] Starting file observer\\\\nI0226 21:55:48.969194 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 21:55:48.970106 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 21:56:16.298126 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nF0226 21:56:19.287453 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:48Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72b52b910bf5e9a00497ed002d962476646d86358d4316303a2442593e14b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e83b5f52098333208ceae9cb71f1600f0b28e2567f791320e835a5611d83ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f404d374a3a5c9d1fca5b21888d2af718a36d5c02ba8bf2590209a401e879840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:45Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.819488 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28
a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:45Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.840681 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:45Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.856761 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:45Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.871810 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:45Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:45 crc 
kubenswrapper[4910]: I0226 21:57:45.892703 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:45Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.901269 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:45 crc kubenswrapper[4910]: E0226 21:57:45.901464 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.910783 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62690a5e9fe2ce5d23ac823646261163a1c898472ebd1c7f139144ac39ce4e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:45Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.934730 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://993862b5f932440fc94110cb4a30c95bef39
e1a4f56cef640f2c66a60238e9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f24
9e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:45Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.953197 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:45Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.971830 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:45Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:45 crc kubenswrapper[4910]: I0226 21:57:45.989830 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9080180911f7a61dc6aa2c6aecf77ead390
da5209d135c2eb133b0e9f95df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:45Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.009327 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.024936 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62690a5e9fe2ce5d23ac823646261163a1c898472ebd1c7f139144ac39ce4e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: E0226 21:57:46.042499 4910 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.049001 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://993862b5f93
2440fc94110cb4a30c95bef39e1a4f56cef640f2c66a60238e9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd
367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.067858 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.081546 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc 
kubenswrapper[4910]: I0226 21:57:46.096152 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.144562 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.158993 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9080180911f7a61dc6aa2c6aecf77ead390
da5209d135c2eb133b0e9f95df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.174842 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.188654 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.211989 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac2d6aaa5106136a9d33f4186500bbb2e5edeeed8dce782cbe0ea4b1fed69d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb5c159d14e4c02da80e08a18a0206ffbea665925ac000fac1026505cf74df1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:57:30Z\\\",\\\"message\\\":\\\") from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 21:57:30.716344 6997 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0226 21:57:30.716353 6997 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0226 21:57:30.716386 6997 
handler.go:208] Removed *v1.Pod event handler 6\\\\nI0226 21:57:30.716557 6997 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 21:57:30.716568 6997 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 21:57:30.716713 6997 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 21:57:30.717226 6997 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 21:57:30.717720 6997 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0226 21:57:30.717742 6997 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0226 21:57:30.717782 6997 handler.go:208] Removed *v1.Node event handler 2\\\\nI0226 21:57:30.717826 6997 factory.go:656] Stopping watch factory\\\\nI0226 21:57:30.717848 6997 ovnkube.go:599] Stopped ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cac2d6aaa5106136a9d33f4186500bbb2e5edeeed8dce782cbe0ea4b1fed69d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:57:44Z\\\",\\\"message\\\":\\\"{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0226 21:57:44.823077 7169 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 21:57:44.823090 7169 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0226 21:57:44.823104 7169 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} 
options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0226 21:57:44.823150 7169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\
"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.225759 
4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82f712df-7f8f-4304-a47a-9ffdfa591bb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98e73558dfd4fa356a62499de610e53d1b16c3ab9d402622b419e79b56d17f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://947d23e15eb07388a7f15fba6b894
f5fd0f55e31ff3f4120cb161e2ff8bff246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d23e15eb07388a7f15fba6b894f5fd0f55e31ff3f4120cb161e2ff8bff246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.239368 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a13781-4e1b-4adb-9cc1-13429c1271d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e48c9b7f7bf5d94ac47531e1fe19bb941e3fe6f8021659885fc524fef9df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a40ccdd9349631fd981b22379e818c212d9c104da690ac6546fd45b33b1f5ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:19Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0226 21:55:48.966417 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 21:55:48.967781 1 observer_polling.go:159] Starting file observer\\\\nI0226 21:55:48.969194 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 21:55:48.970106 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 21:56:16.298126 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nF0226 21:56:19.287453 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:48Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72b52b910bf5e9a00497ed002d962476646d86358d4316303a2442593e14b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e83b5f52098333208ceae9cb71f1600f0b28e2567f791320e835a5611d83ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f404d374a3a5c9d1fca5b21888d2af718a36d5c02ba8bf2590209a401e879840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.256134 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28
a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.269185 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.281055 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a156c2f1a9999424ad02c589efd48c3a40329c524f8d6a19578b1f367bf0e964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.301402 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.317033 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77ce2f229a2f211483de5951d54a264f42c151c94f4d868107cb052402ba905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22d075543a397b11a63e25912605cb14bee4deda66939088572c64d019de782b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.636951 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrq4q_41cb54c7-260b-42d4-8ae9-cf2a195721be/ovnkube-controller/2.log" 
Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.642274 4910 scope.go:117] "RemoveContainer" containerID="cac2d6aaa5106136a9d33f4186500bbb2e5edeeed8dce782cbe0ea4b1fed69d2" Feb 26 21:57:46 crc kubenswrapper[4910]: E0226 21:57:46.642450 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xrq4q_openshift-ovn-kubernetes(41cb54c7-260b-42d4-8ae9-cf2a195721be)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.657884 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d664
38c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9080180911f7a61dc6aa2c6aecf77ead390da5209d135c2eb133b0e9f95df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\"
 for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.677815 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.697660 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.733838 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac2d6aaa5106136a9d33f4186500bbb2e5edeeed8dce782cbe0ea4b1fed69d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cac2d6aaa5106136a9d33f4186500bbb2e5edeeed8dce782cbe0ea4b1fed69d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:57:44Z\\\",\\\"message\\\":\\\"{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0226 21:57:44.823077 7169 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 21:57:44.823090 7169 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0226 
21:57:44.823104 7169 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0226 21:57:44.823150 7169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrq4q_openshift-ovn-kubernetes(41cb54c7-260b-42d4-8ae9-cf2a195721be)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0
b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.751381 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.768605 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.788104 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28
a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.806360 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.824573 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a156c2f1a9999424ad02c589efd48c3a40329c524f8d6a19578b1f367bf0e964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.839659 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.852691 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77ce2f229a2f211483de5951d54a264f42c151c94f4d868107cb052402ba905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22d075543a397b11a63e25912605cb14bee4deda66939088572c64d019de782b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.863583 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82f712df-7f8f-4304-a47a-9ffdfa591bb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98e73558dfd4fa356a62499de610e53d1b16c3ab9d402622b419e79b56d17f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://947d23e15eb07388a7f15fba6b894f5fd0f55e31ff3f4120cb161e2ff8bff246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d23e15eb07388a7f15fba6b894f5fd0f55e31ff3f4120cb161e2ff8bff246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.875795 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a13781-4e1b-4adb-9cc1-13429c1271d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e48c9b7f7bf5d94ac47531e1fe19bb941e3fe6f8021659885fc524fef9df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a40ccdd9349631fd981b22379e818c212d9c104da690ac6546fd45b33b1f5ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:19Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0226 21:55:48.966417 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 21:55:48.967781 1 observer_polling.go:159] Starting file observer\\\\nI0226 21:55:48.969194 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 21:55:48.970106 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 21:56:16.298126 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nF0226 21:56:19.287453 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:48Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72b52b910bf5e9a00497ed002d962476646d86358d4316303a2442593e14b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e83b5f52098333208ceae9cb71f1600f0b28e2567f791320e835a5611d83ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f404d374a3a5c9d1fca5b21888d2af718a36d5c02ba8bf2590209a401e879840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.886199 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62690a5e9fe2ce5d23ac823646261163a1c898472ebd1c7f139144ac39ce4e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.901371 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.901351 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:46 crc kubenswrapper[4910]: E0226 21:57:46.901513 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.901394 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:46 crc kubenswrapper[4910]: E0226 21:57:46.901607 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:46 crc kubenswrapper[4910]: E0226 21:57:46.901651 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.906993 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://993862b5f932440fc94110cb4a30c95bef39e1a4f56cef640f2c66a60238e9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a9
5b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"ter
minated\\\":{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaea
e98d6dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.919619 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc kubenswrapper[4910]: I0226 21:57:46.933814 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:46 crc 
kubenswrapper[4910]: I0226 21:57:46.949745 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:46Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:47 crc kubenswrapper[4910]: I0226 21:57:47.901265 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:47 crc kubenswrapper[4910]: E0226 21:57:47.901480 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:57:48 crc kubenswrapper[4910]: I0226 21:57:48.901114 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:48 crc kubenswrapper[4910]: I0226 21:57:48.901114 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:48 crc kubenswrapper[4910]: E0226 21:57:48.901923 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:48 crc kubenswrapper[4910]: E0226 21:57:48.901818 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:48 crc kubenswrapper[4910]: I0226 21:57:48.901153 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:48 crc kubenswrapper[4910]: E0226 21:57:48.902011 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:49 crc kubenswrapper[4910]: I0226 21:57:49.901272 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:49 crc kubenswrapper[4910]: E0226 21:57:49.901444 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:57:50 crc kubenswrapper[4910]: I0226 21:57:50.900707 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:50 crc kubenswrapper[4910]: I0226 21:57:50.900812 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:50 crc kubenswrapper[4910]: E0226 21:57:50.900866 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:50 crc kubenswrapper[4910]: E0226 21:57:50.901026 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:50 crc kubenswrapper[4910]: I0226 21:57:50.901106 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:50 crc kubenswrapper[4910]: E0226 21:57:50.901221 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:51 crc kubenswrapper[4910]: E0226 21:57:51.044671 4910 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 21:57:51 crc kubenswrapper[4910]: I0226 21:57:51.901293 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:51 crc kubenswrapper[4910]: E0226 21:57:51.901496 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:57:52 crc kubenswrapper[4910]: I0226 21:57:52.901267 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:52 crc kubenswrapper[4910]: I0226 21:57:52.901310 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:52 crc kubenswrapper[4910]: I0226 21:57:52.901267 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:52 crc kubenswrapper[4910]: E0226 21:57:52.901454 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:52 crc kubenswrapper[4910]: E0226 21:57:52.901575 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:52 crc kubenswrapper[4910]: E0226 21:57:52.901740 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:53 crc kubenswrapper[4910]: I0226 21:57:53.901566 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:53 crc kubenswrapper[4910]: E0226 21:57:53.903939 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:57:54 crc kubenswrapper[4910]: I0226 21:57:54.809334 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:57:54 crc kubenswrapper[4910]: I0226 21:57:54.809553 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:54 crc kubenswrapper[4910]: E0226 21:57:54.809572 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-26 21:58:58.809542573 +0000 UTC m=+223.889033144 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:57:54 crc kubenswrapper[4910]: I0226 21:57:54.809607 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:54 crc kubenswrapper[4910]: E0226 21:57:54.809734 4910 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 21:57:54 crc kubenswrapper[4910]: E0226 21:57:54.809785 4910 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 21:57:54 crc kubenswrapper[4910]: E0226 21:57:54.809807 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 21:58:58.80978995 +0000 UTC m=+223.889280521 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 21:57:54 crc kubenswrapper[4910]: E0226 21:57:54.809924 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 21:58:58.809888103 +0000 UTC m=+223.889378674 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 21:57:54 crc kubenswrapper[4910]: I0226 21:57:54.900982 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:54 crc kubenswrapper[4910]: I0226 21:57:54.901066 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:54 crc kubenswrapper[4910]: I0226 21:57:54.901080 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:54 crc kubenswrapper[4910]: E0226 21:57:54.901207 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:54 crc kubenswrapper[4910]: E0226 21:57:54.901329 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:54 crc kubenswrapper[4910]: E0226 21:57:54.901470 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:54 crc kubenswrapper[4910]: I0226 21:57:54.911079 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:54 crc kubenswrapper[4910]: I0226 21:57:54.911261 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:54 crc kubenswrapper[4910]: E0226 21:57:54.911403 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 21:57:54 crc kubenswrapper[4910]: E0226 21:57:54.911440 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 21:57:54 crc kubenswrapper[4910]: E0226 21:57:54.911463 4910 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:57:54 crc kubenswrapper[4910]: E0226 21:57:54.911543 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 21:57:54 crc kubenswrapper[4910]: E0226 21:57:54.911618 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 21:57:54 crc kubenswrapper[4910]: E0226 21:57:54.911645 4910 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:57:54 crc kubenswrapper[4910]: E0226 21:57:54.911552 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 21:58:58.911522513 +0000 UTC m=+223.991013104 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:57:54 crc kubenswrapper[4910]: E0226 21:57:54.911775 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 21:58:58.91173561 +0000 UTC m=+223.991226211 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.518410 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.518458 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.518469 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.518487 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.518500 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:55Z","lastTransitionTime":"2026-02-26T21:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:55 crc kubenswrapper[4910]: E0226 21:57:55.537860 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:55Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.542936 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.543005 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.543021 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.543043 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.543060 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:55Z","lastTransitionTime":"2026-02-26T21:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:55 crc kubenswrapper[4910]: E0226 21:57:55.562266 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:55Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.566847 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.566892 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.566903 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.566920 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.566931 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:55Z","lastTransitionTime":"2026-02-26T21:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:55 crc kubenswrapper[4910]: E0226 21:57:55.585629 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:55Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.590474 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.590547 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.590565 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.590589 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.590606 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:55Z","lastTransitionTime":"2026-02-26T21:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:55 crc kubenswrapper[4910]: E0226 21:57:55.611892 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:55Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.617077 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.617115 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.617126 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.617143 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.617155 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:57:55Z","lastTransitionTime":"2026-02-26T21:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:57:55 crc kubenswrapper[4910]: E0226 21:57:55.635216 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:55Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:55 crc kubenswrapper[4910]: E0226 21:57:55.635497 4910 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.901331 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:55 crc kubenswrapper[4910]: E0226 21:57:55.901646 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.921039 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead8
7ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:55Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.934440 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a156c2f1a9999424ad02c589efd48c3a40329c524f8d6a19578b1f367bf0e964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:57:55Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.948286 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:57:55Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.962072 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77ce2f229a2f211483de5951d54a264f42c151c94f4d868107cb052402ba905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22d075543a397b11a63e25912605cb14bee4deda66939088572c64d019de782b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:55Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.973424 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82f712df-7f8f-4304-a47a-9ffdfa591bb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98e73558dfd4fa356a62499de610e53d1b16c3ab9d402622b419e79b56d17f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://947d23e15eb07388a7f15fba6b894f5fd0f55e31ff3f4120cb161e2ff8bff246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d23e15eb07388a7f15fba6b894f5fd0f55e31ff3f4120cb161e2ff8bff246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:55Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:55 crc kubenswrapper[4910]: I0226 21:57:55.989770 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a13781-4e1b-4adb-9cc1-13429c1271d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e48c9b7f7bf5d94ac47531e1fe19bb941e3fe6f8021659885fc524fef9df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a40ccdd9349631fd981b22379e818c212d9c104da690ac6546fd45b33b1f5ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:19Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0226 21:55:48.966417 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 21:55:48.967781 1 observer_polling.go:159] Starting file observer\\\\nI0226 21:55:48.969194 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 21:55:48.970106 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 21:56:16.298126 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nF0226 21:56:19.287453 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:48Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72b52b910bf5e9a00497ed002d962476646d86358d4316303a2442593e14b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e83b5f52098333208ceae9cb71f1600f0b28e2567f791320e835a5611d83ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f404d374a3a5c9d1fca5b21888d2af718a36d5c02ba8bf2590209a401e879840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:55Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:56 crc kubenswrapper[4910]: I0226 21:57:56.005570 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28
a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:56Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:56 crc kubenswrapper[4910]: I0226 21:57:56.022475 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://993862b5f932440fc94110cb4a30c95bef39e1a4f56cef640f2c66a60238e9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e8b
daeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:56Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:56 crc kubenswrapper[4910]: I0226 21:57:56.033850 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:56Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:56 crc kubenswrapper[4910]: E0226 21:57:56.046228 4910 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 21:57:56 crc kubenswrapper[4910]: I0226 21:57:56.048273 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:56Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:56 crc 
kubenswrapper[4910]: I0226 21:57:56.065671 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:56Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:56 crc kubenswrapper[4910]: I0226 21:57:56.074895 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62690a5e9fe2ce5d23ac823646261163a1c898472ebd1c7f139144ac39ce4e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:56Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:56 crc kubenswrapper[4910]: I0226 21:57:56.086730 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:56Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:56 crc kubenswrapper[4910]: I0226 21:57:56.104053 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:56Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:56 crc kubenswrapper[4910]: I0226 21:57:56.113721 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9080180911f7a61dc6aa2c6aecf77ead390
da5209d135c2eb133b0e9f95df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:56Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:56 crc kubenswrapper[4910]: I0226 21:57:56.123723 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:56Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:56 crc kubenswrapper[4910]: I0226 21:57:56.138940 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:56Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:56 crc kubenswrapper[4910]: I0226 21:57:56.182960 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac2d6aaa5106136a9d33f4186500bbb2e5edeeed8dce782cbe0ea4b1fed69d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cac2d6aaa5106136a9d33f4186500bbb2e5edeeed8dce782cbe0ea4b1fed69d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:57:44Z\\\",\\\"message\\\":\\\"{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0226 21:57:44.823077 7169 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 21:57:44.823090 7169 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0226 
21:57:44.823104 7169 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0226 21:57:44.823150 7169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrq4q_openshift-ovn-kubernetes(41cb54c7-260b-42d4-8ae9-cf2a195721be)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0
b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:57:56Z is after 2025-08-24T17:21:41Z" Feb 26 21:57:56 crc kubenswrapper[4910]: I0226 21:57:56.900541 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:56 crc kubenswrapper[4910]: I0226 21:57:56.900541 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:56 crc kubenswrapper[4910]: I0226 21:57:56.900752 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:56 crc kubenswrapper[4910]: E0226 21:57:56.900843 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:56 crc kubenswrapper[4910]: E0226 21:57:56.901010 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:56 crc kubenswrapper[4910]: E0226 21:57:56.901092 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:57 crc kubenswrapper[4910]: I0226 21:57:57.901231 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:57 crc kubenswrapper[4910]: E0226 21:57:57.901477 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:57:57 crc kubenswrapper[4910]: I0226 21:57:57.903693 4910 scope.go:117] "RemoveContainer" containerID="cac2d6aaa5106136a9d33f4186500bbb2e5edeeed8dce782cbe0ea4b1fed69d2" Feb 26 21:57:57 crc kubenswrapper[4910]: E0226 21:57:57.904093 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xrq4q_openshift-ovn-kubernetes(41cb54c7-260b-42d4-8ae9-cf2a195721be)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" Feb 26 21:57:58 crc kubenswrapper[4910]: I0226 21:57:58.900776 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:57:58 crc kubenswrapper[4910]: I0226 21:57:58.900863 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:57:58 crc kubenswrapper[4910]: E0226 21:57:58.900929 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:57:58 crc kubenswrapper[4910]: I0226 21:57:58.900968 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:57:58 crc kubenswrapper[4910]: E0226 21:57:58.901208 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:57:58 crc kubenswrapper[4910]: E0226 21:57:58.901256 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:57:59 crc kubenswrapper[4910]: I0226 21:57:59.901441 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:57:59 crc kubenswrapper[4910]: E0226 21:57:59.901657 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:00 crc kubenswrapper[4910]: I0226 21:58:00.901250 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:00 crc kubenswrapper[4910]: I0226 21:58:00.901282 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:00 crc kubenswrapper[4910]: I0226 21:58:00.901393 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:00 crc kubenswrapper[4910]: E0226 21:58:00.901570 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:00 crc kubenswrapper[4910]: E0226 21:58:00.901725 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:00 crc kubenswrapper[4910]: E0226 21:58:00.901968 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:01 crc kubenswrapper[4910]: E0226 21:58:01.047609 4910 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 21:58:01 crc kubenswrapper[4910]: I0226 21:58:01.901589 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:01 crc kubenswrapper[4910]: E0226 21:58:01.901980 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:02 crc kubenswrapper[4910]: I0226 21:58:02.901509 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:02 crc kubenswrapper[4910]: I0226 21:58:02.901562 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:02 crc kubenswrapper[4910]: I0226 21:58:02.901590 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:02 crc kubenswrapper[4910]: E0226 21:58:02.901740 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:02 crc kubenswrapper[4910]: E0226 21:58:02.901869 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:02 crc kubenswrapper[4910]: E0226 21:58:02.901999 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:03 crc kubenswrapper[4910]: I0226 21:58:03.901572 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:03 crc kubenswrapper[4910]: E0226 21:58:03.901823 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:04 crc kubenswrapper[4910]: I0226 21:58:04.901315 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:04 crc kubenswrapper[4910]: I0226 21:58:04.901355 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:04 crc kubenswrapper[4910]: I0226 21:58:04.901374 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:04 crc kubenswrapper[4910]: E0226 21:58:04.901489 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:04 crc kubenswrapper[4910]: E0226 21:58:04.901706 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:04 crc kubenswrapper[4910]: E0226 21:58:04.902045 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:04 crc kubenswrapper[4910]: I0226 21:58:04.917848 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 26 21:58:05 crc kubenswrapper[4910]: I0226 21:58:05.900607 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:05 crc kubenswrapper[4910]: E0226 21:58:05.900873 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:05 crc kubenswrapper[4910]: I0226 21:58:05.918718 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:05Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:05 crc 
kubenswrapper[4910]: I0226 21:58:05.951216 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7fc9a75-85ad-446d-a4c6-43f7ef0bf304\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4babf71eea3ac8c428ccc06dd30d6050c38c2ca1db1369bea420ee6f22a1c8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://d3133f944884e882b5f9ef27a231c66d5dc875ce598f6f873800068d8d91d1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29d5c8e707107c8468c8c93dad9ab2ac1942031a7d44ca608d617ad624b776d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0c9fe8f846e6307700a6e78bb8af0ce159b62ff979b434b4520792296601f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9363ec915adb6ccea5cef83bea6f316ef62406876e85e4bd8f9169f713e9dedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea207ddeb9a6daf519e2619d3ad80f296ae17918bff4c72951f73721e967ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea207ddeb9a6daf519e2619d3ad80f296ae17918bff4c72951f73721e967ce0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2e05875f42fb6588e0c11f3b26b5af8c76fb2fac4c7a7349cdea6674b741fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2e05875f42fb6588e0c11f3b26b5af8c76fb2fac4c7a7349cdea6674b741fb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4c33b689254614e509f24729309ddd9daf0cd25dde03554323435ca8b2f46696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c33b689254614e509f24729309ddd9daf0cd25dde03554323435ca8b2f46696\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:05Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:05 crc kubenswrapper[4910]: I0226 21:58:05.965106 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:05Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:05 crc kubenswrapper[4910]: I0226 21:58:05.977033 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:58:05 crc kubenswrapper[4910]: I0226 21:58:05.977079 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:58:05 crc kubenswrapper[4910]: I0226 21:58:05.977097 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:58:05 crc 
kubenswrapper[4910]: I0226 21:58:05.977120 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:58:05 crc kubenswrapper[4910]: I0226 21:58:05.977137 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:58:05Z","lastTransitionTime":"2026-02-26T21:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:58:05 crc kubenswrapper[4910]: I0226 21:58:05.982427 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62690a5e9fe2ce5d23ac823646261163a1c898472ebd1c7f139144ac39ce4e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:05Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:05 crc kubenswrapper[4910]: E0226 21:58:05.998718 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:05Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.004365 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.004435 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.004463 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.004513 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.004541 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:58:06Z","lastTransitionTime":"2026-02-26T21:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.006905 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://993862b5f932440fc94110cb4a30c95bef39e1a4f56cef640f2c66a60238e9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:06Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.021818 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:06Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:06 crc kubenswrapper[4910]: E0226 21:58:06.027014 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:06Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.031904 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.031947 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.031965 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.031987 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.032005 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:58:06Z","lastTransitionTime":"2026-02-26T21:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.040761 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:06Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:06 crc kubenswrapper[4910]: E0226 21:58:06.048612 4910 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 21:58:06 crc kubenswrapper[4910]: E0226 21:58:06.056004 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:06Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.062268 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.062326 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.062344 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.062367 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.062384 4910 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:58:06Z","lastTransitionTime":"2026-02-26T21:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.064843 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:06Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.079902 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9080180911f7a61dc6aa2c6aecf77ead390
da5209d135c2eb133b0e9f95df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:06Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:06 crc kubenswrapper[4910]: E0226 21:58:06.083177 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:06Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.087021 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.087062 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.087073 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.087092 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.087104 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:58:06Z","lastTransitionTime":"2026-02-26T21:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.096619 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4b
fef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:06Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:06 crc kubenswrapper[4910]: E0226 21:58:06.101936 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:06Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:06 crc kubenswrapper[4910]: E0226 21:58:06.102085 4910 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.112242 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:06Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.143695 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac2d6aaa5106136a9d33f4186500bbb2e5edeeed8dce782cbe0ea4b1fed69d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cac2d6aaa5106136a9d33f4186500bbb2e5edeeed8dce782cbe0ea4b1fed69d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:57:44Z\\\",\\\"message\\\":\\\"{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0226 21:57:44.823077 7169 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 21:57:44.823090 7169 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0226 21:57:44.823104 7169 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0226 21:57:44.823150 7169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrq4q_openshift-ovn-kubernetes(41cb54c7-260b-42d4-8ae9-cf2a195721be)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0
b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:06Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.166302 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:06Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.183148 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77ce2f229a2f211483de5951d54a264f42c151c94f4d868107cb052402ba905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22d075543a39
7b11a63e25912605cb14bee4deda66939088572c64d019de782b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:06Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.198841 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82f712df-7f8f-4304-a47a-9ffdfa591bb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98e73558dfd4fa356a62499de610e53d1b16c3ab9d402622b419e79b56d17f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://947d23e15eb07388a7f15fba6b894f5fd0f55e31ff3f4120cb161e2ff8bff246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d23e15eb07388a7f15fba6b894f5fd0f55e31ff3f4120cb161e2ff8bff246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:06Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.216761 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a13781-4e1b-4adb-9cc1-13429c1271d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e48c9b7f7bf5d94ac47531e1fe19bb941e3fe6f8021659885fc524fef9df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a40ccdd9349631fd981b22379e818c212d9c104da690ac6546fd45b33b1f5ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:19Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0226 21:55:48.966417 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 21:55:48.967781 1 observer_polling.go:159] Starting file observer\\\\nI0226 21:55:48.969194 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 21:55:48.970106 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 21:56:16.298126 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nF0226 21:56:19.287453 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:48Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72b52b910bf5e9a00497ed002d962476646d86358d4316303a2442593e14b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e83b5f52098333208ceae9cb71f1600f0b28e2567f791320e835a5611d83ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f404d374a3a5c9d1fca5b21888d2af718a36d5c02ba8bf2590209a401e879840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:06Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.239031 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28
a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:06Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.252796 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:06Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.266809 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a156c2f1a9999424ad02c589efd48c3a40329c524f8d6a19578b1f367bf0e964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:58:06Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.901568 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.901658 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:06 crc kubenswrapper[4910]: I0226 21:58:06.901568 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:06 crc kubenswrapper[4910]: E0226 21:58:06.901810 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:06 crc kubenswrapper[4910]: E0226 21:58:06.901934 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:06 crc kubenswrapper[4910]: E0226 21:58:06.902087 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:07 crc kubenswrapper[4910]: I0226 21:58:07.901273 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:07 crc kubenswrapper[4910]: E0226 21:58:07.901479 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:08 crc kubenswrapper[4910]: I0226 21:58:08.901306 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:08 crc kubenswrapper[4910]: I0226 21:58:08.901384 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:08 crc kubenswrapper[4910]: I0226 21:58:08.901377 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:08 crc kubenswrapper[4910]: E0226 21:58:08.901496 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:08 crc kubenswrapper[4910]: E0226 21:58:08.902129 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:08 crc kubenswrapper[4910]: E0226 21:58:08.902253 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:08 crc kubenswrapper[4910]: I0226 21:58:08.902918 4910 scope.go:117] "RemoveContainer" containerID="cac2d6aaa5106136a9d33f4186500bbb2e5edeeed8dce782cbe0ea4b1fed69d2" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.184422 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrq4q_41cb54c7-260b-42d4-8ae9-cf2a195721be/ovnkube-controller/2.log" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.188218 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" event={"ID":"41cb54c7-260b-42d4-8ae9-cf2a195721be","Type":"ContainerStarted","Data":"c102af0022666f948e5923ebd19de21279aaf7635387dd3036f2f7cde045de43"} Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.189275 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.191587 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-795gt_d78660ec-f27f-43be-add6-8fab38329537/kube-multus/0.log" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.191712 4910 generic.go:334] "Generic (PLEG): container finished" podID="d78660ec-f27f-43be-add6-8fab38329537" containerID="3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5" exitCode=1 Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.191823 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-795gt" event={"ID":"d78660ec-f27f-43be-add6-8fab38329537","Type":"ContainerDied","Data":"3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5"} Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.192309 4910 scope.go:117] "RemoveContainer" containerID="3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5" Feb 26 
21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.215358 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7fc9a75-85ad-446d-a4c6-43f7ef0bf304\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4babf71eea3ac8c428ccc06dd30d6050c38c2ca1db1369bea420ee6f22a1c8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"
data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3133f944884e882b5f9ef27a231c66d5dc875ce598f6f873800068d8d91d1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29d5c8e707107c8468c8c93dad9ab2ac1942031a7d44ca608d617ad624b776d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0c9fe8f846e6307700a6e78bb8af0ce159b62ff979b434b4520792296601f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26
702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9363ec915adb6ccea5cef83bea6f316ef62406876e85e4bd8f9169f713e9dedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea207ddeb9a6daf519e2619d3ad80f296ae17918bff4c72951f73721e967ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea207ddeb9a6daf519e2619d3ad80f296ae17918bff4c72951f73721e967ce0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2e05875f42fb6588e0c11f3b26b5af8c76fb2fac4c7a7349cdea6674b741fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2e05875f42fb6588e0c11f3b26b5af8c76fb2fac4c7a7349cdea6674b741fb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4c33b689254614e509f24729309ddd9daf0cd25dde03554323435ca8b2f46696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c33b689254614e509f24729309ddd9daf0cd25dde03554323435ca8b2f46696\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:19Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.228954 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.239326 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62690a5e9fe2ce5d23ac823646261163a1c898472ebd1c7f139144ac39ce4e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.252178 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://993862b5f932440fc94110cb4a30c95bef39e1a4f56cef640f2c66a60238e9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1
412f896f94d94b30caf63d639748b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-26T21:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.264261 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.279908 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc 
kubenswrapper[4910]: I0226 21:58:09.293375 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.306803 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.323509 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9080180911f7a61dc6aa2c6aecf77ead390
da5209d135c2eb133b0e9f95df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.344027 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.359769 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.402266 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c102af0022666f948e5923ebd19de21279aaf7635387dd3036f2f7cde045de43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cac2d6aaa5106136a9d33f4186500bbb2e5edeeed8dce782cbe0ea4b1fed69d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:57:44Z\\\",\\\"message\\\":\\\"{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0226 21:57:44.823077 7169 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 21:57:44.823090 7169 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0226 
21:57:44.823104 7169 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0226 21:57:44.823150 7169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.460649 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82f712df-7f8f-4304-a47a-9ffdfa591bb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98e73558dfd4fa356a62499de610e53d1b16c3ab9d402622b419e79b56d17f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://947d23e15eb07388a7f15fba6b894f5fd0f55e31ff3f4120cb161e2ff8bff246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d23e15eb07388a7f15fba6b894f5fd0f55e31ff3f4120cb161e2ff8bff246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.479565 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a13781-4e1b-4adb-9cc1-13429c1271d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e48c9b7f7bf5d94ac47531e1fe19bb941e3fe6f8021659885fc524fef9df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a40ccdd9349631fd981b22379e818c212d9c104da690ac6546fd45b33b1f5ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:19Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0226 21:55:48.966417 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 21:55:48.967781 1 observer_polling.go:159] Starting file observer\\\\nI0226 21:55:48.969194 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 21:55:48.970106 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 21:56:16.298126 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nF0226 21:56:19.287453 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:48Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72b52b910bf5e9a00497ed002d962476646d86358d4316303a2442593e14b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e83b5f52098333208ceae9cb71f1600f0b28e2567f791320e835a5611d83ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f404d374a3a5c9d1fca5b21888d2af718a36d5c02ba8bf2590209a401e879840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.501351 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28
a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.517368 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.529880 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a156c2f1a9999424ad02c589efd48c3a40329c524f8d6a19578b1f367bf0e964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.546701 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.558830 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77ce2f229a2f211483de5951d54a264f42c151c94f4d868107cb052402ba905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22d075543a397b11a63e25912605cb14bee4deda66939088572c64d019de782b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.575069 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:58:08Z\\\",\\\"message\\\":\\\"2026-02-26T21:57:23+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_44f94011-387b-4a09-8095-a7407a6bc461\\\\n2026-02-26T21:57:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_44f94011-387b-4a09-8095-a7407a6bc461 to /host/opt/cni/bin/\\\\n2026-02-26T21:57:23Z [verbose] multus-daemon started\\\\n2026-02-26T21:57:23Z [verbose] Readiness Indicator file check\\\\n2026-02-26T21:58:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.587213 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77ce2f229a2f211483de5951d54a264f42c151c94f4d868107cb052402ba905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22d075543a397b11a63e25912605cb14bee4deda
66939088572c64d019de782b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.603132 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82f712df-7f8f-4304-a47a-9ffdfa591bb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98e73558dfd4fa356a62499de610e53d1b16c3ab9d402622b419e79b56d17f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://947d23e15eb07388a7f15fba6b894f5fd0f55e31ff3f4120cb161e2ff8bff246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d23e15eb07388a7f15fba6b894f5fd0f55e31ff3f4120cb161e2ff8bff246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.621685 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a13781-4e1b-4adb-9cc1-13429c1271d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e48c9b7f7bf5d94ac47531e1fe19bb941e3fe6f8021659885fc524fef9df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a40ccdd9349631fd981b22379e818c212d9c104da690ac6546fd45b33b1f5ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:19Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0226 21:55:48.966417 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 21:55:48.967781 1 observer_polling.go:159] Starting file observer\\\\nI0226 21:55:48.969194 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 21:55:48.970106 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 21:56:16.298126 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nF0226 21:56:19.287453 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:48Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72b52b910bf5e9a00497ed002d962476646d86358d4316303a2442593e14b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e83b5f52098333208ceae9cb71f1600f0b28e2567f791320e835a5611d83ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f404d374a3a5c9d1fca5b21888d2af718a36d5c02ba8bf2590209a401e879840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.641495 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28
a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.657058 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.670154 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a156c2f1a9999424ad02c589efd48c3a40329c524f8d6a19578b1f367bf0e964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.683894 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc 
kubenswrapper[4910]: I0226 21:58:09.719115 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7fc9a75-85ad-446d-a4c6-43f7ef0bf304\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4babf71eea3ac8c428ccc06dd30d6050c38c2ca1db1369bea420ee6f22a1c8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://d3133f944884e882b5f9ef27a231c66d5dc875ce598f6f873800068d8d91d1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29d5c8e707107c8468c8c93dad9ab2ac1942031a7d44ca608d617ad624b776d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0c9fe8f846e6307700a6e78bb8af0ce159b62ff979b434b4520792296601f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9363ec915adb6ccea5cef83bea6f316ef62406876e85e4bd8f9169f713e9dedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea207ddeb9a6daf519e2619d3ad80f296ae17918bff4c72951f73721e967ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea207ddeb9a6daf519e2619d3ad80f296ae17918bff4c72951f73721e967ce0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2e05875f42fb6588e0c11f3b26b5af8c76fb2fac4c7a7349cdea6674b741fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2e05875f42fb6588e0c11f3b26b5af8c76fb2fac4c7a7349cdea6674b741fb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4c33b689254614e509f24729309ddd9daf0cd25dde03554323435ca8b2f46696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c33b689254614e509f24729309ddd9daf0cd25dde03554323435ca8b2f46696\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.733215 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.746753 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62690a5e9fe2ce5d23ac823646261163a1c898472ebd1c7f139144ac39ce4e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.763304 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://993862b5f932440fc94110cb4a30c95bef39e1a4f56cef640f2c66a60238e9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1
412f896f94d94b30caf63d639748b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-26T21:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.778540 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.799112 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.820031 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.834426 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9080180911f7a61dc6aa2c6aecf77ead390
da5209d135c2eb133b0e9f95df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.846649 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.857945 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.877205 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c102af0022666f948e5923ebd19de21279aaf7635387dd3036f2f7cde045de43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cac2d6aaa5106136a9d33f4186500bbb2e5edeeed8dce782cbe0ea4b1fed69d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:57:44Z\\\",\\\"message\\\":\\\"{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0226 21:57:44.823077 7169 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 21:57:44.823090 7169 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0226 
21:57:44.823104 7169 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0226 21:57:44.823150 7169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:09Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:09 crc kubenswrapper[4910]: I0226 21:58:09.900450 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:09 crc kubenswrapper[4910]: E0226 21:58:09.900598 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.198762 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-795gt_d78660ec-f27f-43be-add6-8fab38329537/kube-multus/0.log" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.198927 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-795gt" event={"ID":"d78660ec-f27f-43be-add6-8fab38329537","Type":"ContainerStarted","Data":"3f88b7ea31f447ea3a2728e5c1543d2c60f64d949b0a4f14fbb8a9253a768faf"} Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.202533 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrq4q_41cb54c7-260b-42d4-8ae9-cf2a195721be/ovnkube-controller/3.log" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.203405 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrq4q_41cb54c7-260b-42d4-8ae9-cf2a195721be/ovnkube-controller/2.log" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.207902 4910 generic.go:334] "Generic (PLEG): container finished" podID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerID="c102af0022666f948e5923ebd19de21279aaf7635387dd3036f2f7cde045de43" exitCode=1 Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.207955 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" event={"ID":"41cb54c7-260b-42d4-8ae9-cf2a195721be","Type":"ContainerDied","Data":"c102af0022666f948e5923ebd19de21279aaf7635387dd3036f2f7cde045de43"} Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.208018 4910 scope.go:117] "RemoveContainer" containerID="cac2d6aaa5106136a9d33f4186500bbb2e5edeeed8dce782cbe0ea4b1fed69d2" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.209513 4910 scope.go:117] "RemoveContainer" 
containerID="c102af0022666f948e5923ebd19de21279aaf7635387dd3036f2f7cde045de43" Feb 26 21:58:10 crc kubenswrapper[4910]: E0226 21:58:10.210021 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xrq4q_openshift-ovn-kubernetes(41cb54c7-260b-42d4-8ae9-cf2a195721be)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.222602 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02
-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.242787 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.261662 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9080180911f7a61dc6aa2c6aecf77ead390
da5209d135c2eb133b0e9f95df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.281351 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.299882 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.331108 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c102af0022666f948e5923ebd19de21279aaf7635387dd3036f2f7cde045de43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cac2d6aaa5106136a9d33f4186500bbb2e5edeeed8dce782cbe0ea4b1fed69d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:57:44Z\\\",\\\"message\\\":\\\"{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0226 21:57:44.823077 7169 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 21:57:44.823090 7169 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0226 
21:57:44.823104 7169 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0226 21:57:44.823150 7169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.348667 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82f712df-7f8f-4304-a47a-9ffdfa591bb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98e73558dfd4fa356a62499de610e53d1b16c3ab9d402622b419e79b56d17f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://947d23e15eb07388a7f15fba6b894f5fd0f55e31ff3f4120cb161e2ff8bff246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d23e15eb07388a7f15fba6b894f5fd0f55e31ff3f4120cb161e2ff8bff246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.367843 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a13781-4e1b-4adb-9cc1-13429c1271d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e48c9b7f7bf5d94ac47531e1fe19bb941e3fe6f8021659885fc524fef9df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a40ccdd9349631fd981b22379e818c212d9c104da690ac6546fd45b33b1f5ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:19Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0226 21:55:48.966417 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 21:55:48.967781 1 observer_polling.go:159] Starting file observer\\\\nI0226 21:55:48.969194 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 21:55:48.970106 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 21:56:16.298126 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nF0226 21:56:19.287453 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:48Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72b52b910bf5e9a00497ed002d962476646d86358d4316303a2442593e14b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e83b5f52098333208ceae9cb71f1600f0b28e2567f791320e835a5611d83ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f404d374a3a5c9d1fca5b21888d2af718a36d5c02ba8bf2590209a401e879840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.389766 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28
a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.410044 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.428807 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a156c2f1a9999424ad02c589efd48c3a40329c524f8d6a19578b1f367bf0e964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.451321 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f88b7ea31f447ea3a2728e5c1543d2c60f64d949b0a4f14fbb8a9253a768faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:58:08Z\\\",\\\"message\\\":\\\"2026-02-26T21:57:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_44f94011-387b-4a09-8095-a7407a6bc461\\\\n2026-02-26T21:57:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_44f94011-387b-4a09-8095-a7407a6bc461 to /host/opt/cni/bin/\\\\n2026-02-26T21:57:23Z [verbose] multus-daemon started\\\\n2026-02-26T21:57:23Z [verbose] Readiness Indicator file check\\\\n2026-02-26T21:58:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.469306 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77ce2f229a2f211483de5951d54a264f42c151c94f4d868107cb052402ba905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22d075543a397b11a63e25912605cb14bee4deda
66939088572c64d019de782b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.497324 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7fc9a75-85ad-446d-a4c6-43f7ef0bf304\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4babf71eea3ac8c428ccc06dd30d6050c38c2ca1db1369bea420ee6f22a1c8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3133f944884e882b5f9ef27a231c66d5dc875ce598f6f873800068d8d91d1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29d5c8e707107c8468c8c93dad9ab2ac1942031a7d44ca608d617ad624b776d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0c9fe8f846e6307700a6e78bb8af0ce159b62ff979b434b4520792296601f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9363ec915adb6ccea5cef83bea6f316ef62406876e85e4bd8f9169f713e9dedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea207ddeb9a6daf519e2619d3ad80f296ae17918bff4c72951f73721e967ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea207ddeb9a6daf519e2619d3ad80f296ae17918bff4c72951f73721e967ce0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2e05875f42fb6588e0c11f3b26b5af8c76fb2fac4c7a7349cdea6674b741fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2e05875f42fb6588e0c11f3b26b5af8c76fb2fac4c7a7349cdea6674b741fb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4c33b689254614e509f24729309ddd9daf0cd25dde03554323435ca8b2f46696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c33b689254614e509f24729309ddd9daf0cd25dde03554323435ca8b2f46696\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.521124 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.539352 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62690a5e9fe2ce5d23ac823646261163a1c898472ebd1c7f139144ac39ce4e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.569115 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://993862b5f932440fc94110cb4a30c95bef39e1a4f56cef640f2c66a60238e9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1
412f896f94d94b30caf63d639748b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-26T21:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.584962 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.600983 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc 
kubenswrapper[4910]: I0226 21:58:10.618541 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc 
kubenswrapper[4910]: I0226 21:58:10.648698 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7fc9a75-85ad-446d-a4c6-43f7ef0bf304\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4babf71eea3ac8c428ccc06dd30d6050c38c2ca1db1369bea420ee6f22a1c8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://d3133f944884e882b5f9ef27a231c66d5dc875ce598f6f873800068d8d91d1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29d5c8e707107c8468c8c93dad9ab2ac1942031a7d44ca608d617ad624b776d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0c9fe8f846e6307700a6e78bb8af0ce159b62ff979b434b4520792296601f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9363ec915adb6ccea5cef83bea6f316ef62406876e85e4bd8f9169f713e9dedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea207ddeb9a6daf519e2619d3ad80f296ae17918bff4c72951f73721e967ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea207ddeb9a6daf519e2619d3ad80f296ae17918bff4c72951f73721e967ce0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2e05875f42fb6588e0c11f3b26b5af8c76fb2fac4c7a7349cdea6674b741fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2e05875f42fb6588e0c11f3b26b5af8c76fb2fac4c7a7349cdea6674b741fb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4c33b689254614e509f24729309ddd9daf0cd25dde03554323435ca8b2f46696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c33b689254614e509f24729309ddd9daf0cd25dde03554323435ca8b2f46696\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.665475 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.680483 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62690a5e9fe2ce5d23ac823646261163a1c898472ebd1c7f139144ac39ce4e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.703603 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://993862b5f932440fc94110cb4a30c95bef39e1a4f56cef640f2c66a60238e9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1
412f896f94d94b30caf63d639748b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-26T21:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.718433 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.738360 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.756229 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.775127 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9080180911f7a61dc6aa2c6aecf77ead390
da5209d135c2eb133b0e9f95df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.789237 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.805855 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.836724 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c102af0022666f948e5923ebd19de21279aaf7635387dd3036f2f7cde045de43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cac2d6aaa5106136a9d33f4186500bbb2e5edeeed8dce782cbe0ea4b1fed69d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:57:44Z\\\",\\\"message\\\":\\\"{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0226 21:57:44.823077 7169 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 21:57:44.823090 7169 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0226 
21:57:44.823104 7169 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0226 21:57:44.823150 7169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c102af0022666f948e5923ebd19de21279aaf7635387dd3036f2f7cde045de43\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:58:09Z\\\",\\\"message\\\":\\\"from k8s.io/client-go/informers/factory.go:160\\\\nI0226 21:58:09.884516 7449 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0226 21:58:09.884686 7449 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 21:58:09.884926 7449 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0226 21:58:09.885068 7449 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 21:58:09.885234 7449 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 21:58:09.885998 7449 factory.go:656] Stopping watch factory\\\\nI0226 21:58:09.916653 7449 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0226 21:58:09.916692 7449 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0226 21:58:09.916759 7449 ovnkube.go:599] Stopped ovnkube\\\\nI0226 21:58:09.916790 7449 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0226 21:58:09.916901 7449 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mou
ntPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\
\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.857043 4910 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f88b7ea31f447ea3a2728e5c1543d2c60f64d949b0a4f14fbb8a9253a768faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:58:08Z\\\",\\\"message\\\":\\\"2026-02-26T21:57:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_44f94011-387b-4a09-8095-a7407a6bc461\\\\n2026-02-26T21:57:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_44f94011-387b-4a09-8095-a7407a6bc461 to 
/host/opt/cni/bin/\\\\n2026-02-26T21:57:23Z [verbose] multus-daemon started\\\\n2026-02-26T21:57:23Z [verbose] Readiness Indicator file check\\\\n2026-02-26T21:58:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-cert
s\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.870849 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77ce2f229a2f211483de5951d54a264f42c151c94f4d868107cb052402ba905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22d075543a397b11a63e25912605cb14bee4deda
66939088572c64d019de782b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.887290 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82f712df-7f8f-4304-a47a-9ffdfa591bb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98e73558dfd4fa356a62499de610e53d1b16c3ab9d402622b419e79b56d17f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://947d23e15eb07388a7f15fba6b894f5fd0f55e31ff3f4120cb161e2ff8bff246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d23e15eb07388a7f15fba6b894f5fd0f55e31ff3f4120cb161e2ff8bff246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.901374 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.901410 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.901426 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:10 crc kubenswrapper[4910]: E0226 21:58:10.901584 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:10 crc kubenswrapper[4910]: E0226 21:58:10.901778 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:10 crc kubenswrapper[4910]: E0226 21:58:10.902370 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.905147 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a13781-4e1b-4adb-9cc1-13429c1271d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e48c9b7f7bf5d94ac47531e1fe19bb941e3fe6f8021659885fc524fef9df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a40ccdd9349631fd981b22379e818c212d9c104da690ac6546fd45b33b1f5ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:19Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop 
'(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0226 21:55:48.966417 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 21:55:48.967781 1 observer_polling.go:159] Starting file observer\\\\nI0226 21:55:48.969194 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 21:55:48.970106 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 21:56:16.298126 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nF0226 21:56:19.287453 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:48Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72b52b910bf5e9a00497ed002d962476646d86358d4316303a2442593e14b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e83b5f52098333208ceae9cb71f1600f0b28e2567f791320e835a5611d83ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f404d374a3a5c9d1fca5b21888d2af718a36d5c02ba8bf2590209a401e879840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.923939 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28
a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.943289 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:10 crc kubenswrapper[4910]: I0226 21:58:10.957927 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a156c2f1a9999424ad02c589efd48c3a40329c524f8d6a19578b1f367bf0e964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:58:10Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:11 crc kubenswrapper[4910]: E0226 21:58:11.051455 4910 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 21:58:11 crc kubenswrapper[4910]: I0226 21:58:11.215337 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrq4q_41cb54c7-260b-42d4-8ae9-cf2a195721be/ovnkube-controller/3.log" Feb 26 21:58:11 crc kubenswrapper[4910]: I0226 21:58:11.220711 4910 scope.go:117] "RemoveContainer" containerID="c102af0022666f948e5923ebd19de21279aaf7635387dd3036f2f7cde045de43" Feb 26 21:58:11 crc kubenswrapper[4910]: E0226 21:58:11.220973 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xrq4q_openshift-ovn-kubernetes(41cb54c7-260b-42d4-8ae9-cf2a195721be)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" Feb 26 21:58:11 crc kubenswrapper[4910]: I0226 21:58:11.241350 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:11Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:11 crc kubenswrapper[4910]: I0226 21:58:11.261273 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9080180911f7a61dc6aa2c6aecf77ead390
da5209d135c2eb133b0e9f95df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:11Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:11 crc kubenswrapper[4910]: I0226 21:58:11.283152 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:11Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:11 crc kubenswrapper[4910]: I0226 21:58:11.303880 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:11Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:11 crc kubenswrapper[4910]: I0226 21:58:11.334983 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c102af0022666f948e5923ebd19de21279aaf7635387dd3036f2f7cde045de43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c102af0022666f948e5923ebd19de21279aaf7635387dd3036f2f7cde045de43\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:58:09Z\\\",\\\"message\\\":\\\"from k8s.io/client-go/informers/factory.go:160\\\\nI0226 21:58:09.884516 7449 
reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0226 21:58:09.884686 7449 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 21:58:09.884926 7449 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 21:58:09.885068 7449 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 21:58:09.885234 7449 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 21:58:09.885998 7449 factory.go:656] Stopping watch factory\\\\nI0226 21:58:09.916653 7449 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0226 21:58:09.916692 7449 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0226 21:58:09.916759 7449 ovnkube.go:599] Stopped ovnkube\\\\nI0226 21:58:09.916790 7449 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0226 21:58:09.916901 7449 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:58:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrq4q_openshift-ovn-kubernetes(41cb54c7-260b-42d4-8ae9-cf2a195721be)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0
b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:11Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:11 crc kubenswrapper[4910]: I0226 21:58:11.355968 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:11Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:11 crc kubenswrapper[4910]: I0226 21:58:11.377620 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a13781-4e1b-4adb-9cc1-13429c1271d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e48c9b7f7bf5d94ac47531e1fe19bb941e3fe6f8021659885fc524fef9df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a40ccdd9349631fd981b22379e818c212d9c104da690ac6546fd45b33b1f5ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:19Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0226 21:55:48.966417 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 21:55:48.967781 1 observer_polling.go:159] Starting file observer\\\\nI0226 21:55:48.969194 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 21:55:48.970106 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 21:56:16.298126 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nF0226 21:56:19.287453 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:48Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72b52b910bf5e9a00497ed002d962476646d86358d4316303a2442593e14b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e83b5f52098333208ceae9cb71f1600f0b28e2567f791320e835a5611d83ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f404d374a3a5c9d1fca5b21888d2af718a36d5c02ba8bf2590209a401e879840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:11Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:11 crc kubenswrapper[4910]: I0226 21:58:11.398922 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28
a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:11Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:11 crc kubenswrapper[4910]: I0226 21:58:11.424030 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:11Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:11 crc kubenswrapper[4910]: I0226 21:58:11.439696 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a156c2f1a9999424ad02c589efd48c3a40329c524f8d6a19578b1f367bf0e964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:58:11Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:11 crc kubenswrapper[4910]: I0226 21:58:11.455480 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f88b7ea31f447ea3a2728e5c1543d2c60f64d949b0a4f14fbb8a9253a768faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:58:08Z\\\",\\\"message\\\":\\\"2026-02-26T21:57:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_44f94011-387b-4a09-8095-a7407a6bc461\\\\n2026-02-26T21:57:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_44f94011-387b-4a09-8095-a7407a6bc461 to /host/opt/cni/bin/\\\\n2026-02-26T21:57:23Z [verbose] multus-daemon started\\\\n2026-02-26T21:57:23Z [verbose] Readiness Indicator file check\\\\n2026-02-26T21:58:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:11Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:11 crc kubenswrapper[4910]: I0226 21:58:11.471673 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77ce2f229a2f211483de5951d54a264f42c151c94f4d868107cb052402ba905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22d075543a397b11a63e25912605cb14bee4deda
66939088572c64d019de782b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:11Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:11 crc kubenswrapper[4910]: I0226 21:58:11.493431 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82f712df-7f8f-4304-a47a-9ffdfa591bb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98e73558dfd4fa356a62499de610e53d1b16c3ab9d402622b419e79b56d17f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://947d23e15eb07388a7f15fba6b894f5fd0f55e31ff3f4120cb161e2ff8bff246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d23e15eb07388a7f15fba6b894f5fd0f55e31ff3f4120cb161e2ff8bff246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:11Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:11 crc kubenswrapper[4910]: I0226 21:58:11.515866 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:11Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:11 crc kubenswrapper[4910]: I0226 21:58:11.530855 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62690a5e9fe2ce5d23ac823646261163a1c898472ebd1c7f139144ac39ce4e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:11Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:11 crc kubenswrapper[4910]: I0226 21:58:11.548069 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://993862b5f932440fc94110cb4a30c95bef39e1a4f56cef640f2c66a60238e9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1
412f896f94d94b30caf63d639748b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-26T21:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:11Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:11 crc kubenswrapper[4910]: I0226 21:58:11.559365 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:11Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:11 crc kubenswrapper[4910]: I0226 21:58:11.570489 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:11Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:11 crc 
kubenswrapper[4910]: I0226 21:58:11.595261 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7fc9a75-85ad-446d-a4c6-43f7ef0bf304\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4babf71eea3ac8c428ccc06dd30d6050c38c2ca1db1369bea420ee6f22a1c8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://d3133f944884e882b5f9ef27a231c66d5dc875ce598f6f873800068d8d91d1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29d5c8e707107c8468c8c93dad9ab2ac1942031a7d44ca608d617ad624b776d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0c9fe8f846e6307700a6e78bb8af0ce159b62ff979b434b4520792296601f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9363ec915adb6ccea5cef83bea6f316ef62406876e85e4bd8f9169f713e9dedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea207ddeb9a6daf519e2619d3ad80f296ae17918bff4c72951f73721e967ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea207ddeb9a6daf519e2619d3ad80f296ae17918bff4c72951f73721e967ce0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2e05875f42fb6588e0c11f3b26b5af8c76fb2fac4c7a7349cdea6674b741fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2e05875f42fb6588e0c11f3b26b5af8c76fb2fac4c7a7349cdea6674b741fb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4c33b689254614e509f24729309ddd9daf0cd25dde03554323435ca8b2f46696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c33b689254614e509f24729309ddd9daf0cd25dde03554323435ca8b2f46696\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:11Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:11 crc kubenswrapper[4910]: I0226 21:58:11.901035 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:11 crc kubenswrapper[4910]: E0226 21:58:11.901277 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:12 crc kubenswrapper[4910]: I0226 21:58:12.218844 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-metrics-certs\") pod \"network-metrics-daemon-mhdkf\" (UID: \"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\") " pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:12 crc kubenswrapper[4910]: E0226 21:58:12.219053 4910 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 21:58:12 crc kubenswrapper[4910]: E0226 21:58:12.219142 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-metrics-certs podName:9bd0ab20-beab-4d8b-90d0-ef5bd1c10526 nodeName:}" failed. No retries permitted until 2026-02-26 21:59:16.219115613 +0000 UTC m=+241.298606184 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-metrics-certs") pod "network-metrics-daemon-mhdkf" (UID: "9bd0ab20-beab-4d8b-90d0-ef5bd1c10526") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 21:58:12 crc kubenswrapper[4910]: I0226 21:58:12.901479 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:12 crc kubenswrapper[4910]: I0226 21:58:12.901564 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:12 crc kubenswrapper[4910]: I0226 21:58:12.901571 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:12 crc kubenswrapper[4910]: E0226 21:58:12.901661 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:12 crc kubenswrapper[4910]: E0226 21:58:12.901846 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:12 crc kubenswrapper[4910]: E0226 21:58:12.901957 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:13 crc kubenswrapper[4910]: I0226 21:58:13.901565 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:13 crc kubenswrapper[4910]: E0226 21:58:13.901854 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:14 crc kubenswrapper[4910]: I0226 21:58:14.901253 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:14 crc kubenswrapper[4910]: I0226 21:58:14.901342 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:14 crc kubenswrapper[4910]: I0226 21:58:14.901266 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:14 crc kubenswrapper[4910]: E0226 21:58:14.901481 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:14 crc kubenswrapper[4910]: E0226 21:58:14.901584 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:14 crc kubenswrapper[4910]: E0226 21:58:14.901669 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:15 crc kubenswrapper[4910]: I0226 21:58:15.901616 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:15 crc kubenswrapper[4910]: E0226 21:58:15.901836 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:15 crc kubenswrapper[4910]: I0226 21:58:15.923487 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f88b7ea31f447ea3a2728e5c1543d2c60f64d949b0a4f14fbb8a9253a768faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:58:08Z\\\",\\\"message\\\":\\\"2026-02-26T21:57:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_44f94011-387b-4a09-8095-a7407a6bc461\\\\n2026-02-26T21:57:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_44f94011-387b-4a09-8095-a7407a6bc461 to /host/opt/cni/bin/\\\\n2026-02-26T21:57:23Z [verbose] multus-daemon started\\\\n2026-02-26T21:57:23Z [verbose] Readiness Indicator file check\\\\n2026-02-26T21:58:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:15Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:15 crc kubenswrapper[4910]: I0226 21:58:15.942321 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77ce2f229a2f211483de5951d54a264f42c151c94f4d868107cb052402ba905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22d075543a397b11a63e25912605cb14bee4deda
66939088572c64d019de782b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:15Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:15 crc kubenswrapper[4910]: I0226 21:58:15.961798 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82f712df-7f8f-4304-a47a-9ffdfa591bb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98e73558dfd4fa356a62499de610e53d1b16c3ab9d402622b419e79b56d17f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://947d23e15eb07388a7f15fba6b894f5fd0f55e31ff3f4120cb161e2ff8bff246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d23e15eb07388a7f15fba6b894f5fd0f55e31ff3f4120cb161e2ff8bff246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:15Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:15 crc kubenswrapper[4910]: I0226 21:58:15.980673 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a13781-4e1b-4adb-9cc1-13429c1271d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e48c9b7f7bf5d94ac47531e1fe19bb941e3fe6f8021659885fc524fef9df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a40ccdd9349631fd981b22379e818c212d9c104da690ac6546fd45b33b1f5ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:19Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0226 21:55:48.966417 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 21:55:48.967781 1 observer_polling.go:159] Starting file observer\\\\nI0226 21:55:48.969194 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 21:55:48.970106 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 21:56:16.298126 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nF0226 21:56:19.287453 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:48Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72b52b910bf5e9a00497ed002d962476646d86358d4316303a2442593e14b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e83b5f52098333208ceae9cb71f1600f0b28e2567f791320e835a5611d83ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f404d374a3a5c9d1fca5b21888d2af718a36d5c02ba8bf2590209a401e879840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:15Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.003047 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28
a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:16Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.023490 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:16Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.041496 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a156c2f1a9999424ad02c589efd48c3a40329c524f8d6a19578b1f367bf0e964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:58:16Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:16 crc kubenswrapper[4910]: E0226 21:58:16.052403 4910 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.060665 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:16Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:16 crc 
kubenswrapper[4910]: I0226 21:58:16.095515 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7fc9a75-85ad-446d-a4c6-43f7ef0bf304\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4babf71eea3ac8c428ccc06dd30d6050c38c2ca1db1369bea420ee6f22a1c8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://d3133f944884e882b5f9ef27a231c66d5dc875ce598f6f873800068d8d91d1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29d5c8e707107c8468c8c93dad9ab2ac1942031a7d44ca608d617ad624b776d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0c9fe8f846e6307700a6e78bb8af0ce159b62ff979b434b4520792296601f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9363ec915adb6ccea5cef83bea6f316ef62406876e85e4bd8f9169f713e9dedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea207ddeb9a6daf519e2619d3ad80f296ae17918bff4c72951f73721e967ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea207ddeb9a6daf519e2619d3ad80f296ae17918bff4c72951f73721e967ce0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2e05875f42fb6588e0c11f3b26b5af8c76fb2fac4c7a7349cdea6674b741fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2e05875f42fb6588e0c11f3b26b5af8c76fb2fac4c7a7349cdea6674b741fb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4c33b689254614e509f24729309ddd9daf0cd25dde03554323435ca8b2f46696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c33b689254614e509f24729309ddd9daf0cd25dde03554323435ca8b2f46696\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:16Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.116042 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:16Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.132791 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62690a5e9fe2ce5d23ac823646261163a1c898472ebd1c7f139144ac39ce4e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:16Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.152892 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://993862b5f932440fc94110cb4a30c95bef39e1a4f56cef640f2c66a60238e9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1
412f896f94d94b30caf63d639748b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-26T21:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:16Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.167549 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:16Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.189315 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:16Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.206442 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:16Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.221376 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9080180911f7a61dc6aa2c6aecf77ead390
da5209d135c2eb133b0e9f95df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:16Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.237096 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:16Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.254054 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:16Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.282246 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c102af0022666f948e5923ebd19de21279aaf7635387dd3036f2f7cde045de43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c102af0022666f948e5923ebd19de21279aaf7635387dd3036f2f7cde045de43\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:58:09Z\\\",\\\"message\\\":\\\"from k8s.io/client-go/informers/factory.go:160\\\\nI0226 21:58:09.884516 7449 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0226 21:58:09.884686 7449 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0226 21:58:09.884926 7449 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 21:58:09.885068 7449 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 21:58:09.885234 7449 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 21:58:09.885998 7449 factory.go:656] Stopping watch factory\\\\nI0226 21:58:09.916653 7449 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0226 21:58:09.916692 7449 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0226 21:58:09.916759 7449 ovnkube.go:599] Stopped ovnkube\\\\nI0226 21:58:09.916790 7449 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0226 21:58:09.916901 7449 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:58:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrq4q_openshift-ovn-kubernetes(41cb54c7-260b-42d4-8ae9-cf2a195721be)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0
b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:16Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.452857 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.452920 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.452929 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.452944 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.452954 4910 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:58:16Z","lastTransitionTime":"2026-02-26T21:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:58:16 crc kubenswrapper[4910]: E0226 21:58:16.471487 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:16Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.475629 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.475657 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.475665 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.475679 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.475691 4910 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:58:16Z","lastTransitionTime":"2026-02-26T21:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:58:16 crc kubenswrapper[4910]: E0226 21:58:16.494178 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:16Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.498579 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.498641 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.498664 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.498694 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.498713 4910 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:58:16Z","lastTransitionTime":"2026-02-26T21:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:58:16 crc kubenswrapper[4910]: E0226 21:58:16.518959 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:16Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.523888 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.523950 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.523972 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.524002 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.524022 4910 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:58:16Z","lastTransitionTime":"2026-02-26T21:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:58:16 crc kubenswrapper[4910]: E0226 21:58:16.542679 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:16Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.547570 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.547621 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.547637 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.547659 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.547671 4910 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:58:16Z","lastTransitionTime":"2026-02-26T21:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:58:16 crc kubenswrapper[4910]: E0226 21:58:16.567846 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:16Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:16 crc kubenswrapper[4910]: E0226 21:58:16.568103 4910 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.900534 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.900550 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:16 crc kubenswrapper[4910]: E0226 21:58:16.900731 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:16 crc kubenswrapper[4910]: I0226 21:58:16.900550 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:16 crc kubenswrapper[4910]: E0226 21:58:16.900833 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:16 crc kubenswrapper[4910]: E0226 21:58:16.900884 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:17 crc kubenswrapper[4910]: I0226 21:58:17.901569 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:17 crc kubenswrapper[4910]: E0226 21:58:17.901771 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:18 crc kubenswrapper[4910]: I0226 21:58:18.901514 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:18 crc kubenswrapper[4910]: I0226 21:58:18.901677 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:18 crc kubenswrapper[4910]: I0226 21:58:18.901695 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:18 crc kubenswrapper[4910]: E0226 21:58:18.902093 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:18 crc kubenswrapper[4910]: E0226 21:58:18.902539 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:18 crc kubenswrapper[4910]: E0226 21:58:18.902706 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:19 crc kubenswrapper[4910]: I0226 21:58:19.901298 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:19 crc kubenswrapper[4910]: E0226 21:58:19.901504 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:20 crc kubenswrapper[4910]: I0226 21:58:20.900992 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:20 crc kubenswrapper[4910]: E0226 21:58:20.901246 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:20 crc kubenswrapper[4910]: I0226 21:58:20.901407 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:20 crc kubenswrapper[4910]: E0226 21:58:20.901601 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:20 crc kubenswrapper[4910]: I0226 21:58:20.902903 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:20 crc kubenswrapper[4910]: E0226 21:58:20.903121 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:21 crc kubenswrapper[4910]: E0226 21:58:21.053579 4910 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 21:58:21 crc kubenswrapper[4910]: I0226 21:58:21.901143 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:21 crc kubenswrapper[4910]: E0226 21:58:21.901414 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:22 crc kubenswrapper[4910]: I0226 21:58:22.901000 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:22 crc kubenswrapper[4910]: I0226 21:58:22.901016 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:22 crc kubenswrapper[4910]: I0226 21:58:22.901202 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:22 crc kubenswrapper[4910]: E0226 21:58:22.901418 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:22 crc kubenswrapper[4910]: E0226 21:58:22.901738 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:22 crc kubenswrapper[4910]: I0226 21:58:22.903231 4910 scope.go:117] "RemoveContainer" containerID="c102af0022666f948e5923ebd19de21279aaf7635387dd3036f2f7cde045de43" Feb 26 21:58:22 crc kubenswrapper[4910]: E0226 21:58:22.903500 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xrq4q_openshift-ovn-kubernetes(41cb54c7-260b-42d4-8ae9-cf2a195721be)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" Feb 26 21:58:22 crc kubenswrapper[4910]: E0226 21:58:22.903783 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:23 crc kubenswrapper[4910]: I0226 21:58:23.901648 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:23 crc kubenswrapper[4910]: E0226 21:58:23.902631 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:24 crc kubenswrapper[4910]: I0226 21:58:24.901027 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:24 crc kubenswrapper[4910]: I0226 21:58:24.901096 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:24 crc kubenswrapper[4910]: E0226 21:58:24.901268 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:24 crc kubenswrapper[4910]: I0226 21:58:24.901304 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:24 crc kubenswrapper[4910]: E0226 21:58:24.901607 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:24 crc kubenswrapper[4910]: E0226 21:58:24.901703 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:25 crc kubenswrapper[4910]: I0226 21:58:25.901124 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:25 crc kubenswrapper[4910]: E0226 21:58:25.901469 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:25 crc kubenswrapper[4910]: I0226 21:58:25.923040 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50dce6a7-297f-49b9-8994-bc73b6fb33a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://928646aedf6b87c1dad7df9a87fbf4e8872c35966acc9d7ccfb27c3e398b2af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9080180911f7a61dc6aa2c6aecf77ead390da5209d135c2eb133b0e9f95df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b8l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mnrdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:25 crc kubenswrapper[4910]: I0226 21:58:25.945609 4910 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2001cfd98750b58238a7fec47d69c3b329fef3c50056ba357a96a6d285c68659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:25 crc kubenswrapper[4910]: I0226 21:58:25.964303 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:25 crc kubenswrapper[4910]: I0226 21:58:25.993505 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41cb54c7-260b-42d4-8ae9-cf2a195721be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c102af0022666f948e5923ebd19de21279aaf7635387dd3036f2f7cde045de43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c102af0022666f948e5923ebd19de21279aaf7635387dd3036f2f7cde045de43\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:58:09Z\\\",\\\"message\\\":\\\"from k8s.io/client-go/informers/factory.go:160\\\\nI0226 21:58:09.884516 7449 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0226 21:58:09.884686 7449 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0226 21:58:09.884926 7449 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 21:58:09.885068 7449 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 21:58:09.885234 7449 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 21:58:09.885998 7449 factory.go:656] Stopping watch factory\\\\nI0226 21:58:09.916653 7449 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0226 21:58:09.916692 7449 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0226 21:58:09.916759 7449 ovnkube.go:599] Stopped ovnkube\\\\nI0226 21:58:09.916790 7449 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0226 21:58:09.916901 7449 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:58:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrq4q_openshift-ovn-kubernetes(41cb54c7-260b-42d4-8ae9-cf2a195721be)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e613b422e1b2814a0
b02a082b9f21b195b866be940fcc450cdf276243537d50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txf8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrq4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:25Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.014278 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cbce70d-0117-43ec-9a6e-7e35e701b098\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80560b826c851cffafca95665f474ba2768adb51d1a19072019cf0671126b70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36082d4b6081d2e40e727728735f4bfef4037924080f79065d0021839c0e48fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4014678499ab40ae1b7e77637a3cdc5bea50c49444e1e31a939525a280c92b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6dfeeebef5bbc91ee59fb965df473f2e1c5eba0d809fea16d6901f178af94009\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.032739 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:26 crc kubenswrapper[4910]: E0226 21:58:26.054943 4910 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.063000 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ed1156e-3afd-4214-8184-33b187a1b2a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"l
astState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 21:56:27.511962 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 21:56:27.512712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 21:56:27.514260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1493588644/tls.crt::/tmp/serving-cert-1493588644/tls.key\\\\\\\"\\\\nI0226 21:56:27.678421 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 21:56:27.685664 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 21:56:27.685697 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 21:56:27.685734 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 21:56:27.685744 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 21:56:27.692797 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0226 21:56:27.692809 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 21:56:27.692855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692870 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 21:56:27.692882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 21:56:27.692890 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0226 21:56:27.692897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 21:56:27.692904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 21:56:27.694030 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:56:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.085608 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18c395d5d004a8107eb9a6267a836774a018d5d0dff7106b1a5db57aa887a514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd41824ead87ca6426db7848ebc4157fc8e5109e41db6b94834a3d6fc3416a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.100840 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a156c2f1a9999424ad02c589efd48c3a40329c524f8d6a19578b1f367bf0e964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T21:58:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.120005 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-795gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78660ec-f27f-43be-add6-8fab38329537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f88b7ea31f447ea3a2728e5c1543d2c60f64d949b0a4f14fbb8a9253a768faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T21:58:08Z\\\",\\\"message\\\":\\\"2026-02-26T21:57:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_44f94011-387b-4a09-8095-a7407a6bc461\\\\n2026-02-26T21:57:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_44f94011-387b-4a09-8095-a7407a6bc461 to /host/opt/cni/bin/\\\\n2026-02-26T21:57:23Z [verbose] multus-daemon started\\\\n2026-02-26T21:57:23Z [verbose] Readiness Indicator file check\\\\n2026-02-26T21:58:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkcjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-795gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.140507 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69251a00-4e6e-48f6-ae1b-d3001d22b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77ce2f229a2f211483de5951d54a264f42c151c94f4d868107cb052402ba905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22d075543a397b11a63e25912605cb14bee4deda
66939088572c64d019de782b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glfzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6xpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.159062 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82f712df-7f8f-4304-a47a-9ffdfa591bb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98e73558dfd4fa356a62499de610e53d1b16c3ab9d402622b419e79b56d17f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://947d23e15eb07388a7f15fba6b894f5fd0f55e31ff3f4120cb161e2ff8bff246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d23e15eb07388a7f15fba6b894f5fd0f55e31ff3f4120cb161e2ff8bff246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.174717 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8a13781-4e1b-4adb-9cc1-13429c1271d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e48c9b7f7bf5d94ac47531e1fe19bb941e3fe6f8021659885fc524fef9df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a40ccdd9349631fd981b22379e818c212d9c104da690ac6546fd45b33b1f5ddc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T21:56:19Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0226 21:55:48.966417 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 21:55:48.967781 1 observer_polling.go:159] Starting file observer\\\\nI0226 21:55:48.969194 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 21:55:48.970106 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 21:56:16.298126 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nF0226 21:56:19.287453 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:48Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:56:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d72b52b910bf5e9a00497ed002d962476646d86358d4316303a2442593e14b17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e83b5f52098333208ceae9cb71f1600f0b28e2567f791320e835a5611d83ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f404d374a3a5c9d1fca5b21888d2af718a36d5c02ba8bf2590209a401e879840\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.191638 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m5cf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5680be55-6cf7-4a72-a5b8-4b49efe4a020\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62690a5e9fe2ce5d23ac823646261163a1c898472ebd1c7f139144ac39ce4e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8555\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m5cf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.211097 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht47v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5b52f6d-a85a-4cbb-96a7-45c3b2ed492c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://993862b5f932440fc94110cb4a30c95bef39e1a4f56cef640f2c66a60238e9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40bbe1194eeac7a4e47b0e978d9b9589ec842ce32e09592a6c4e1801e9153a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597c734f9b24b1545c59c515da470d881e223ebeb93f164568f58c08a9a05dd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://593a9e5a03577c08f0934458faf1897905ab149498b0fc296777bec95259625b\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e8bdaeefdb8344830dd0f068e2d95b48f98ebddfdafd9d3d42c9300b60968f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1
412f896f94d94b30caf63d639748b47e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e051a98337bb5927072a3b241a55f1412f896f94d94b30caf63d639748b47e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9cccb4f414e69f4fa977e508635b6ff780dc8514e4a4da57626adaeae98d6dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-26T21:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fhj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:56:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht47v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.225964 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zbq6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02ab3935-85f7-493a-b88e-205f5018e5d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816a4940f785763ebefa50437d115fa4eb3d8830032ed2fffb67fd2c6f674ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zbqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zbq6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.241800 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:57:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qfz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:57:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhdkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:26 crc 
kubenswrapper[4910]: I0226 21:58:26.275055 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7fc9a75-85ad-446d-a4c6-43f7ef0bf304\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T21:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4babf71eea3ac8c428ccc06dd30d6050c38c2ca1db1369bea420ee6f22a1c8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://d3133f944884e882b5f9ef27a231c66d5dc875ce598f6f873800068d8d91d1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29d5c8e707107c8468c8c93dad9ab2ac1942031a7d44ca608d617ad624b776d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0c9fe8f846e6307700a6e78bb8af0ce159b62ff979b434b4520792296601f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9363ec915adb6ccea5cef83bea6f316ef62406876e85e4bd8f9169f713e9dedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T21:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea207ddeb9a6daf519e2619d3ad80f296ae17918bff4c72951f73721e967ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea207ddeb9a6daf519e2619d3ad80f296ae17918bff4c72951f73721e967ce0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2e05875f42fb6588e0c11f3b26b5af8c76fb2fac4c7a7349cdea6674b741fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2e05875f42fb6588e0c11f3b26b5af8c76fb2fac4c7a7349cdea6674b741fb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4c33b689254614e509f24729309ddd9daf0cd25dde03554323435ca8b2f46696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c33b689254614e509f24729309ddd9daf0cd25dde03554323435ca8b2f46696\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T21:55:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T21:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T21:55:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.291368 4910 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T21:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.900854 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.900982 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:26 crc kubenswrapper[4910]: E0226 21:58:26.901075 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.901148 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:26 crc kubenswrapper[4910]: E0226 21:58:26.901387 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:26 crc kubenswrapper[4910]: E0226 21:58:26.901615 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.922765 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.922837 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.922861 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.922892 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.922914 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:58:26Z","lastTransitionTime":"2026-02-26T21:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:58:26 crc kubenswrapper[4910]: E0226 21:58:26.945995 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.951318 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.951383 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.951395 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.951413 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.951425 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:58:26Z","lastTransitionTime":"2026-02-26T21:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:58:26 crc kubenswrapper[4910]: E0226 21:58:26.970009 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.974630 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.974678 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.974696 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.974723 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:58:26 crc kubenswrapper[4910]: I0226 21:58:26.974741 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:58:26Z","lastTransitionTime":"2026-02-26T21:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:58:26 crc kubenswrapper[4910]: E0226 21:58:26.995398 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:26Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:27 crc kubenswrapper[4910]: I0226 21:58:27.000667 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:58:27 crc kubenswrapper[4910]: I0226 21:58:27.000743 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:58:27 crc kubenswrapper[4910]: I0226 21:58:27.000766 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:58:27 crc kubenswrapper[4910]: I0226 21:58:27.000797 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:58:27 crc kubenswrapper[4910]: I0226 21:58:27.000818 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:58:27Z","lastTransitionTime":"2026-02-26T21:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:58:27 crc kubenswrapper[4910]: E0226 21:58:27.016058 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:27Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:27 crc kubenswrapper[4910]: I0226 21:58:27.021150 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:58:27 crc kubenswrapper[4910]: I0226 21:58:27.021252 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:58:27 crc kubenswrapper[4910]: I0226 21:58:27.021269 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:58:27 crc kubenswrapper[4910]: I0226 21:58:27.021294 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:58:27 crc kubenswrapper[4910]: I0226 21:58:27.021311 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:58:27Z","lastTransitionTime":"2026-02-26T21:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 21:58:27 crc kubenswrapper[4910]: E0226 21:58:27.041381 4910 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T21:58:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T21:58:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aee21706-93ed-49c2-8be6-5ac437ca1d73\\\",\\\"systemUUID\\\":\\\"5c941e7c-dc2b-467c-aace-fa09e4c41edd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T21:58:27Z is after 2025-08-24T17:21:41Z" Feb 26 21:58:27 crc kubenswrapper[4910]: E0226 21:58:27.041690 4910 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 21:58:27 crc kubenswrapper[4910]: I0226 21:58:27.901234 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:27 crc kubenswrapper[4910]: E0226 21:58:27.901484 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:28 crc kubenswrapper[4910]: I0226 21:58:28.900777 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:28 crc kubenswrapper[4910]: I0226 21:58:28.900857 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:28 crc kubenswrapper[4910]: I0226 21:58:28.900777 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:28 crc kubenswrapper[4910]: E0226 21:58:28.900977 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:28 crc kubenswrapper[4910]: E0226 21:58:28.901138 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:28 crc kubenswrapper[4910]: E0226 21:58:28.901261 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:29 crc kubenswrapper[4910]: I0226 21:58:29.901368 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:29 crc kubenswrapper[4910]: E0226 21:58:29.901596 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:30 crc kubenswrapper[4910]: I0226 21:58:30.901301 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:30 crc kubenswrapper[4910]: I0226 21:58:30.901323 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:30 crc kubenswrapper[4910]: E0226 21:58:30.901509 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:30 crc kubenswrapper[4910]: E0226 21:58:30.901673 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:30 crc kubenswrapper[4910]: I0226 21:58:30.901330 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:30 crc kubenswrapper[4910]: E0226 21:58:30.901845 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:31 crc kubenswrapper[4910]: E0226 21:58:31.056255 4910 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 21:58:31 crc kubenswrapper[4910]: I0226 21:58:31.900886 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:31 crc kubenswrapper[4910]: E0226 21:58:31.901094 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:32 crc kubenswrapper[4910]: I0226 21:58:32.901431 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:32 crc kubenswrapper[4910]: I0226 21:58:32.901467 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:32 crc kubenswrapper[4910]: I0226 21:58:32.901551 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:32 crc kubenswrapper[4910]: E0226 21:58:32.901626 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:32 crc kubenswrapper[4910]: E0226 21:58:32.901719 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:32 crc kubenswrapper[4910]: E0226 21:58:32.901860 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:33 crc kubenswrapper[4910]: I0226 21:58:33.901304 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:33 crc kubenswrapper[4910]: E0226 21:58:33.901465 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:33 crc kubenswrapper[4910]: I0226 21:58:33.902428 4910 scope.go:117] "RemoveContainer" containerID="c102af0022666f948e5923ebd19de21279aaf7635387dd3036f2f7cde045de43" Feb 26 21:58:33 crc kubenswrapper[4910]: E0226 21:58:33.902640 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xrq4q_openshift-ovn-kubernetes(41cb54c7-260b-42d4-8ae9-cf2a195721be)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" Feb 26 21:58:34 crc kubenswrapper[4910]: I0226 21:58:34.904311 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:34 crc kubenswrapper[4910]: I0226 21:58:34.904438 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:34 crc kubenswrapper[4910]: E0226 21:58:34.904545 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:34 crc kubenswrapper[4910]: E0226 21:58:34.904676 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:34 crc kubenswrapper[4910]: I0226 21:58:34.904810 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:34 crc kubenswrapper[4910]: E0226 21:58:34.904924 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:35 crc kubenswrapper[4910]: I0226 21:58:35.901652 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:35 crc kubenswrapper[4910]: E0226 21:58:35.901930 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:35 crc kubenswrapper[4910]: I0226 21:58:35.998287 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=75.998254347 podStartE2EDuration="1m15.998254347s" podCreationTimestamp="2026-02-26 21:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:58:35.998118943 +0000 UTC m=+201.077609484" watchObservedRunningTime="2026-02-26 21:58:35.998254347 +0000 UTC m=+201.077744928" Feb 26 21:58:35 crc kubenswrapper[4910]: I0226 21:58:35.998536 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mnrdx" podStartSLOduration=130.998525255 podStartE2EDuration="2m10.998525255s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:58:35.975058281 +0000 UTC m=+201.054548822" watchObservedRunningTime="2026-02-26 21:58:35.998525255 +0000 UTC m=+201.078015826" Feb 26 21:58:36 crc kubenswrapper[4910]: E0226 21:58:36.056678 4910 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 21:58:36 crc kubenswrapper[4910]: I0226 21:58:36.115747 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podStartSLOduration=132.115727149 podStartE2EDuration="2m12.115727149s" podCreationTimestamp="2026-02-26 21:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:58:36.115653647 +0000 UTC m=+201.195144198" watchObservedRunningTime="2026-02-26 21:58:36.115727149 +0000 UTC m=+201.195217700" Feb 26 21:58:36 crc kubenswrapper[4910]: I0226 21:58:36.116105 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-795gt" podStartSLOduration=131.11609975 podStartE2EDuration="2m11.11609975s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:58:36.1007704 +0000 UTC m=+201.180260961" watchObservedRunningTime="2026-02-26 21:58:36.11609975 +0000 UTC m=+201.195590301" Feb 26 21:58:36 crc kubenswrapper[4910]: I0226 21:58:36.141874 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=55.14186344 podStartE2EDuration="55.14186344s" podCreationTimestamp="2026-02-26 21:57:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:58:36.141350244 +0000 UTC m=+201.220840795" watchObservedRunningTime="2026-02-26 21:58:36.14186344 +0000 UTC m=+201.221353991" Feb 26 21:58:36 crc kubenswrapper[4910]: I0226 21:58:36.183032 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=59.183011051 podStartE2EDuration="59.183011051s" podCreationTimestamp="2026-02-26 21:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:58:36.163891471 +0000 UTC m=+201.243382032" watchObservedRunningTime="2026-02-26 21:58:36.183011051 +0000 UTC m=+201.262501592" Feb 26 21:58:36 crc kubenswrapper[4910]: I0226 21:58:36.183248 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=106.183243977 podStartE2EDuration="1m46.183243977s" podCreationTimestamp="2026-02-26 21:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:58:36.183136004 +0000 UTC m=+201.262626555" watchObservedRunningTime="2026-02-26 21:58:36.183243977 +0000 UTC m=+201.262734518" Feb 26 21:58:36 crc kubenswrapper[4910]: I0226 21:58:36.200247 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-ht47v" podStartSLOduration=131.200226484 podStartE2EDuration="2m11.200226484s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:58:36.200101041 +0000 UTC m=+201.279591612" watchObservedRunningTime="2026-02-26 21:58:36.200226484 +0000 UTC m=+201.279717045" Feb 26 21:58:36 crc kubenswrapper[4910]: I0226 21:58:36.214729 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-zbq6c" podStartSLOduration=132.214709471 podStartE2EDuration="2m12.214709471s" podCreationTimestamp="2026-02-26 21:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:58:36.21364905 +0000 UTC m=+201.293139611" watchObservedRunningTime="2026-02-26 21:58:36.214709471 +0000 UTC m=+201.294200032" Feb 26 21:58:36 crc kubenswrapper[4910]: I0226 21:58:36.246847 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=32.246830423 podStartE2EDuration="32.246830423s" podCreationTimestamp="2026-02-26 21:58:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:58:36.245488744 +0000 UTC m=+201.324979285" watchObservedRunningTime="2026-02-26 21:58:36.246830423 +0000 UTC m=+201.326320964" Feb 26 21:58:36 crc kubenswrapper[4910]: I0226 21:58:36.286121 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-m5cf2" podStartSLOduration=132.28609 podStartE2EDuration="2m12.28609s" podCreationTimestamp="2026-02-26 21:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:58:36.285680607 +0000 UTC m=+201.365171218" watchObservedRunningTime="2026-02-26 21:58:36.28609 +0000 UTC m=+201.365580581" Feb 26 21:58:36 crc kubenswrapper[4910]: I0226 21:58:36.901371 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:36 crc kubenswrapper[4910]: I0226 21:58:36.901412 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:36 crc kubenswrapper[4910]: E0226 21:58:36.901671 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:36 crc kubenswrapper[4910]: E0226 21:58:36.902090 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:36 crc kubenswrapper[4910]: I0226 21:58:36.902450 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:36 crc kubenswrapper[4910]: E0226 21:58:36.902726 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.263386 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.264268 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.264306 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.264333 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.264351 4910 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T21:58:37Z","lastTransitionTime":"2026-02-26T21:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.328044 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgd7q"] Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.328549 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgd7q" Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.331800 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.332274 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.332275 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.333382 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.504232 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dec0e4f-482f-49bb-9f37-5d2cb74290a4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rgd7q\" (UID: \"2dec0e4f-482f-49bb-9f37-5d2cb74290a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgd7q" Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.504308 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2dec0e4f-482f-49bb-9f37-5d2cb74290a4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rgd7q\" (UID: \"2dec0e4f-482f-49bb-9f37-5d2cb74290a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgd7q" Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.504345 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/2dec0e4f-482f-49bb-9f37-5d2cb74290a4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rgd7q\" (UID: \"2dec0e4f-482f-49bb-9f37-5d2cb74290a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgd7q" Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.504504 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2dec0e4f-482f-49bb-9f37-5d2cb74290a4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rgd7q\" (UID: \"2dec0e4f-482f-49bb-9f37-5d2cb74290a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgd7q" Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.504564 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2dec0e4f-482f-49bb-9f37-5d2cb74290a4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rgd7q\" (UID: \"2dec0e4f-482f-49bb-9f37-5d2cb74290a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgd7q" Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.605274 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2dec0e4f-482f-49bb-9f37-5d2cb74290a4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rgd7q\" (UID: \"2dec0e4f-482f-49bb-9f37-5d2cb74290a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgd7q" Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.605343 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dec0e4f-482f-49bb-9f37-5d2cb74290a4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rgd7q\" (UID: \"2dec0e4f-482f-49bb-9f37-5d2cb74290a4\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgd7q" Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.605375 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2dec0e4f-482f-49bb-9f37-5d2cb74290a4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rgd7q\" (UID: \"2dec0e4f-482f-49bb-9f37-5d2cb74290a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgd7q" Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.605405 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2dec0e4f-482f-49bb-9f37-5d2cb74290a4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rgd7q\" (UID: \"2dec0e4f-482f-49bb-9f37-5d2cb74290a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgd7q" Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.605497 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2dec0e4f-482f-49bb-9f37-5d2cb74290a4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rgd7q\" (UID: \"2dec0e4f-482f-49bb-9f37-5d2cb74290a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgd7q" Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.605589 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2dec0e4f-482f-49bb-9f37-5d2cb74290a4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rgd7q\" (UID: \"2dec0e4f-482f-49bb-9f37-5d2cb74290a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgd7q" Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.605619 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/2dec0e4f-482f-49bb-9f37-5d2cb74290a4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rgd7q\" (UID: \"2dec0e4f-482f-49bb-9f37-5d2cb74290a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgd7q" Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.607234 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2dec0e4f-482f-49bb-9f37-5d2cb74290a4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rgd7q\" (UID: \"2dec0e4f-482f-49bb-9f37-5d2cb74290a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgd7q" Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.614846 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dec0e4f-482f-49bb-9f37-5d2cb74290a4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rgd7q\" (UID: \"2dec0e4f-482f-49bb-9f37-5d2cb74290a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgd7q" Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.636355 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2dec0e4f-482f-49bb-9f37-5d2cb74290a4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rgd7q\" (UID: \"2dec0e4f-482f-49bb-9f37-5d2cb74290a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgd7q" Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.647496 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgd7q" Feb 26 21:58:37 crc kubenswrapper[4910]: W0226 21:58:37.675508 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2dec0e4f_482f_49bb_9f37_5d2cb74290a4.slice/crio-1e7c6b4c737bf5dfdea3c1d8e0088fd0733d15f0dc63bdb18598b3c39d9a4086 WatchSource:0}: Error finding container 1e7c6b4c737bf5dfdea3c1d8e0088fd0733d15f0dc63bdb18598b3c39d9a4086: Status 404 returned error can't find the container with id 1e7c6b4c737bf5dfdea3c1d8e0088fd0733d15f0dc63bdb18598b3c39d9a4086 Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.901297 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:37 crc kubenswrapper[4910]: E0226 21:58:37.901854 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.938211 4910 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 26 21:58:37 crc kubenswrapper[4910]: I0226 21:58:37.949258 4910 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 26 21:58:38 crc kubenswrapper[4910]: I0226 21:58:38.318123 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgd7q" event={"ID":"2dec0e4f-482f-49bb-9f37-5d2cb74290a4","Type":"ContainerStarted","Data":"97757712144c2837e02b40360545b9918ff45d4c3375598d5b68f0f06c1a4ab7"} Feb 26 21:58:38 crc kubenswrapper[4910]: I0226 21:58:38.318253 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgd7q" event={"ID":"2dec0e4f-482f-49bb-9f37-5d2cb74290a4","Type":"ContainerStarted","Data":"1e7c6b4c737bf5dfdea3c1d8e0088fd0733d15f0dc63bdb18598b3c39d9a4086"} Feb 26 21:58:38 crc kubenswrapper[4910]: I0226 21:58:38.338362 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgd7q" podStartSLOduration=133.338337969 podStartE2EDuration="2m13.338337969s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:58:38.337549807 +0000 UTC m=+203.417040388" watchObservedRunningTime="2026-02-26 21:58:38.338337969 +0000 UTC m=+203.417828540" Feb 26 21:58:38 crc kubenswrapper[4910]: I0226 21:58:38.901511 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:38 crc kubenswrapper[4910]: I0226 21:58:38.901697 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:38 crc kubenswrapper[4910]: I0226 21:58:38.901752 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:38 crc kubenswrapper[4910]: E0226 21:58:38.901851 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:38 crc kubenswrapper[4910]: E0226 21:58:38.901991 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:38 crc kubenswrapper[4910]: E0226 21:58:38.902035 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:39 crc kubenswrapper[4910]: I0226 21:58:39.901600 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:39 crc kubenswrapper[4910]: E0226 21:58:39.901824 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:40 crc kubenswrapper[4910]: I0226 21:58:40.901369 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:40 crc kubenswrapper[4910]: I0226 21:58:40.901423 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:40 crc kubenswrapper[4910]: I0226 21:58:40.901497 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:40 crc kubenswrapper[4910]: E0226 21:58:40.901616 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:40 crc kubenswrapper[4910]: E0226 21:58:40.901796 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:40 crc kubenswrapper[4910]: E0226 21:58:40.902083 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:41 crc kubenswrapper[4910]: E0226 21:58:41.058520 4910 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 21:58:41 crc kubenswrapper[4910]: I0226 21:58:41.901032 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:41 crc kubenswrapper[4910]: E0226 21:58:41.901286 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:42 crc kubenswrapper[4910]: I0226 21:58:42.900816 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:42 crc kubenswrapper[4910]: I0226 21:58:42.900878 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:42 crc kubenswrapper[4910]: I0226 21:58:42.900816 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:42 crc kubenswrapper[4910]: E0226 21:58:42.901000 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:42 crc kubenswrapper[4910]: E0226 21:58:42.901116 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:42 crc kubenswrapper[4910]: E0226 21:58:42.901255 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:43 crc kubenswrapper[4910]: I0226 21:58:43.901591 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:43 crc kubenswrapper[4910]: E0226 21:58:43.901863 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:44 crc kubenswrapper[4910]: I0226 21:58:44.901518 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:44 crc kubenswrapper[4910]: I0226 21:58:44.901628 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:44 crc kubenswrapper[4910]: I0226 21:58:44.901524 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:44 crc kubenswrapper[4910]: E0226 21:58:44.901746 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:44 crc kubenswrapper[4910]: E0226 21:58:44.901877 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:44 crc kubenswrapper[4910]: E0226 21:58:44.901992 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:45 crc kubenswrapper[4910]: I0226 21:58:45.901211 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:45 crc kubenswrapper[4910]: E0226 21:58:45.904128 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:45 crc kubenswrapper[4910]: I0226 21:58:45.904611 4910 scope.go:117] "RemoveContainer" containerID="c102af0022666f948e5923ebd19de21279aaf7635387dd3036f2f7cde045de43" Feb 26 21:58:45 crc kubenswrapper[4910]: E0226 21:58:45.905084 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xrq4q_openshift-ovn-kubernetes(41cb54c7-260b-42d4-8ae9-cf2a195721be)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" Feb 26 21:58:46 crc kubenswrapper[4910]: E0226 21:58:46.059208 4910 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 21:58:46 crc kubenswrapper[4910]: I0226 21:58:46.901001 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:46 crc kubenswrapper[4910]: I0226 21:58:46.901027 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:46 crc kubenswrapper[4910]: I0226 21:58:46.901252 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:46 crc kubenswrapper[4910]: E0226 21:58:46.901295 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:46 crc kubenswrapper[4910]: E0226 21:58:46.901468 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:46 crc kubenswrapper[4910]: E0226 21:58:46.901732 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:47 crc kubenswrapper[4910]: I0226 21:58:47.901044 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:47 crc kubenswrapper[4910]: E0226 21:58:47.901491 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:48 crc kubenswrapper[4910]: I0226 21:58:48.900930 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:48 crc kubenswrapper[4910]: I0226 21:58:48.901039 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:48 crc kubenswrapper[4910]: E0226 21:58:48.901127 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:48 crc kubenswrapper[4910]: I0226 21:58:48.901227 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:48 crc kubenswrapper[4910]: E0226 21:58:48.901405 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:48 crc kubenswrapper[4910]: E0226 21:58:48.901513 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:49 crc kubenswrapper[4910]: I0226 21:58:49.901581 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:49 crc kubenswrapper[4910]: E0226 21:58:49.901804 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:50 crc kubenswrapper[4910]: I0226 21:58:50.900810 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:50 crc kubenswrapper[4910]: E0226 21:58:50.901017 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:50 crc kubenswrapper[4910]: I0226 21:58:50.900857 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:50 crc kubenswrapper[4910]: E0226 21:58:50.901139 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:50 crc kubenswrapper[4910]: I0226 21:58:50.900857 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:50 crc kubenswrapper[4910]: E0226 21:58:50.901274 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:51 crc kubenswrapper[4910]: E0226 21:58:51.061096 4910 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 21:58:51 crc kubenswrapper[4910]: I0226 21:58:51.900839 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:51 crc kubenswrapper[4910]: E0226 21:58:51.901092 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:52 crc kubenswrapper[4910]: I0226 21:58:52.900629 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:52 crc kubenswrapper[4910]: I0226 21:58:52.900749 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:52 crc kubenswrapper[4910]: I0226 21:58:52.900639 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:52 crc kubenswrapper[4910]: E0226 21:58:52.900808 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:52 crc kubenswrapper[4910]: E0226 21:58:52.900936 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:52 crc kubenswrapper[4910]: E0226 21:58:52.901012 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:53 crc kubenswrapper[4910]: I0226 21:58:53.900541 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:53 crc kubenswrapper[4910]: E0226 21:58:53.900788 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:54 crc kubenswrapper[4910]: I0226 21:58:54.900908 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:54 crc kubenswrapper[4910]: I0226 21:58:54.900992 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:54 crc kubenswrapper[4910]: I0226 21:58:54.900942 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:54 crc kubenswrapper[4910]: E0226 21:58:54.901285 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:54 crc kubenswrapper[4910]: E0226 21:58:54.901656 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:54 crc kubenswrapper[4910]: E0226 21:58:54.902218 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:55 crc kubenswrapper[4910]: I0226 21:58:55.379634 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-795gt_d78660ec-f27f-43be-add6-8fab38329537/kube-multus/1.log" Feb 26 21:58:55 crc kubenswrapper[4910]: I0226 21:58:55.380529 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-795gt_d78660ec-f27f-43be-add6-8fab38329537/kube-multus/0.log" Feb 26 21:58:55 crc kubenswrapper[4910]: I0226 21:58:55.380601 4910 generic.go:334] "Generic (PLEG): container finished" podID="d78660ec-f27f-43be-add6-8fab38329537" containerID="3f88b7ea31f447ea3a2728e5c1543d2c60f64d949b0a4f14fbb8a9253a768faf" exitCode=1 Feb 26 21:58:55 crc kubenswrapper[4910]: I0226 21:58:55.380645 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-795gt" event={"ID":"d78660ec-f27f-43be-add6-8fab38329537","Type":"ContainerDied","Data":"3f88b7ea31f447ea3a2728e5c1543d2c60f64d949b0a4f14fbb8a9253a768faf"} Feb 26 21:58:55 crc kubenswrapper[4910]: I0226 21:58:55.380702 4910 scope.go:117] "RemoveContainer" containerID="3fd8a04556ea084d2a318147015c6f7d90032781c508221c3ceae3f2b79375e5" Feb 26 21:58:55 crc kubenswrapper[4910]: I0226 21:58:55.381775 4910 scope.go:117] "RemoveContainer" containerID="3f88b7ea31f447ea3a2728e5c1543d2c60f64d949b0a4f14fbb8a9253a768faf" Feb 26 21:58:55 crc kubenswrapper[4910]: E0226 21:58:55.382140 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-795gt_openshift-multus(d78660ec-f27f-43be-add6-8fab38329537)\"" pod="openshift-multus/multus-795gt" podUID="d78660ec-f27f-43be-add6-8fab38329537" Feb 26 21:58:55 crc kubenswrapper[4910]: I0226 21:58:55.901291 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:55 crc kubenswrapper[4910]: E0226 21:58:55.902994 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:56 crc kubenswrapper[4910]: E0226 21:58:56.062371 4910 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 21:58:56 crc kubenswrapper[4910]: I0226 21:58:56.386546 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-795gt_d78660ec-f27f-43be-add6-8fab38329537/kube-multus/1.log" Feb 26 21:58:56 crc kubenswrapper[4910]: I0226 21:58:56.901492 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:56 crc kubenswrapper[4910]: I0226 21:58:56.901543 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:56 crc kubenswrapper[4910]: I0226 21:58:56.901574 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:56 crc kubenswrapper[4910]: E0226 21:58:56.902389 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:56 crc kubenswrapper[4910]: E0226 21:58:56.902541 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:56 crc kubenswrapper[4910]: E0226 21:58:56.902710 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:57 crc kubenswrapper[4910]: I0226 21:58:57.900633 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:57 crc kubenswrapper[4910]: E0226 21:58:57.900866 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:58 crc kubenswrapper[4910]: I0226 21:58:58.866968 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:58:58 crc kubenswrapper[4910]: I0226 21:58:58.867209 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:58 crc kubenswrapper[4910]: I0226 21:58:58.867258 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:58 crc kubenswrapper[4910]: E0226 21:58:58.867301 4910 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 22:01:00.867250834 +0000 UTC m=+345.946741385 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:58:58 crc kubenswrapper[4910]: E0226 21:58:58.867345 4910 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 21:58:58 crc kubenswrapper[4910]: E0226 21:58:58.867409 4910 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 21:58:58 crc kubenswrapper[4910]: E0226 21:58:58.867433 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 22:01:00.867410018 +0000 UTC m=+345.946900600 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 21:58:58 crc kubenswrapper[4910]: E0226 21:58:58.867538 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 22:01:00.867504641 +0000 UTC m=+345.946995272 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 21:58:58 crc kubenswrapper[4910]: I0226 21:58:58.901391 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:58:58 crc kubenswrapper[4910]: I0226 21:58:58.901502 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:58 crc kubenswrapper[4910]: E0226 21:58:58.901625 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:58:58 crc kubenswrapper[4910]: I0226 21:58:58.901645 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:58 crc kubenswrapper[4910]: E0226 21:58:58.902748 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:58:58 crc kubenswrapper[4910]: E0226 21:58:58.903101 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:58:58 crc kubenswrapper[4910]: I0226 21:58:58.903428 4910 scope.go:117] "RemoveContainer" containerID="c102af0022666f948e5923ebd19de21279aaf7635387dd3036f2f7cde045de43" Feb 26 21:58:58 crc kubenswrapper[4910]: I0226 21:58:58.968103 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:58:58 crc kubenswrapper[4910]: I0226 21:58:58.968235 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:58:58 crc kubenswrapper[4910]: E0226 21:58:58.968481 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 21:58:58 crc kubenswrapper[4910]: E0226 21:58:58.968519 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 21:58:58 crc kubenswrapper[4910]: E0226 21:58:58.968538 4910 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Feb 26 21:58:58 crc kubenswrapper[4910]: E0226 21:58:58.968531 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 21:58:58 crc kubenswrapper[4910]: E0226 21:58:58.968611 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 21:58:58 crc kubenswrapper[4910]: E0226 21:58:58.968640 4910 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:58:58 crc kubenswrapper[4910]: E0226 21:58:58.968611 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 22:01:00.968587764 +0000 UTC m=+346.048078335 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:58:58 crc kubenswrapper[4910]: E0226 21:58:58.968787 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-26 22:01:00.968752518 +0000 UTC m=+346.048243099 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 21:58:59 crc kubenswrapper[4910]: I0226 21:58:59.414065 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrq4q_41cb54c7-260b-42d4-8ae9-cf2a195721be/ovnkube-controller/3.log" Feb 26 21:58:59 crc kubenswrapper[4910]: I0226 21:58:59.418815 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" event={"ID":"41cb54c7-260b-42d4-8ae9-cf2a195721be","Type":"ContainerStarted","Data":"19a3ebdc18c75b48e597b50681570c7243c24e8fccebcc02dba6868f95c4b579"} Feb 26 21:58:59 crc kubenswrapper[4910]: I0226 21:58:59.419362 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:58:59 crc kubenswrapper[4910]: I0226 21:58:59.463382 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" podStartSLOduration=154.463356506 podStartE2EDuration="2m34.463356506s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:58:59.46107462 +0000 UTC m=+224.540565191" watchObservedRunningTime="2026-02-26 21:58:59.463356506 +0000 UTC m=+224.542847077" Feb 26 21:58:59 crc kubenswrapper[4910]: I0226 21:58:59.901279 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:58:59 crc kubenswrapper[4910]: E0226 21:58:59.901575 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:58:59 crc kubenswrapper[4910]: I0226 21:58:59.988802 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mhdkf"] Feb 26 21:59:00 crc kubenswrapper[4910]: I0226 21:59:00.423823 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:59:00 crc kubenswrapper[4910]: E0226 21:59:00.424474 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:59:00 crc kubenswrapper[4910]: I0226 21:59:00.900530 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:59:00 crc kubenswrapper[4910]: I0226 21:59:00.900571 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:59:00 crc kubenswrapper[4910]: I0226 21:59:00.900601 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:59:00 crc kubenswrapper[4910]: E0226 21:59:00.900662 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:59:00 crc kubenswrapper[4910]: E0226 21:59:00.900769 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:59:00 crc kubenswrapper[4910]: E0226 21:59:00.900854 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:59:01 crc kubenswrapper[4910]: E0226 21:59:01.063797 4910 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 21:59:01 crc kubenswrapper[4910]: I0226 21:59:01.900760 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:59:01 crc kubenswrapper[4910]: E0226 21:59:01.900951 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:59:02 crc kubenswrapper[4910]: I0226 21:59:02.901033 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:59:02 crc kubenswrapper[4910]: I0226 21:59:02.901123 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:59:02 crc kubenswrapper[4910]: I0226 21:59:02.901280 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:59:02 crc kubenswrapper[4910]: E0226 21:59:02.901268 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:59:02 crc kubenswrapper[4910]: E0226 21:59:02.901474 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:59:02 crc kubenswrapper[4910]: E0226 21:59:02.901582 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:59:03 crc kubenswrapper[4910]: I0226 21:59:03.901114 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:59:03 crc kubenswrapper[4910]: E0226 21:59:03.901368 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:59:04 crc kubenswrapper[4910]: I0226 21:59:04.900919 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:59:04 crc kubenswrapper[4910]: I0226 21:59:04.900934 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:59:04 crc kubenswrapper[4910]: E0226 21:59:04.901125 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:59:04 crc kubenswrapper[4910]: I0226 21:59:04.900964 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:59:04 crc kubenswrapper[4910]: E0226 21:59:04.901381 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:59:04 crc kubenswrapper[4910]: E0226 21:59:04.901601 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:59:05 crc kubenswrapper[4910]: I0226 21:59:05.901356 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:59:05 crc kubenswrapper[4910]: E0226 21:59:05.903427 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:59:06 crc kubenswrapper[4910]: E0226 21:59:06.064395 4910 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 21:59:06 crc kubenswrapper[4910]: I0226 21:59:06.900486 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:59:06 crc kubenswrapper[4910]: I0226 21:59:06.900543 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:59:06 crc kubenswrapper[4910]: I0226 21:59:06.900566 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:59:06 crc kubenswrapper[4910]: E0226 21:59:06.900640 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:59:06 crc kubenswrapper[4910]: E0226 21:59:06.900792 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:59:06 crc kubenswrapper[4910]: E0226 21:59:06.900924 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:59:06 crc kubenswrapper[4910]: I0226 21:59:06.901734 4910 scope.go:117] "RemoveContainer" containerID="3f88b7ea31f447ea3a2728e5c1543d2c60f64d949b0a4f14fbb8a9253a768faf" Feb 26 21:59:07 crc kubenswrapper[4910]: I0226 21:59:07.457084 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-795gt_d78660ec-f27f-43be-add6-8fab38329537/kube-multus/1.log" Feb 26 21:59:07 crc kubenswrapper[4910]: I0226 21:59:07.457618 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-795gt" event={"ID":"d78660ec-f27f-43be-add6-8fab38329537","Type":"ContainerStarted","Data":"0206f2babef31f4c9359fa5e49447fa3c2c463f5dfd690dac95da1a45bea19e3"} Feb 26 21:59:07 crc kubenswrapper[4910]: I0226 21:59:07.901646 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:59:07 crc kubenswrapper[4910]: E0226 21:59:07.901842 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:59:08 crc kubenswrapper[4910]: I0226 21:59:08.901435 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:59:08 crc kubenswrapper[4910]: I0226 21:59:08.901455 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:59:08 crc kubenswrapper[4910]: E0226 21:59:08.901710 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:59:08 crc kubenswrapper[4910]: I0226 21:59:08.901455 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:59:08 crc kubenswrapper[4910]: E0226 21:59:08.901781 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:59:08 crc kubenswrapper[4910]: E0226 21:59:08.901807 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:59:09 crc kubenswrapper[4910]: I0226 21:59:09.901142 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:59:09 crc kubenswrapper[4910]: E0226 21:59:09.901484 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhdkf" podUID="9bd0ab20-beab-4d8b-90d0-ef5bd1c10526" Feb 26 21:59:10 crc kubenswrapper[4910]: I0226 21:59:10.900826 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:59:10 crc kubenswrapper[4910]: I0226 21:59:10.900888 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:59:10 crc kubenswrapper[4910]: I0226 21:59:10.900888 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:59:10 crc kubenswrapper[4910]: E0226 21:59:10.901046 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 21:59:10 crc kubenswrapper[4910]: E0226 21:59:10.901231 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 21:59:10 crc kubenswrapper[4910]: E0226 21:59:10.901380 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 21:59:11 crc kubenswrapper[4910]: I0226 21:59:11.901072 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:59:11 crc kubenswrapper[4910]: I0226 21:59:11.904527 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 26 21:59:11 crc kubenswrapper[4910]: I0226 21:59:11.905427 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 26 21:59:12 crc kubenswrapper[4910]: I0226 21:59:12.900688 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 21:59:12 crc kubenswrapper[4910]: I0226 21:59:12.900749 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 21:59:12 crc kubenswrapper[4910]: I0226 21:59:12.901083 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 21:59:12 crc kubenswrapper[4910]: I0226 21:59:12.902973 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 26 21:59:12 crc kubenswrapper[4910]: I0226 21:59:12.903764 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 26 21:59:12 crc kubenswrapper[4910]: I0226 21:59:12.903858 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 26 21:59:12 crc kubenswrapper[4910]: I0226 21:59:12.904147 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 26 21:59:16 crc kubenswrapper[4910]: I0226 21:59:16.276885 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-metrics-certs\") pod \"network-metrics-daemon-mhdkf\" (UID: \"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\") " pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:59:16 crc kubenswrapper[4910]: I0226 21:59:16.279293 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 26 21:59:16 crc kubenswrapper[4910]: I0226 21:59:16.299332 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9bd0ab20-beab-4d8b-90d0-ef5bd1c10526-metrics-certs\") pod \"network-metrics-daemon-mhdkf\" (UID: \"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526\") " pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:59:16 crc kubenswrapper[4910]: I0226 21:59:16.428404 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 26 21:59:16 crc kubenswrapper[4910]: I0226 21:59:16.435495 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhdkf" Feb 26 21:59:16 crc kubenswrapper[4910]: I0226 21:59:16.688006 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mhdkf"] Feb 26 21:59:17 crc kubenswrapper[4910]: I0226 21:59:17.499343 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" event={"ID":"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526","Type":"ContainerStarted","Data":"f642d0888b92681825b44b368508ee2f1e66f52394910758e136699a00b192af"} Feb 26 21:59:17 crc kubenswrapper[4910]: I0226 21:59:17.499690 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" event={"ID":"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526","Type":"ContainerStarted","Data":"ada4ed09e38b0adebb96b999a04fc2084112188052b2e3afa3d19792a41c6f58"} Feb 26 21:59:17 crc kubenswrapper[4910]: I0226 21:59:17.499714 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mhdkf" event={"ID":"9bd0ab20-beab-4d8b-90d0-ef5bd1c10526","Type":"ContainerStarted","Data":"2aca7d8a61828f9cba5424518a0b8d95bcd4dd7e537cb88b6cf7749b33f36989"} Feb 26 21:59:17 crc kubenswrapper[4910]: I0226 21:59:17.521881 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-mhdkf" podStartSLOduration=172.521850024 podStartE2EDuration="2m52.521850024s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:17.521493423 +0000 UTC m=+242.600984004" watchObservedRunningTime="2026-02-26 21:59:17.521850024 +0000 UTC m=+242.601340595" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.357088 4910 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 26 21:59:18 crc kubenswrapper[4910]: 
I0226 21:59:18.404429 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-kz5fx"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.405029 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-kz5fx" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.405721 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7mg2"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.406464 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7mg2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.407214 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bw8qw"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.407659 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bw8qw" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.407962 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.408241 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.408884 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.408940 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.411681 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.411970 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.412036 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.411563 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.411979 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.412364 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.412801 4910 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.413016 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.413280 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.418228 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.418233 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.418658 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.420491 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.420691 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.421448 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.422998 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.423293 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.423717 4910 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.424511 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.429985 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.430678 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.430912 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-6222h"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.431497 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6222h" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.432927 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-nxzt6"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.433381 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-nxzt6" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.435439 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2fs85"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.435899 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2fs85" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.443332 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vdp5g"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.443901 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-vdp5g" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.444577 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhr7j"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.444919 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhr7j" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.445914 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-48hc6"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.446037 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.446259 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.446412 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.446621 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.446741 4910 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.446855 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.447091 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-c7wmc"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.447295 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.447332 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.447470 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.447589 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.447712 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.465841 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-48hc6" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.467615 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.467802 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-kj9s2"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.468041 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.468186 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.468426 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.468476 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.468567 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-kj9s2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.468643 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.468716 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.468852 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.468955 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-c7wmc" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.469029 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.469137 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.469348 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.469550 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.469903 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.470013 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 
21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.469905 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.470065 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.470940 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.470975 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.471193 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.471207 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.471299 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.471403 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.471516 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.471661 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.471824 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.471831 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.471907 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.472085 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.472320 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.472601 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.472696 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.472605 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7g2dk"]
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.472886 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.473135 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.487748 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.488109 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7g2dk"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.490270 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.490497 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.490765 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.490926 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.491233 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.493254 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-sh8rh"]
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.493932 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-2hscq"]
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.494381 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-2hscq"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.494541 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dgm55"]
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.494751 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-sh8rh"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.500647 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f69bf6d-80ca-4042-8be3-cf335b4a13f4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-c7mg2\" (UID: \"7f69bf6d-80ca-4042-8be3-cf335b4a13f4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7mg2"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.500676 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1db40f1b-b714-4920-8b27-f350b3dd2978-node-pullsecrets\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.500696 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1db40f1b-b714-4920-8b27-f350b3dd2978-encryption-config\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.500725 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1db40f1b-b714-4920-8b27-f350b3dd2978-audit\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.500743 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1db40f1b-b714-4920-8b27-f350b3dd2978-trusted-ca-bundle\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.500758 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1db40f1b-b714-4920-8b27-f350b3dd2978-etcd-client\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.500771 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grtc8\" (UniqueName: \"kubernetes.io/projected/dbbce4f0-e239-41ed-98b6-b5b84a303b34-kube-api-access-grtc8\") pod \"controller-manager-879f6c89f-bw8qw\" (UID: \"dbbce4f0-e239-41ed-98b6-b5b84a303b34\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bw8qw"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.500796 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1db40f1b-b714-4920-8b27-f350b3dd2978-serving-cert\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.500811 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1db40f1b-b714-4920-8b27-f350b3dd2978-config\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.500825 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrwsw\" (UniqueName: \"kubernetes.io/projected/1db40f1b-b714-4920-8b27-f350b3dd2978-kube-api-access-qrwsw\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.500841 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dbbce4f0-e239-41ed-98b6-b5b84a303b34-client-ca\") pod \"controller-manager-879f6c89f-bw8qw\" (UID: \"dbbce4f0-e239-41ed-98b6-b5b84a303b34\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bw8qw"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.500855 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbbce4f0-e239-41ed-98b6-b5b84a303b34-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bw8qw\" (UID: \"dbbce4f0-e239-41ed-98b6-b5b84a303b34\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bw8qw"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.500870 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1db40f1b-b714-4920-8b27-f350b3dd2978-etcd-serving-ca\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.500886 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbbce4f0-e239-41ed-98b6-b5b84a303b34-config\") pod \"controller-manager-879f6c89f-bw8qw\" (UID: \"dbbce4f0-e239-41ed-98b6-b5b84a303b34\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bw8qw"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.500900 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1db40f1b-b714-4920-8b27-f350b3dd2978-image-import-ca\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.500916 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1db40f1b-b714-4920-8b27-f350b3dd2978-audit-dir\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.500930 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42vjt\" (UniqueName: \"kubernetes.io/projected/7f69bf6d-80ca-4042-8be3-cf335b4a13f4-kube-api-access-42vjt\") pod \"openshift-apiserver-operator-796bbdcf4f-c7mg2\" (UID: \"7f69bf6d-80ca-4042-8be3-cf335b4a13f4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7mg2"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.500946 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f69bf6d-80ca-4042-8be3-cf335b4a13f4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-c7mg2\" (UID: \"7f69bf6d-80ca-4042-8be3-cf335b4a13f4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7mg2"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.500963 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbbce4f0-e239-41ed-98b6-b5b84a303b34-serving-cert\") pod \"controller-manager-879f6c89f-bw8qw\" (UID: \"dbbce4f0-e239-41ed-98b6-b5b84a303b34\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bw8qw"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.507097 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7pl8w"]
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.507595 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-nxzt6"]
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.507684 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-kz5fx"]
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.507751 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bw8qw"]
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.507807 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7mg2"]
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.507935 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7pl8w"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.508363 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dgm55"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.508691 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5"]
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.515208 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-48hc6"]
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.519929 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.520189 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.520466 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.520551 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.520587 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.520681 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.520725 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.520767 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.520841 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.520889 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.520976 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.521014 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.521108 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.521222 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.521313 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.521372 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.521415 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.521488 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.520504 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.521542 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.520505 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.521580 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.521224 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.521714 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.521864 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.522024 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.523619 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.531868 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.532368 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.532618 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.533289 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.534993 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.535191 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-298fw"]
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.541638 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jgdt5"]
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.542083 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jgdt5"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.542247 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-298fw"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.542347 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.542806 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-54n7v"]
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.543378 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-54n7v"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.543895 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vdp5g"]
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.544783 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hl7fn"]
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.545276 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hl7fn"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.561950 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.568194 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-w89rp"]
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.569942 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.574500 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4254t"]
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.582602 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4254t"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.582683 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w89rp"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.586824 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-wzqgc"]
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.591622 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8n798"]
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.591685 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wzqgc"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.593521 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8n798"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.595409 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-svj47"]
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.596916 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-svj47"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.602011 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb80ebfa-1dd8-40e6-9d5e-27311836ccfb-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vhr7j\" (UID: \"cb80ebfa-1dd8-40e6-9d5e-27311836ccfb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhr7j"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.602048 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ebb647d3-e0ce-4122-8582-d1a2d4d2c594-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2fs85\" (UID: \"ebb647d3-e0ce-4122-8582-d1a2d4d2c594\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2fs85"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.602090 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mktx\" (UniqueName: \"kubernetes.io/projected/a46ac2a0-b244-4742-ade1-cd57ce2e87d5-kube-api-access-7mktx\") pod \"machine-approver-56656f9798-6222h\" (UID: \"a46ac2a0-b244-4742-ade1-cd57ce2e87d5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6222h"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.602123 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65br6\" (UniqueName: \"kubernetes.io/projected/63954f23-8000-4ada-8d5d-67297b7c26f6-kube-api-access-65br6\") pod \"route-controller-manager-6576b87f9c-zrpl2\" (UID: \"63954f23-8000-4ada-8d5d-67297b7c26f6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.602149 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc4fz\" (UniqueName: \"kubernetes.io/projected/30057c46-9a0f-4a04-869a-c63eac9a84f6-kube-api-access-xc4fz\") pod \"openshift-controller-manager-operator-756b6f6bc6-48hc6\" (UID: \"30057c46-9a0f-4a04-869a-c63eac9a84f6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-48hc6"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.602208 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/acb5ada5-3567-4f1c-9130-1e78f3e88975-console-oauth-config\") pod \"console-f9d7485db-kj9s2\" (UID: \"acb5ada5-3567-4f1c-9130-1e78f3e88975\") " pod="openshift-console/console-f9d7485db-kj9s2"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.602237 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/acb5ada5-3567-4f1c-9130-1e78f3e88975-console-config\") pod \"console-f9d7485db-kj9s2\" (UID: \"acb5ada5-3567-4f1c-9130-1e78f3e88975\") " pod="openshift-console/console-f9d7485db-kj9s2"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.602272 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1db40f1b-b714-4920-8b27-f350b3dd2978-etcd-client\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.602480 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17c9e844-630f-46d5-a08c-94a6d0b56404-config\") pod \"console-operator-58897d9998-c7wmc\" (UID: \"17c9e844-630f-46d5-a08c-94a6d0b56404\") " pod="openshift-console-operator/console-operator-58897d9998-c7wmc"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.602507 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.602549 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1db40f1b-b714-4920-8b27-f350b3dd2978-serving-cert\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.602604 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grtc8\" (UniqueName: \"kubernetes.io/projected/dbbce4f0-e239-41ed-98b6-b5b84a303b34-kube-api-access-grtc8\") pod \"controller-manager-879f6c89f-bw8qw\" (UID: \"dbbce4f0-e239-41ed-98b6-b5b84a303b34\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bw8qw"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.602683 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4ea4294a-f91c-4d13-8372-b1e8b7a73831-audit-dir\") pod \"apiserver-7bbb656c7d-2jnr5\" (UID: \"4ea4294a-f91c-4d13-8372-b1e8b7a73831\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.602720 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrwsw\" (UniqueName: \"kubernetes.io/projected/1db40f1b-b714-4920-8b27-f350b3dd2978-kube-api-access-qrwsw\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.602739 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ea4294a-f91c-4d13-8372-b1e8b7a73831-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2jnr5\" (UID: \"4ea4294a-f91c-4d13-8372-b1e8b7a73831\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.602759 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt4vx\" (UniqueName: \"kubernetes.io/projected/cb80ebfa-1dd8-40e6-9d5e-27311836ccfb-kube-api-access-nt4vx\") pod \"cluster-image-registry-operator-dc59b4c8b-vhr7j\" (UID: \"cb80ebfa-1dd8-40e6-9d5e-27311836ccfb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhr7j"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.602779 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1db40f1b-b714-4920-8b27-f350b3dd2978-config\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.602835 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.602937 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.602978 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab54c730-c74c-4988-ac00-e926c9907435-serving-cert\") pod \"etcd-operator-b45778765-7g2dk\" (UID: \"ab54c730-c74c-4988-ac00-e926c9907435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g2dk"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.603013 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbbce4f0-e239-41ed-98b6-b5b84a303b34-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bw8qw\" (UID: \"dbbce4f0-e239-41ed-98b6-b5b84a303b34\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bw8qw"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.603057 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dbbce4f0-e239-41ed-98b6-b5b84a303b34-client-ca\") pod \"controller-manager-879f6c89f-bw8qw\" (UID: \"dbbce4f0-e239-41ed-98b6-b5b84a303b34\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bw8qw"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.603136 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1db40f1b-b714-4920-8b27-f350b3dd2978-etcd-serving-ca\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.603206 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbbce4f0-e239-41ed-98b6-b5b84a303b34-config\") pod \"controller-manager-879f6c89f-bw8qw\" (UID: \"dbbce4f0-e239-41ed-98b6-b5b84a303b34\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bw8qw"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.603416 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgjtc\" (UniqueName: \"kubernetes.io/projected/7e5e40d7-f505-4b8d-ac40-9677a7ebe781-kube-api-access-mgjtc\") pod \"openshift-config-operator-7777fb866f-7pl8w\" (UID: \"7e5e40d7-f505-4b8d-ac40-9677a7ebe781\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7pl8w"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.603446 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a46ac2a0-b244-4742-ade1-cd57ce2e87d5-config\") pod \"machine-approver-56656f9798-6222h\" (UID: \"a46ac2a0-b244-4742-ade1-cd57ce2e87d5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6222h"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.603486 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgpgw\" (UniqueName: \"kubernetes.io/projected/4ea4294a-f91c-4d13-8372-b1e8b7a73831-kube-api-access-cgpgw\") pod \"apiserver-7bbb656c7d-2jnr5\" (UID: \"4ea4294a-f91c-4d13-8372-b1e8b7a73831\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.603630 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b3633c0-54b9-486c-a14b-99b6e5c04765-audit-dir\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.603651 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.603678 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79b92949-b7ec-4d5c-a27e-259972a4a4dd-serving-cert\") pod \"authentication-operator-69f744f599-vdp5g\" (UID: \"79b92949-b7ec-4d5c-a27e-259972a4a4dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vdp5g"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.603698 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1db40f1b-b714-4920-8b27-f350b3dd2978-image-import-ca\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.603729 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ab54c730-c74c-4988-ac00-e926c9907435-etcd-client\") pod \"etcd-operator-b45778765-7g2dk\" (UID: \"ab54c730-c74c-4988-ac00-e926c9907435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g2dk"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.603829 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cb80ebfa-1dd8-40e6-9d5e-27311836ccfb-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vhr7j\" (UID: \"cb80ebfa-1dd8-40e6-9d5e-27311836ccfb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhr7j"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.603893 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0b3633c0-54b9-486c-a14b-99b6e5c04765-audit-policies\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.603914 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/651f9cbc-e905-462d-b42f-84d2a642169d-metrics-tls\") pod \"dns-operator-744455d44c-sh8rh\" (UID: \"651f9cbc-e905-462d-b42f-84d2a642169d\") " pod="openshift-dns-operator/dns-operator-744455d44c-sh8rh"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.603940 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cb80ebfa-1dd8-40e6-9d5e-27311836ccfb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vhr7j\" (UID: \"cb80ebfa-1dd8-40e6-9d5e-27311836ccfb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhr7j"
Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.604037 4910 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44cwj\" (UniqueName: \"kubernetes.io/projected/1bb89394-7073-4408-a891-f4a6eb44eaa7-kube-api-access-44cwj\") pod \"machine-api-operator-5694c8668f-nxzt6\" (UID: \"1bb89394-7073-4408-a891-f4a6eb44eaa7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nxzt6" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.604063 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1db40f1b-b714-4920-8b27-f350b3dd2978-audit-dir\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.604086 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7e5e40d7-f505-4b8d-ac40-9677a7ebe781-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7pl8w\" (UID: \"7e5e40d7-f505-4b8d-ac40-9677a7ebe781\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7pl8w" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.604118 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1db40f1b-b714-4920-8b27-f350b3dd2978-config\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.604146 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42vjt\" (UniqueName: \"kubernetes.io/projected/7f69bf6d-80ca-4042-8be3-cf335b4a13f4-kube-api-access-42vjt\") pod \"openshift-apiserver-operator-796bbdcf4f-c7mg2\" (UID: \"7f69bf6d-80ca-4042-8be3-cf335b4a13f4\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7mg2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.604263 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4ea4294a-f91c-4d13-8372-b1e8b7a73831-etcd-client\") pod \"apiserver-7bbb656c7d-2jnr5\" (UID: \"4ea4294a-f91c-4d13-8372-b1e8b7a73831\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.604444 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kn2dp"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.604581 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f69bf6d-80ca-4042-8be3-cf335b4a13f4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-c7mg2\" (UID: \"7f69bf6d-80ca-4042-8be3-cf335b4a13f4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7mg2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.604765 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17c9e844-630f-46d5-a08c-94a6d0b56404-serving-cert\") pod \"console-operator-58897d9998-c7wmc\" (UID: \"17c9e844-630f-46d5-a08c-94a6d0b56404\") " pod="openshift-console-operator/console-operator-58897d9998-c7wmc" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.605471 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17c9e844-630f-46d5-a08c-94a6d0b56404-trusted-ca\") pod \"console-operator-58897d9998-c7wmc\" (UID: \"17c9e844-630f-46d5-a08c-94a6d0b56404\") " pod="openshift-console-operator/console-operator-58897d9998-c7wmc" Feb 26 
21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.605554 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt4wj\" (UniqueName: \"kubernetes.io/projected/0b3633c0-54b9-486c-a14b-99b6e5c04765-kube-api-access-gt4wj\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.605640 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbbce4f0-e239-41ed-98b6-b5b84a303b34-serving-cert\") pod \"controller-manager-879f6c89f-bw8qw\" (UID: \"dbbce4f0-e239-41ed-98b6-b5b84a303b34\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bw8qw" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.605792 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ab54c730-c74c-4988-ac00-e926c9907435-etcd-service-ca\") pod \"etcd-operator-b45778765-7g2dk\" (UID: \"ab54c730-c74c-4988-ac00-e926c9907435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g2dk" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.605948 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79b92949-b7ec-4d5c-a27e-259972a4a4dd-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vdp5g\" (UID: \"79b92949-b7ec-4d5c-a27e-259972a4a4dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vdp5g" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.605993 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30057c46-9a0f-4a04-869a-c63eac9a84f6-config\") 
pod \"openshift-controller-manager-operator-756b6f6bc6-48hc6\" (UID: \"30057c46-9a0f-4a04-869a-c63eac9a84f6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-48hc6" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.606144 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4ea4294a-f91c-4d13-8372-b1e8b7a73831-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2jnr5\" (UID: \"4ea4294a-f91c-4d13-8372-b1e8b7a73831\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.606222 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s25lc\" (UniqueName: \"kubernetes.io/projected/79b92949-b7ec-4d5c-a27e-259972a4a4dd-kube-api-access-s25lc\") pod \"authentication-operator-69f744f599-vdp5g\" (UID: \"79b92949-b7ec-4d5c-a27e-259972a4a4dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vdp5g" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.606231 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kn2dp" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.606305 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/acb5ada5-3567-4f1c-9130-1e78f3e88975-service-ca\") pod \"console-f9d7485db-kj9s2\" (UID: \"acb5ada5-3567-4f1c-9130-1e78f3e88975\") " pod="openshift-console/console-f9d7485db-kj9s2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.606400 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.606536 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dbbce4f0-e239-41ed-98b6-b5b84a303b34-client-ca\") pod \"controller-manager-879f6c89f-bw8qw\" (UID: \"dbbce4f0-e239-41ed-98b6-b5b84a303b34\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bw8qw" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.606984 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1db40f1b-b714-4920-8b27-f350b3dd2978-audit-dir\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.607287 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/1db40f1b-b714-4920-8b27-f350b3dd2978-etcd-serving-ca\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.607858 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1db40f1b-b714-4920-8b27-f350b3dd2978-serving-cert\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.608183 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbbce4f0-e239-41ed-98b6-b5b84a303b34-config\") pod \"controller-manager-879f6c89f-bw8qw\" (UID: \"dbbce4f0-e239-41ed-98b6-b5b84a303b34\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bw8qw" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.621949 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbbce4f0-e239-41ed-98b6-b5b84a303b34-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bw8qw\" (UID: \"dbbce4f0-e239-41ed-98b6-b5b84a303b34\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bw8qw" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.606484 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1bb89394-7073-4408-a891-f4a6eb44eaa7-images\") pod \"machine-api-operator-5694c8668f-nxzt6\" (UID: \"1bb89394-7073-4408-a891-f4a6eb44eaa7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nxzt6" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.622896 4910 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-ingress/router-default-5444994796-w8g2c"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.623881 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-w8g2c" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.609815 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1db40f1b-b714-4920-8b27-f350b3dd2978-etcd-client\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.624448 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.624684 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbbce4f0-e239-41ed-98b6-b5b84a303b34-serving-cert\") pod \"controller-manager-879f6c89f-bw8qw\" (UID: \"dbbce4f0-e239-41ed-98b6-b5b84a303b34\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bw8qw" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.625002 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79b92949-b7ec-4d5c-a27e-259972a4a4dd-config\") pod \"authentication-operator-69f744f599-vdp5g\" (UID: \"79b92949-b7ec-4d5c-a27e-259972a4a4dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vdp5g" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.625058 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ab54c730-c74c-4988-ac00-e926c9907435-etcd-ca\") pod \"etcd-operator-b45778765-7g2dk\" (UID: 
\"ab54c730-c74c-4988-ac00-e926c9907435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g2dk" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.625314 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt9tg\" (UniqueName: \"kubernetes.io/projected/7843f81a-d6bd-463f-b5b7-454e3f943ed8-kube-api-access-zt9tg\") pod \"downloads-7954f5f757-2hscq\" (UID: \"7843f81a-d6bd-463f-b5b7-454e3f943ed8\") " pod="openshift-console/downloads-7954f5f757-2hscq" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.625481 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjm49\" (UniqueName: \"kubernetes.io/projected/ab54c730-c74c-4988-ac00-e926c9907435-kube-api-access-rjm49\") pod \"etcd-operator-b45778765-7g2dk\" (UID: \"ab54c730-c74c-4988-ac00-e926c9907435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g2dk" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.625664 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zflv9\" (UniqueName: \"kubernetes.io/projected/ebb647d3-e0ce-4122-8582-d1a2d4d2c594-kube-api-access-zflv9\") pod \"cluster-samples-operator-665b6dd947-2fs85\" (UID: \"ebb647d3-e0ce-4122-8582-d1a2d4d2c594\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2fs85" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.625715 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bb89394-7073-4408-a891-f4a6eb44eaa7-config\") pod \"machine-api-operator-5694c8668f-nxzt6\" (UID: \"1bb89394-7073-4408-a891-f4a6eb44eaa7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nxzt6" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.625866 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f69bf6d-80ca-4042-8be3-cf335b4a13f4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-c7mg2\" (UID: \"7f69bf6d-80ca-4042-8be3-cf335b4a13f4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7mg2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.625960 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.625984 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.626022 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/acb5ada5-3567-4f1c-9130-1e78f3e88975-console-serving-cert\") pod \"console-f9d7485db-kj9s2\" (UID: \"acb5ada5-3567-4f1c-9130-1e78f3e88975\") " pod="openshift-console/console-f9d7485db-kj9s2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.626038 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a46ac2a0-b244-4742-ade1-cd57ce2e87d5-machine-approver-tls\") pod \"machine-approver-56656f9798-6222h\" (UID: \"a46ac2a0-b244-4742-ade1-cd57ce2e87d5\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6222h" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.626087 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1db40f1b-b714-4920-8b27-f350b3dd2978-node-pullsecrets\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.626109 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4ea4294a-f91c-4d13-8372-b1e8b7a73831-encryption-config\") pod \"apiserver-7bbb656c7d-2jnr5\" (UID: \"4ea4294a-f91c-4d13-8372-b1e8b7a73831\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.626137 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgpfk\" (UniqueName: \"kubernetes.io/projected/17c9e844-630f-46d5-a08c-94a6d0b56404-kube-api-access-zgpfk\") pod \"console-operator-58897d9998-c7wmc\" (UID: \"17c9e844-630f-46d5-a08c-94a6d0b56404\") " pod="openshift-console-operator/console-operator-58897d9998-c7wmc" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.626173 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79b92949-b7ec-4d5c-a27e-259972a4a4dd-service-ca-bundle\") pod \"authentication-operator-69f744f599-vdp5g\" (UID: \"79b92949-b7ec-4d5c-a27e-259972a4a4dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vdp5g" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.626222 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.626242 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4gkc\" (UniqueName: \"kubernetes.io/projected/651f9cbc-e905-462d-b42f-84d2a642169d-kube-api-access-n4gkc\") pod \"dns-operator-744455d44c-sh8rh\" (UID: \"651f9cbc-e905-462d-b42f-84d2a642169d\") " pod="openshift-dns-operator/dns-operator-744455d44c-sh8rh" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.626299 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e5e40d7-f505-4b8d-ac40-9677a7ebe781-serving-cert\") pod \"openshift-config-operator-7777fb866f-7pl8w\" (UID: \"7e5e40d7-f505-4b8d-ac40-9677a7ebe781\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7pl8w" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.626338 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30057c46-9a0f-4a04-869a-c63eac9a84f6-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-48hc6\" (UID: \"30057c46-9a0f-4a04-869a-c63eac9a84f6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-48hc6" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.626359 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ea4294a-f91c-4d13-8372-b1e8b7a73831-serving-cert\") pod \"apiserver-7bbb656c7d-2jnr5\" (UID: 
\"4ea4294a-f91c-4d13-8372-b1e8b7a73831\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.626379 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a46ac2a0-b244-4742-ade1-cd57ce2e87d5-auth-proxy-config\") pod \"machine-approver-56656f9798-6222h\" (UID: \"a46ac2a0-b244-4742-ade1-cd57ce2e87d5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6222h" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.626418 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4ea4294a-f91c-4d13-8372-b1e8b7a73831-audit-policies\") pod \"apiserver-7bbb656c7d-2jnr5\" (UID: \"4ea4294a-f91c-4d13-8372-b1e8b7a73831\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.626510 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1db40f1b-b714-4920-8b27-f350b3dd2978-image-import-ca\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.626579 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.626595 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1db40f1b-b714-4920-8b27-f350b3dd2978-node-pullsecrets\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 
21:59:18.627013 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f69bf6d-80ca-4042-8be3-cf335b4a13f4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-c7mg2\" (UID: \"7f69bf6d-80ca-4042-8be3-cf335b4a13f4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7mg2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.626999 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acb5ada5-3567-4f1c-9130-1e78f3e88975-trusted-ca-bundle\") pod \"console-f9d7485db-kj9s2\" (UID: \"acb5ada5-3567-4f1c-9130-1e78f3e88975\") " pod="openshift-console/console-f9d7485db-kj9s2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.627706 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.627797 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63954f23-8000-4ada-8d5d-67297b7c26f6-serving-cert\") pod \"route-controller-manager-6576b87f9c-zrpl2\" (UID: \"63954f23-8000-4ada-8d5d-67297b7c26f6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.627832 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.627909 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1db40f1b-b714-4920-8b27-f350b3dd2978-encryption-config\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.627968 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63954f23-8000-4ada-8d5d-67297b7c26f6-client-ca\") pod \"route-controller-manager-6576b87f9c-zrpl2\" (UID: \"63954f23-8000-4ada-8d5d-67297b7c26f6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.628119 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1db40f1b-b714-4920-8b27-f350b3dd2978-audit\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.628197 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/acb5ada5-3567-4f1c-9130-1e78f3e88975-oauth-serving-cert\") pod \"console-f9d7485db-kj9s2\" (UID: \"acb5ada5-3567-4f1c-9130-1e78f3e88975\") " pod="openshift-console/console-f9d7485db-kj9s2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.628228 4910 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gw6g\" (UniqueName: \"kubernetes.io/projected/acb5ada5-3567-4f1c-9130-1e78f3e88975-kube-api-access-2gw6g\") pod \"console-f9d7485db-kj9s2\" (UID: \"acb5ada5-3567-4f1c-9130-1e78f3e88975\") " pod="openshift-console/console-f9d7485db-kj9s2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.628274 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.628311 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1db40f1b-b714-4920-8b27-f350b3dd2978-trusted-ca-bundle\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.628334 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63954f23-8000-4ada-8d5d-67297b7c26f6-config\") pod \"route-controller-manager-6576b87f9c-zrpl2\" (UID: \"63954f23-8000-4ada-8d5d-67297b7c26f6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.628392 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab54c730-c74c-4988-ac00-e926c9907435-config\") pod \"etcd-operator-b45778765-7g2dk\" (UID: \"ab54c730-c74c-4988-ac00-e926c9907435\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-7g2dk" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.628417 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1bb89394-7073-4408-a891-f4a6eb44eaa7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-nxzt6\" (UID: \"1bb89394-7073-4408-a891-f4a6eb44eaa7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nxzt6" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.628720 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1db40f1b-b714-4920-8b27-f350b3dd2978-audit\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.629561 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1db40f1b-b714-4920-8b27-f350b3dd2978-trusted-ca-bundle\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.630414 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1db40f1b-b714-4920-8b27-f350b3dd2978-encryption-config\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.631567 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f69bf6d-80ca-4042-8be3-cf335b4a13f4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-c7mg2\" (UID: 
\"7f69bf6d-80ca-4042-8be3-cf335b4a13f4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7mg2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.632366 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdbcq"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.632989 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdbcq" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.634748 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.641490 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-kj9s2"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.644988 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2cq6r"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.646890 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535718-4rxms"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.647389 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.647513 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cq6r" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.647620 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535718-4rxms" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.647778 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9nkmk"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.648656 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9nkmk" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.649735 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r4pg9"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.650471 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r4pg9" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.651121 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535705-jc29q"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.651712 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535705-jc29q" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.652559 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-q68m5"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.653872 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4qcps"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.653886 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-q68m5" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.654684 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4qcps" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.655791 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q2jtw"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.656376 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-q2jtw" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.657484 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lpzjm"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.657923 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lpzjm" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.658839 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-g8lnj"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.659636 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g8lnj" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.660270 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.661583 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2fs85"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.662650 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-w89rp"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.663793 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-c7wmc"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.664990 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2hscq"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.666145 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-sh8rh"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.666557 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.667572 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dgm55"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.668923 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-298fw"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.671419 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhr7j"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.674101 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-svj47"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.675330 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-54n7v"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.675414 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hl7fn"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.676288 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdbcq"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.677282 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2cq6r"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.678670 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535718-4rxms"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.680123 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8h9hc"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.682678 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-vz6hr"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.683270 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8h9hc" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.685399 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kn2dp"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.685695 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-vz6hr" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.687592 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.689321 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8n798"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.691547 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4254t"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.692602 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r4pg9"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.694740 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7pl8w"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.695918 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wzqgc"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.696986 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jgdt5"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.698013 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29535705-jc29q"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.699072 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7g2dk"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.700135 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9nkmk"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.701266 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8h9hc"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.702326 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q2jtw"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.703509 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lpzjm"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.704523 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-g8lnj"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.705587 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-q68m5"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.706571 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4qcps"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.706744 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.707575 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-qd595"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.708429 4910 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qd595" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.708640 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qd595"] Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.729468 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4ea4294a-f91c-4d13-8372-b1e8b7a73831-etcd-client\") pod \"apiserver-7bbb656c7d-2jnr5\" (UID: \"4ea4294a-f91c-4d13-8372-b1e8b7a73831\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.729505 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17c9e844-630f-46d5-a08c-94a6d0b56404-serving-cert\") pod \"console-operator-58897d9998-c7wmc\" (UID: \"17c9e844-630f-46d5-a08c-94a6d0b56404\") " pod="openshift-console-operator/console-operator-58897d9998-c7wmc" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.729530 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17c9e844-630f-46d5-a08c-94a6d0b56404-trusted-ca\") pod \"console-operator-58897d9998-c7wmc\" (UID: \"17c9e844-630f-46d5-a08c-94a6d0b56404\") " pod="openshift-console-operator/console-operator-58897d9998-c7wmc" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.729546 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt4wj\" (UniqueName: \"kubernetes.io/projected/0b3633c0-54b9-486c-a14b-99b6e5c04765-kube-api-access-gt4wj\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.729573 4910 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79b92949-b7ec-4d5c-a27e-259972a4a4dd-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vdp5g\" (UID: \"79b92949-b7ec-4d5c-a27e-259972a4a4dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vdp5g" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.729593 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffaf6469-19dc-47f9-a762-f02109d88907-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-54n7v\" (UID: \"ffaf6469-19dc-47f9-a762-f02109d88907\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-54n7v" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.729610 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ab54c730-c74c-4988-ac00-e926c9907435-etcd-service-ca\") pod \"etcd-operator-b45778765-7g2dk\" (UID: \"ab54c730-c74c-4988-ac00-e926c9907435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g2dk" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.729627 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30057c46-9a0f-4a04-869a-c63eac9a84f6-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-48hc6\" (UID: \"30057c46-9a0f-4a04-869a-c63eac9a84f6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-48hc6" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.729643 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4ea4294a-f91c-4d13-8372-b1e8b7a73831-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2jnr5\" (UID: \"4ea4294a-f91c-4d13-8372-b1e8b7a73831\") 
" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.729658 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s25lc\" (UniqueName: \"kubernetes.io/projected/79b92949-b7ec-4d5c-a27e-259972a4a4dd-kube-api-access-s25lc\") pod \"authentication-operator-69f744f599-vdp5g\" (UID: \"79b92949-b7ec-4d5c-a27e-259972a4a4dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vdp5g" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.729675 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/acb5ada5-3567-4f1c-9130-1e78f3e88975-service-ca\") pod \"console-f9d7485db-kj9s2\" (UID: \"acb5ada5-3567-4f1c-9130-1e78f3e88975\") " pod="openshift-console/console-f9d7485db-kj9s2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.729690 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.729711 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1bb89394-7073-4408-a891-f4a6eb44eaa7-images\") pod \"machine-api-operator-5694c8668f-nxzt6\" (UID: \"1bb89394-7073-4408-a891-f4a6eb44eaa7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nxzt6" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.729726 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79b92949-b7ec-4d5c-a27e-259972a4a4dd-config\") pod 
\"authentication-operator-69f744f599-vdp5g\" (UID: \"79b92949-b7ec-4d5c-a27e-259972a4a4dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vdp5g" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.729742 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt9tg\" (UniqueName: \"kubernetes.io/projected/7843f81a-d6bd-463f-b5b7-454e3f943ed8-kube-api-access-zt9tg\") pod \"downloads-7954f5f757-2hscq\" (UID: \"7843f81a-d6bd-463f-b5b7-454e3f943ed8\") " pod="openshift-console/downloads-7954f5f757-2hscq" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.729760 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9321ff73-5107-4139-ad6f-622b13de5cd1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hl7fn\" (UID: \"9321ff73-5107-4139-ad6f-622b13de5cd1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hl7fn" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.729783 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ab54c730-c74c-4988-ac00-e926c9907435-etcd-ca\") pod \"etcd-operator-b45778765-7g2dk\" (UID: \"ab54c730-c74c-4988-ac00-e926c9907435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g2dk" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.729801 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd00464d-5f76-4abd-8c83-ce0821d00dfa-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-svj47\" (UID: \"bd00464d-5f76-4abd-8c83-ce0821d00dfa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-svj47" Feb 26 21:59:18 crc 
kubenswrapper[4910]: I0226 21:59:18.729818 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjm49\" (UniqueName: \"kubernetes.io/projected/ab54c730-c74c-4988-ac00-e926c9907435-kube-api-access-rjm49\") pod \"etcd-operator-b45778765-7g2dk\" (UID: \"ab54c730-c74c-4988-ac00-e926c9907435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g2dk" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.729840 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zflv9\" (UniqueName: \"kubernetes.io/projected/ebb647d3-e0ce-4122-8582-d1a2d4d2c594-kube-api-access-zflv9\") pod \"cluster-samples-operator-665b6dd947-2fs85\" (UID: \"ebb647d3-e0ce-4122-8582-d1a2d4d2c594\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2fs85" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.729855 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bb89394-7073-4408-a891-f4a6eb44eaa7-config\") pod \"machine-api-operator-5694c8668f-nxzt6\" (UID: \"1bb89394-7073-4408-a891-f4a6eb44eaa7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nxzt6" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.729874 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffaf6469-19dc-47f9-a762-f02109d88907-config\") pod \"kube-apiserver-operator-766d6c64bb-54n7v\" (UID: \"ffaf6469-19dc-47f9-a762-f02109d88907\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-54n7v" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.729891 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd00464d-5f76-4abd-8c83-ce0821d00dfa-serving-cert\") pod 
\"kube-controller-manager-operator-78b949d7b-svj47\" (UID: \"bd00464d-5f76-4abd-8c83-ce0821d00dfa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-svj47" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.729931 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.729947 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee0c3a2c-59c9-4f63-93c9-94c498a8d065-metrics-certs\") pod \"router-default-5444994796-w8g2c\" (UID: \"ee0c3a2c-59c9-4f63-93c9-94c498a8d065\") " pod="openshift-ingress/router-default-5444994796-w8g2c" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.729963 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxm22\" (UniqueName: \"kubernetes.io/projected/2678a257-f356-4cad-9ad5-22c264a8810f-kube-api-access-nxm22\") pod \"dns-default-wzqgc\" (UID: \"2678a257-f356-4cad-9ad5-22c264a8810f\") " pod="openshift-dns/dns-default-wzqgc" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.729981 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/acb5ada5-3567-4f1c-9130-1e78f3e88975-console-serving-cert\") pod \"console-f9d7485db-kj9s2\" (UID: \"acb5ada5-3567-4f1c-9130-1e78f3e88975\") " pod="openshift-console/console-f9d7485db-kj9s2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.729998 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a46ac2a0-b244-4742-ade1-cd57ce2e87d5-machine-approver-tls\") pod \"machine-approver-56656f9798-6222h\" (UID: \"a46ac2a0-b244-4742-ade1-cd57ce2e87d5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6222h" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730014 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730034 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4ea4294a-f91c-4d13-8372-b1e8b7a73831-encryption-config\") pod \"apiserver-7bbb656c7d-2jnr5\" (UID: \"4ea4294a-f91c-4d13-8372-b1e8b7a73831\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730051 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgpfk\" (UniqueName: \"kubernetes.io/projected/17c9e844-630f-46d5-a08c-94a6d0b56404-kube-api-access-zgpfk\") pod \"console-operator-58897d9998-c7wmc\" (UID: \"17c9e844-630f-46d5-a08c-94a6d0b56404\") " pod="openshift-console-operator/console-operator-58897d9998-c7wmc" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730067 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79b92949-b7ec-4d5c-a27e-259972a4a4dd-service-ca-bundle\") pod \"authentication-operator-69f744f599-vdp5g\" (UID: \"79b92949-b7ec-4d5c-a27e-259972a4a4dd\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-vdp5g" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730084 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26105f09-3245-4c15-ba60-54f31690d926-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kn2dp\" (UID: \"26105f09-3245-4c15-ba60-54f31690d926\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kn2dp" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730100 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e5e40d7-f505-4b8d-ac40-9677a7ebe781-serving-cert\") pod \"openshift-config-operator-7777fb866f-7pl8w\" (UID: \"7e5e40d7-f505-4b8d-ac40-9677a7ebe781\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7pl8w" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730115 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30057c46-9a0f-4a04-869a-c63eac9a84f6-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-48hc6\" (UID: \"30057c46-9a0f-4a04-869a-c63eac9a84f6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-48hc6" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730131 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730148 4910 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-n4gkc\" (UniqueName: \"kubernetes.io/projected/651f9cbc-e905-462d-b42f-84d2a642169d-kube-api-access-n4gkc\") pod \"dns-operator-744455d44c-sh8rh\" (UID: \"651f9cbc-e905-462d-b42f-84d2a642169d\") " pod="openshift-dns-operator/dns-operator-744455d44c-sh8rh" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730177 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ea4294a-f91c-4d13-8372-b1e8b7a73831-serving-cert\") pod \"apiserver-7bbb656c7d-2jnr5\" (UID: \"4ea4294a-f91c-4d13-8372-b1e8b7a73831\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730193 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ee0c3a2c-59c9-4f63-93c9-94c498a8d065-default-certificate\") pod \"router-default-5444994796-w8g2c\" (UID: \"ee0c3a2c-59c9-4f63-93c9-94c498a8d065\") " pod="openshift-ingress/router-default-5444994796-w8g2c" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730212 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4ea4294a-f91c-4d13-8372-b1e8b7a73831-audit-policies\") pod \"apiserver-7bbb656c7d-2jnr5\" (UID: \"4ea4294a-f91c-4d13-8372-b1e8b7a73831\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730227 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a46ac2a0-b244-4742-ade1-cd57ce2e87d5-auth-proxy-config\") pod \"machine-approver-56656f9798-6222h\" (UID: \"a46ac2a0-b244-4742-ade1-cd57ce2e87d5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6222h" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 
21:59:18.730243 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbbpn\" (UniqueName: \"kubernetes.io/projected/9321ff73-5107-4139-ad6f-622b13de5cd1-kube-api-access-vbbpn\") pod \"control-plane-machine-set-operator-78cbb6b69f-hl7fn\" (UID: \"9321ff73-5107-4139-ad6f-622b13de5cd1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hl7fn" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730259 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acb5ada5-3567-4f1c-9130-1e78f3e88975-trusted-ca-bundle\") pod \"console-f9d7485db-kj9s2\" (UID: \"acb5ada5-3567-4f1c-9130-1e78f3e88975\") " pod="openshift-console/console-f9d7485db-kj9s2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730301 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730317 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm6q2\" (UniqueName: \"kubernetes.io/projected/ee0c3a2c-59c9-4f63-93c9-94c498a8d065-kube-api-access-bm6q2\") pod \"router-default-5444994796-w8g2c\" (UID: \"ee0c3a2c-59c9-4f63-93c9-94c498a8d065\") " pod="openshift-ingress/router-default-5444994796-w8g2c" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730335 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63954f23-8000-4ada-8d5d-67297b7c26f6-client-ca\") pod \"route-controller-manager-6576b87f9c-zrpl2\" 
(UID: \"63954f23-8000-4ada-8d5d-67297b7c26f6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730351 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63954f23-8000-4ada-8d5d-67297b7c26f6-serving-cert\") pod \"route-controller-manager-6576b87f9c-zrpl2\" (UID: \"63954f23-8000-4ada-8d5d-67297b7c26f6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730370 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730384 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/acb5ada5-3567-4f1c-9130-1e78f3e88975-oauth-serving-cert\") pod \"console-f9d7485db-kj9s2\" (UID: \"acb5ada5-3567-4f1c-9130-1e78f3e88975\") " pod="openshift-console/console-f9d7485db-kj9s2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730399 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gw6g\" (UniqueName: \"kubernetes.io/projected/acb5ada5-3567-4f1c-9130-1e78f3e88975-kube-api-access-2gw6g\") pod \"console-f9d7485db-kj9s2\" (UID: \"acb5ada5-3567-4f1c-9130-1e78f3e88975\") " pod="openshift-console/console-f9d7485db-kj9s2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730420 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" 
(UniqueName: \"kubernetes.io/secret/ee0c3a2c-59c9-4f63-93c9-94c498a8d065-stats-auth\") pod \"router-default-5444994796-w8g2c\" (UID: \"ee0c3a2c-59c9-4f63-93c9-94c498a8d065\") " pod="openshift-ingress/router-default-5444994796-w8g2c" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730439 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63954f23-8000-4ada-8d5d-67297b7c26f6-config\") pod \"route-controller-manager-6576b87f9c-zrpl2\" (UID: \"63954f23-8000-4ada-8d5d-67297b7c26f6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730461 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730487 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab54c730-c74c-4988-ac00-e926c9907435-config\") pod \"etcd-operator-b45778765-7g2dk\" (UID: \"ab54c730-c74c-4988-ac00-e926c9907435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g2dk" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730503 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1bb89394-7073-4408-a891-f4a6eb44eaa7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-nxzt6\" (UID: \"1bb89394-7073-4408-a891-f4a6eb44eaa7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nxzt6" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730519 4910 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb80ebfa-1dd8-40e6-9d5e-27311836ccfb-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vhr7j\" (UID: \"cb80ebfa-1dd8-40e6-9d5e-27311836ccfb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhr7j" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730533 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2678a257-f356-4cad-9ad5-22c264a8810f-metrics-tls\") pod \"dns-default-wzqgc\" (UID: \"2678a257-f356-4cad-9ad5-22c264a8810f\") " pod="openshift-dns/dns-default-wzqgc" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730550 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd00464d-5f76-4abd-8c83-ce0821d00dfa-config\") pod \"kube-controller-manager-operator-78b949d7b-svj47\" (UID: \"bd00464d-5f76-4abd-8c83-ce0821d00dfa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-svj47" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730566 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ebb647d3-e0ce-4122-8582-d1a2d4d2c594-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2fs85\" (UID: \"ebb647d3-e0ce-4122-8582-d1a2d4d2c594\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2fs85" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730583 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mktx\" (UniqueName: \"kubernetes.io/projected/a46ac2a0-b244-4742-ade1-cd57ce2e87d5-kube-api-access-7mktx\") pod \"machine-approver-56656f9798-6222h\" (UID: 
\"a46ac2a0-b244-4742-ade1-cd57ce2e87d5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6222h" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730600 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc4fz\" (UniqueName: \"kubernetes.io/projected/30057c46-9a0f-4a04-869a-c63eac9a84f6-kube-api-access-xc4fz\") pod \"openshift-controller-manager-operator-756b6f6bc6-48hc6\" (UID: \"30057c46-9a0f-4a04-869a-c63eac9a84f6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-48hc6" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730616 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/acb5ada5-3567-4f1c-9130-1e78f3e88975-console-oauth-config\") pod \"console-f9d7485db-kj9s2\" (UID: \"acb5ada5-3567-4f1c-9130-1e78f3e88975\") " pod="openshift-console/console-f9d7485db-kj9s2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730631 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/acb5ada5-3567-4f1c-9130-1e78f3e88975-console-config\") pod \"console-f9d7485db-kj9s2\" (UID: \"acb5ada5-3567-4f1c-9130-1e78f3e88975\") " pod="openshift-console/console-f9d7485db-kj9s2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730645 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65br6\" (UniqueName: \"kubernetes.io/projected/63954f23-8000-4ada-8d5d-67297b7c26f6-kube-api-access-65br6\") pod \"route-controller-manager-6576b87f9c-zrpl2\" (UID: \"63954f23-8000-4ada-8d5d-67297b7c26f6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730662 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/17c9e844-630f-46d5-a08c-94a6d0b56404-config\") pod \"console-operator-58897d9998-c7wmc\" (UID: \"17c9e844-630f-46d5-a08c-94a6d0b56404\") " pod="openshift-console-operator/console-operator-58897d9998-c7wmc" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730678 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730697 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4ea4294a-f91c-4d13-8372-b1e8b7a73831-audit-dir\") pod \"apiserver-7bbb656c7d-2jnr5\" (UID: \"4ea4294a-f91c-4d13-8372-b1e8b7a73831\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730722 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26105f09-3245-4c15-ba60-54f31690d926-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kn2dp\" (UID: \"26105f09-3245-4c15-ba60-54f31690d926\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kn2dp" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730739 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt4vx\" (UniqueName: \"kubernetes.io/projected/cb80ebfa-1dd8-40e6-9d5e-27311836ccfb-kube-api-access-nt4vx\") pod \"cluster-image-registry-operator-dc59b4c8b-vhr7j\" (UID: \"cb80ebfa-1dd8-40e6-9d5e-27311836ccfb\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhr7j" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730763 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ea4294a-f91c-4d13-8372-b1e8b7a73831-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2jnr5\" (UID: \"4ea4294a-f91c-4d13-8372-b1e8b7a73831\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730779 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab54c730-c74c-4988-ac00-e926c9907435-serving-cert\") pod \"etcd-operator-b45778765-7g2dk\" (UID: \"ab54c730-c74c-4988-ac00-e926c9907435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g2dk" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730796 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730811 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730830 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ee0c3a2c-59c9-4f63-93c9-94c498a8d065-service-ca-bundle\") pod \"router-default-5444994796-w8g2c\" (UID: \"ee0c3a2c-59c9-4f63-93c9-94c498a8d065\") " pod="openshift-ingress/router-default-5444994796-w8g2c" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730847 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgjtc\" (UniqueName: \"kubernetes.io/projected/7e5e40d7-f505-4b8d-ac40-9677a7ebe781-kube-api-access-mgjtc\") pod \"openshift-config-operator-7777fb866f-7pl8w\" (UID: \"7e5e40d7-f505-4b8d-ac40-9677a7ebe781\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7pl8w" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730866 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a46ac2a0-b244-4742-ade1-cd57ce2e87d5-config\") pod \"machine-approver-56656f9798-6222h\" (UID: \"a46ac2a0-b244-4742-ade1-cd57ce2e87d5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6222h" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730882 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2678a257-f356-4cad-9ad5-22c264a8810f-config-volume\") pod \"dns-default-wzqgc\" (UID: \"2678a257-f356-4cad-9ad5-22c264a8810f\") " pod="openshift-dns/dns-default-wzqgc" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730898 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730915 4910 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79b92949-b7ec-4d5c-a27e-259972a4a4dd-serving-cert\") pod \"authentication-operator-69f744f599-vdp5g\" (UID: \"79b92949-b7ec-4d5c-a27e-259972a4a4dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vdp5g" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730930 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ab54c730-c74c-4988-ac00-e926c9907435-etcd-client\") pod \"etcd-operator-b45778765-7g2dk\" (UID: \"ab54c730-c74c-4988-ac00-e926c9907435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g2dk" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730946 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgpgw\" (UniqueName: \"kubernetes.io/projected/4ea4294a-f91c-4d13-8372-b1e8b7a73831-kube-api-access-cgpgw\") pod \"apiserver-7bbb656c7d-2jnr5\" (UID: \"4ea4294a-f91c-4d13-8372-b1e8b7a73831\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730960 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b3633c0-54b9-486c-a14b-99b6e5c04765-audit-dir\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730975 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffaf6469-19dc-47f9-a762-f02109d88907-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-54n7v\" (UID: \"ffaf6469-19dc-47f9-a762-f02109d88907\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-54n7v" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.730991 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/651f9cbc-e905-462d-b42f-84d2a642169d-metrics-tls\") pod \"dns-operator-744455d44c-sh8rh\" (UID: \"651f9cbc-e905-462d-b42f-84d2a642169d\") " pod="openshift-dns-operator/dns-operator-744455d44c-sh8rh" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.731007 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26105f09-3245-4c15-ba60-54f31690d926-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kn2dp\" (UID: \"26105f09-3245-4c15-ba60-54f31690d926\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kn2dp" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.731027 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cb80ebfa-1dd8-40e6-9d5e-27311836ccfb-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vhr7j\" (UID: \"cb80ebfa-1dd8-40e6-9d5e-27311836ccfb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhr7j" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.731042 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0b3633c0-54b9-486c-a14b-99b6e5c04765-audit-policies\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.731058 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqlln\" (UniqueName: 
\"kubernetes.io/projected/09c3c040-a9ac-441c-a1ce-b7d67233579a-kube-api-access-qqlln\") pod \"migrator-59844c95c7-w89rp\" (UID: \"09c3c040-a9ac-441c-a1ce-b7d67233579a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w89rp" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.731074 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7e5e40d7-f505-4b8d-ac40-9677a7ebe781-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7pl8w\" (UID: \"7e5e40d7-f505-4b8d-ac40-9677a7ebe781\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7pl8w" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.731089 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cb80ebfa-1dd8-40e6-9d5e-27311836ccfb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vhr7j\" (UID: \"cb80ebfa-1dd8-40e6-9d5e-27311836ccfb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhr7j" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.731105 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44cwj\" (UniqueName: \"kubernetes.io/projected/1bb89394-7073-4408-a891-f4a6eb44eaa7-kube-api-access-44cwj\") pod \"machine-api-operator-5694c8668f-nxzt6\" (UID: \"1bb89394-7073-4408-a891-f4a6eb44eaa7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nxzt6" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.732690 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63954f23-8000-4ada-8d5d-67297b7c26f6-client-ca\") pod \"route-controller-manager-6576b87f9c-zrpl2\" (UID: \"63954f23-8000-4ada-8d5d-67297b7c26f6\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.732827 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a46ac2a0-b244-4742-ade1-cd57ce2e87d5-config\") pod \"machine-approver-56656f9798-6222h\" (UID: \"a46ac2a0-b244-4742-ade1-cd57ce2e87d5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6222h" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.732772 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/acb5ada5-3567-4f1c-9130-1e78f3e88975-oauth-serving-cert\") pod \"console-f9d7485db-kj9s2\" (UID: \"acb5ada5-3567-4f1c-9130-1e78f3e88975\") " pod="openshift-console/console-f9d7485db-kj9s2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.732995 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.733563 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab54c730-c74c-4988-ac00-e926c9907435-config\") pod \"etcd-operator-b45778765-7g2dk\" (UID: \"ab54c730-c74c-4988-ac00-e926c9907435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g2dk" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.733680 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b3633c0-54b9-486c-a14b-99b6e5c04765-audit-dir\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.733705 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/0b3633c0-54b9-486c-a14b-99b6e5c04765-audit-policies\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.734044 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cb80ebfa-1dd8-40e6-9d5e-27311836ccfb-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vhr7j\" (UID: \"cb80ebfa-1dd8-40e6-9d5e-27311836ccfb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhr7j" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.734053 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ea4294a-f91c-4d13-8372-b1e8b7a73831-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2jnr5\" (UID: \"4ea4294a-f91c-4d13-8372-b1e8b7a73831\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.734591 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7e5e40d7-f505-4b8d-ac40-9677a7ebe781-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7pl8w\" (UID: \"7e5e40d7-f505-4b8d-ac40-9677a7ebe781\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7pl8w" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.735582 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17c9e844-630f-46d5-a08c-94a6d0b56404-config\") pod \"console-operator-58897d9998-c7wmc\" (UID: \"17c9e844-630f-46d5-a08c-94a6d0b56404\") " pod="openshift-console-operator/console-operator-58897d9998-c7wmc" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.735605 4910 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1bb89394-7073-4408-a891-f4a6eb44eaa7-images\") pod \"machine-api-operator-5694c8668f-nxzt6\" (UID: \"1bb89394-7073-4408-a891-f4a6eb44eaa7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nxzt6" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.736218 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.736327 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/acb5ada5-3567-4f1c-9130-1e78f3e88975-console-config\") pod \"console-f9d7485db-kj9s2\" (UID: \"acb5ada5-3567-4f1c-9130-1e78f3e88975\") " pod="openshift-console/console-f9d7485db-kj9s2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.736355 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17c9e844-630f-46d5-a08c-94a6d0b56404-serving-cert\") pod \"console-operator-58897d9998-c7wmc\" (UID: \"17c9e844-630f-46d5-a08c-94a6d0b56404\") " pod="openshift-console-operator/console-operator-58897d9998-c7wmc" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.736404 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17c9e844-630f-46d5-a08c-94a6d0b56404-trusted-ca\") pod \"console-operator-58897d9998-c7wmc\" (UID: \"17c9e844-630f-46d5-a08c-94a6d0b56404\") " pod="openshift-console-operator/console-operator-58897d9998-c7wmc" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 
21:59:18.736870 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4ea4294a-f91c-4d13-8372-b1e8b7a73831-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2jnr5\" (UID: \"4ea4294a-f91c-4d13-8372-b1e8b7a73831\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.737016 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30057c46-9a0f-4a04-869a-c63eac9a84f6-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-48hc6\" (UID: \"30057c46-9a0f-4a04-869a-c63eac9a84f6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-48hc6" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.737262 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1bb89394-7073-4408-a891-f4a6eb44eaa7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-nxzt6\" (UID: \"1bb89394-7073-4408-a891-f4a6eb44eaa7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nxzt6" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.737379 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/acb5ada5-3567-4f1c-9130-1e78f3e88975-console-oauth-config\") pod \"console-f9d7485db-kj9s2\" (UID: \"acb5ada5-3567-4f1c-9130-1e78f3e88975\") " pod="openshift-console/console-f9d7485db-kj9s2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.737582 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63954f23-8000-4ada-8d5d-67297b7c26f6-config\") pod \"route-controller-manager-6576b87f9c-zrpl2\" (UID: \"63954f23-8000-4ada-8d5d-67297b7c26f6\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.737755 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/acb5ada5-3567-4f1c-9130-1e78f3e88975-service-ca\") pod \"console-f9d7485db-kj9s2\" (UID: \"acb5ada5-3567-4f1c-9130-1e78f3e88975\") " pod="openshift-console/console-f9d7485db-kj9s2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.738148 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.738741 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4ea4294a-f91c-4d13-8372-b1e8b7a73831-audit-dir\") pod \"apiserver-7bbb656c7d-2jnr5\" (UID: \"4ea4294a-f91c-4d13-8372-b1e8b7a73831\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.739307 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.739378 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79b92949-b7ec-4d5c-a27e-259972a4a4dd-trusted-ca-bundle\") pod 
\"authentication-operator-69f744f599-vdp5g\" (UID: \"79b92949-b7ec-4d5c-a27e-259972a4a4dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vdp5g" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.740086 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4ea4294a-f91c-4d13-8372-b1e8b7a73831-etcd-client\") pod \"apiserver-7bbb656c7d-2jnr5\" (UID: \"4ea4294a-f91c-4d13-8372-b1e8b7a73831\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.740221 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79b92949-b7ec-4d5c-a27e-259972a4a4dd-serving-cert\") pod \"authentication-operator-69f744f599-vdp5g\" (UID: \"79b92949-b7ec-4d5c-a27e-259972a4a4dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vdp5g" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.740687 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63954f23-8000-4ada-8d5d-67297b7c26f6-serving-cert\") pod \"route-controller-manager-6576b87f9c-zrpl2\" (UID: \"63954f23-8000-4ada-8d5d-67297b7c26f6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.740882 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/acb5ada5-3567-4f1c-9130-1e78f3e88975-console-serving-cert\") pod \"console-f9d7485db-kj9s2\" (UID: \"acb5ada5-3567-4f1c-9130-1e78f3e88975\") " pod="openshift-console/console-f9d7485db-kj9s2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.741472 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ab54c730-c74c-4988-ac00-e926c9907435-serving-cert\") pod \"etcd-operator-b45778765-7g2dk\" (UID: \"ab54c730-c74c-4988-ac00-e926c9907435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g2dk" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.741482 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.741685 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.741905 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.741928 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ea4294a-f91c-4d13-8372-b1e8b7a73831-serving-cert\") pod \"apiserver-7bbb656c7d-2jnr5\" (UID: \"4ea4294a-f91c-4d13-8372-b1e8b7a73831\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.742515 
4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79b92949-b7ec-4d5c-a27e-259972a4a4dd-config\") pod \"authentication-operator-69f744f599-vdp5g\" (UID: \"79b92949-b7ec-4d5c-a27e-259972a4a4dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vdp5g" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.742629 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.743092 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ebb647d3-e0ce-4122-8582-d1a2d4d2c594-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2fs85\" (UID: \"ebb647d3-e0ce-4122-8582-d1a2d4d2c594\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2fs85" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.743301 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cb80ebfa-1dd8-40e6-9d5e-27311836ccfb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vhr7j\" (UID: \"cb80ebfa-1dd8-40e6-9d5e-27311836ccfb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhr7j" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.743442 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/651f9cbc-e905-462d-b42f-84d2a642169d-metrics-tls\") pod \"dns-operator-744455d44c-sh8rh\" (UID: 
\"651f9cbc-e905-462d-b42f-84d2a642169d\") " pod="openshift-dns-operator/dns-operator-744455d44c-sh8rh" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.745536 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ab54c730-c74c-4988-ac00-e926c9907435-etcd-client\") pod \"etcd-operator-b45778765-7g2dk\" (UID: \"ab54c730-c74c-4988-ac00-e926c9907435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g2dk" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.747060 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.753512 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4ea4294a-f91c-4d13-8372-b1e8b7a73831-audit-policies\") pod \"apiserver-7bbb656c7d-2jnr5\" (UID: \"4ea4294a-f91c-4d13-8372-b1e8b7a73831\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.754032 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a46ac2a0-b244-4742-ade1-cd57ce2e87d5-auth-proxy-config\") pod \"machine-approver-56656f9798-6222h\" (UID: \"a46ac2a0-b244-4742-ade1-cd57ce2e87d5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6222h" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.755099 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acb5ada5-3567-4f1c-9130-1e78f3e88975-trusted-ca-bundle\") pod \"console-f9d7485db-kj9s2\" (UID: \"acb5ada5-3567-4f1c-9130-1e78f3e88975\") " pod="openshift-console/console-f9d7485db-kj9s2" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.755672 4910 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79b92949-b7ec-4d5c-a27e-259972a4a4dd-service-ca-bundle\") pod \"authentication-operator-69f744f599-vdp5g\" (UID: \"79b92949-b7ec-4d5c-a27e-259972a4a4dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vdp5g" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.756364 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.756717 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.756865 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ab54c730-c74c-4988-ac00-e926c9907435-etcd-ca\") pod \"etcd-operator-b45778765-7g2dk\" (UID: \"ab54c730-c74c-4988-ac00-e926c9907435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g2dk" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.756992 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ab54c730-c74c-4988-ac00-e926c9907435-etcd-service-ca\") pod \"etcd-operator-b45778765-7g2dk\" (UID: \"ab54c730-c74c-4988-ac00-e926c9907435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g2dk" Feb 26 21:59:18 crc 
kubenswrapper[4910]: I0226 21:59:18.757320 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bb89394-7073-4408-a891-f4a6eb44eaa7-config\") pod \"machine-api-operator-5694c8668f-nxzt6\" (UID: \"1bb89394-7073-4408-a891-f4a6eb44eaa7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nxzt6" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.757374 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.758574 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e5e40d7-f505-4b8d-ac40-9677a7ebe781-serving-cert\") pod \"openshift-config-operator-7777fb866f-7pl8w\" (UID: \"7e5e40d7-f505-4b8d-ac40-9677a7ebe781\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7pl8w" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.759581 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30057c46-9a0f-4a04-869a-c63eac9a84f6-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-48hc6\" (UID: \"30057c46-9a0f-4a04-869a-c63eac9a84f6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-48hc6" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.759708 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dgm55\" 
(UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.762454 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4ea4294a-f91c-4d13-8372-b1e8b7a73831-encryption-config\") pod \"apiserver-7bbb656c7d-2jnr5\" (UID: \"4ea4294a-f91c-4d13-8372-b1e8b7a73831\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.767481 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.806655 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.823429 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a46ac2a0-b244-4742-ade1-cd57ce2e87d5-machine-approver-tls\") pod \"machine-approver-56656f9798-6222h\" (UID: \"a46ac2a0-b244-4742-ade1-cd57ce2e87d5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6222h" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.827526 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.831668 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26105f09-3245-4c15-ba60-54f31690d926-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kn2dp\" (UID: \"26105f09-3245-4c15-ba60-54f31690d926\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kn2dp" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.831708 
4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ee0c3a2c-59c9-4f63-93c9-94c498a8d065-default-certificate\") pod \"router-default-5444994796-w8g2c\" (UID: \"ee0c3a2c-59c9-4f63-93c9-94c498a8d065\") " pod="openshift-ingress/router-default-5444994796-w8g2c" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.831727 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbbpn\" (UniqueName: \"kubernetes.io/projected/9321ff73-5107-4139-ad6f-622b13de5cd1-kube-api-access-vbbpn\") pod \"control-plane-machine-set-operator-78cbb6b69f-hl7fn\" (UID: \"9321ff73-5107-4139-ad6f-622b13de5cd1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hl7fn" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.831749 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm6q2\" (UniqueName: \"kubernetes.io/projected/ee0c3a2c-59c9-4f63-93c9-94c498a8d065-kube-api-access-bm6q2\") pod \"router-default-5444994796-w8g2c\" (UID: \"ee0c3a2c-59c9-4f63-93c9-94c498a8d065\") " pod="openshift-ingress/router-default-5444994796-w8g2c" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.831778 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ee0c3a2c-59c9-4f63-93c9-94c498a8d065-stats-auth\") pod \"router-default-5444994796-w8g2c\" (UID: \"ee0c3a2c-59c9-4f63-93c9-94c498a8d065\") " pod="openshift-ingress/router-default-5444994796-w8g2c" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.831806 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2678a257-f356-4cad-9ad5-22c264a8810f-metrics-tls\") pod \"dns-default-wzqgc\" (UID: \"2678a257-f356-4cad-9ad5-22c264a8810f\") " pod="openshift-dns/dns-default-wzqgc" Feb 26 21:59:18 crc 
kubenswrapper[4910]: I0226 21:59:18.831824 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd00464d-5f76-4abd-8c83-ce0821d00dfa-config\") pod \"kube-controller-manager-operator-78b949d7b-svj47\" (UID: \"bd00464d-5f76-4abd-8c83-ce0821d00dfa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-svj47" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.831892 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26105f09-3245-4c15-ba60-54f31690d926-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kn2dp\" (UID: \"26105f09-3245-4c15-ba60-54f31690d926\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kn2dp" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.831919 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee0c3a2c-59c9-4f63-93c9-94c498a8d065-service-ca-bundle\") pod \"router-default-5444994796-w8g2c\" (UID: \"ee0c3a2c-59c9-4f63-93c9-94c498a8d065\") " pod="openshift-ingress/router-default-5444994796-w8g2c" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.831944 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2678a257-f356-4cad-9ad5-22c264a8810f-config-volume\") pod \"dns-default-wzqgc\" (UID: \"2678a257-f356-4cad-9ad5-22c264a8810f\") " pod="openshift-dns/dns-default-wzqgc" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.831966 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffaf6469-19dc-47f9-a762-f02109d88907-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-54n7v\" (UID: 
\"ffaf6469-19dc-47f9-a762-f02109d88907\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-54n7v" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.831982 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26105f09-3245-4c15-ba60-54f31690d926-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kn2dp\" (UID: \"26105f09-3245-4c15-ba60-54f31690d926\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kn2dp" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.832004 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqlln\" (UniqueName: \"kubernetes.io/projected/09c3c040-a9ac-441c-a1ce-b7d67233579a-kube-api-access-qqlln\") pod \"migrator-59844c95c7-w89rp\" (UID: \"09c3c040-a9ac-441c-a1ce-b7d67233579a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w89rp" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.832037 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffaf6469-19dc-47f9-a762-f02109d88907-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-54n7v\" (UID: \"ffaf6469-19dc-47f9-a762-f02109d88907\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-54n7v" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.832078 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9321ff73-5107-4139-ad6f-622b13de5cd1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hl7fn\" (UID: \"9321ff73-5107-4139-ad6f-622b13de5cd1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hl7fn" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.832102 4910 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd00464d-5f76-4abd-8c83-ce0821d00dfa-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-svj47\" (UID: \"bd00464d-5f76-4abd-8c83-ce0821d00dfa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-svj47" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.832130 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd00464d-5f76-4abd-8c83-ce0821d00dfa-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-svj47\" (UID: \"bd00464d-5f76-4abd-8c83-ce0821d00dfa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-svj47" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.832150 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffaf6469-19dc-47f9-a762-f02109d88907-config\") pod \"kube-apiserver-operator-766d6c64bb-54n7v\" (UID: \"ffaf6469-19dc-47f9-a762-f02109d88907\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-54n7v" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.832200 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee0c3a2c-59c9-4f63-93c9-94c498a8d065-metrics-certs\") pod \"router-default-5444994796-w8g2c\" (UID: \"ee0c3a2c-59c9-4f63-93c9-94c498a8d065\") " pod="openshift-ingress/router-default-5444994796-w8g2c" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.832235 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxm22\" (UniqueName: \"kubernetes.io/projected/2678a257-f356-4cad-9ad5-22c264a8810f-kube-api-access-nxm22\") pod \"dns-default-wzqgc\" (UID: 
\"2678a257-f356-4cad-9ad5-22c264a8810f\") " pod="openshift-dns/dns-default-wzqgc" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.847254 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.875357 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.887457 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.907519 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.927785 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.948296 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.967727 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.986618 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 26 21:59:18 crc kubenswrapper[4910]: I0226 21:59:18.997594 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffaf6469-19dc-47f9-a762-f02109d88907-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-54n7v\" (UID: \"ffaf6469-19dc-47f9-a762-f02109d88907\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-54n7v" 
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.007200 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.012961 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffaf6469-19dc-47f9-a762-f02109d88907-config\") pod \"kube-apiserver-operator-766d6c64bb-54n7v\" (UID: \"ffaf6469-19dc-47f9-a762-f02109d88907\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-54n7v" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.039642 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.045928 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9321ff73-5107-4139-ad6f-622b13de5cd1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hl7fn\" (UID: \"9321ff73-5107-4139-ad6f-622b13de5cd1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hl7fn" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.047797 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.068058 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.087404 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.107355 4910 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.127235 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.147773 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.166875 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.187462 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.206959 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.227707 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.236835 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2678a257-f356-4cad-9ad5-22c264a8810f-metrics-tls\") pod \"dns-default-wzqgc\" (UID: \"2678a257-f356-4cad-9ad5-22c264a8810f\") " pod="openshift-dns/dns-default-wzqgc" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.247504 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.267815 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 26 21:59:19 crc 
kubenswrapper[4910]: I0226 21:59:19.273660 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2678a257-f356-4cad-9ad5-22c264a8810f-config-volume\") pod \"dns-default-wzqgc\" (UID: \"2678a257-f356-4cad-9ad5-22c264a8810f\") " pod="openshift-dns/dns-default-wzqgc" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.287623 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.307675 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.328235 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.337461 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd00464d-5f76-4abd-8c83-ce0821d00dfa-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-svj47\" (UID: \"bd00464d-5f76-4abd-8c83-ce0821d00dfa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-svj47" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.347104 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.367870 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.388328 4910 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.393234 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd00464d-5f76-4abd-8c83-ce0821d00dfa-config\") pod \"kube-controller-manager-operator-78b949d7b-svj47\" (UID: \"bd00464d-5f76-4abd-8c83-ce0821d00dfa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-svj47" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.433793 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grtc8\" (UniqueName: \"kubernetes.io/projected/dbbce4f0-e239-41ed-98b6-b5b84a303b34-kube-api-access-grtc8\") pod \"controller-manager-879f6c89f-bw8qw\" (UID: \"dbbce4f0-e239-41ed-98b6-b5b84a303b34\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bw8qw" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.435958 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bw8qw" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.453015 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrwsw\" (UniqueName: \"kubernetes.io/projected/1db40f1b-b714-4920-8b27-f350b3dd2978-kube-api-access-qrwsw\") pod \"apiserver-76f77b778f-kz5fx\" (UID: \"1db40f1b-b714-4920-8b27-f350b3dd2978\") " pod="openshift-apiserver/apiserver-76f77b778f-kz5fx" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.469513 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.475214 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42vjt\" (UniqueName: \"kubernetes.io/projected/7f69bf6d-80ca-4042-8be3-cf335b4a13f4-kube-api-access-42vjt\") pod \"openshift-apiserver-operator-796bbdcf4f-c7mg2\" (UID: \"7f69bf6d-80ca-4042-8be3-cf335b4a13f4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7mg2" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.488939 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.508269 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.519428 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26105f09-3245-4c15-ba60-54f31690d926-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kn2dp\" (UID: \"26105f09-3245-4c15-ba60-54f31690d926\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kn2dp" 
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.529358 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.534210 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26105f09-3245-4c15-ba60-54f31690d926-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kn2dp\" (UID: \"26105f09-3245-4c15-ba60-54f31690d926\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kn2dp"
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.548028 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.568854 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.587233 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.597249 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ee0c3a2c-59c9-4f63-93c9-94c498a8d065-default-certificate\") pod \"router-default-5444994796-w8g2c\" (UID: \"ee0c3a2c-59c9-4f63-93c9-94c498a8d065\") " pod="openshift-ingress/router-default-5444994796-w8g2c"
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.608074 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.617943 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ee0c3a2c-59c9-4f63-93c9-94c498a8d065-stats-auth\") pod \"router-default-5444994796-w8g2c\" (UID: \"ee0c3a2c-59c9-4f63-93c9-94c498a8d065\") " pod="openshift-ingress/router-default-5444994796-w8g2c"
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.626414 4910 request.go:700] Waited for 1.002177248s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.629496 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.638724 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-kz5fx"
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.648029 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.676093 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee0c3a2c-59c9-4f63-93c9-94c498a8d065-metrics-certs\") pod \"router-default-5444994796-w8g2c\" (UID: \"ee0c3a2c-59c9-4f63-93c9-94c498a8d065\") " pod="openshift-ingress/router-default-5444994796-w8g2c"
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.678518 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.683316 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bw8qw"]
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.684267 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee0c3a2c-59c9-4f63-93c9-94c498a8d065-service-ca-bundle\") pod \"router-default-5444994796-w8g2c\" (UID: \"ee0c3a2c-59c9-4f63-93c9-94c498a8d065\") " pod="openshift-ingress/router-default-5444994796-w8g2c"
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.691468 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7mg2"
Feb 26 21:59:19 crc kubenswrapper[4910]: W0226 21:59:19.698660 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbbce4f0_e239_41ed_98b6_b5b84a303b34.slice/crio-0cb0209380424b9c8394495836b187579edfa85d74d753e293ad27720b70bde2 WatchSource:0}: Error finding container 0cb0209380424b9c8394495836b187579edfa85d74d753e293ad27720b70bde2: Status 404 returned error can't find the container with id 0cb0209380424b9c8394495836b187579edfa85d74d753e293ad27720b70bde2
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.707759 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.727858 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.748703 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.768792 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.787730 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.808213 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.827384 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.847061 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.868281 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-kz5fx"]
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.869711 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 21:59:19 crc kubenswrapper[4910]: W0226 21:59:19.877860 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1db40f1b_b714_4920_8b27_f350b3dd2978.slice/crio-f587349e7a48e73606f3c8be1582b6236098c7167c9e96986fd7f892179eca71 WatchSource:0}: Error finding container f587349e7a48e73606f3c8be1582b6236098c7167c9e96986fd7f892179eca71: Status 404 returned error can't find the container with id f587349e7a48e73606f3c8be1582b6236098c7167c9e96986fd7f892179eca71
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.887940 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.889060 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7mg2"]
Feb 26 21:59:19 crc kubenswrapper[4910]: W0226 21:59:19.895575 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f69bf6d_80ca_4042_8be3_cf335b4a13f4.slice/crio-171ea86b2b14c09876961bf3d165cc63d8faf276c467c6d9f27bf7f93a70fc32 WatchSource:0}: Error finding container 171ea86b2b14c09876961bf3d165cc63d8faf276c467c6d9f27bf7f93a70fc32: Status 404 returned error can't find the container with id 171ea86b2b14c09876961bf3d165cc63d8faf276c467c6d9f27bf7f93a70fc32
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.906895 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.927557 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.946603 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.967435 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 26 21:59:19 crc kubenswrapper[4910]: I0226 21:59:19.987092 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.006627 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.026766 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.047486 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.068949 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.086665 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.107149 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.127183 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.148079 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.166820 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.187792 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.206631 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.236482 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.247509 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.268812 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.287839 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.307677 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.327234 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.347621 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.367068 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.387907 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.408338 4910 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.428246 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.449048 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.467601 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.488049 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.508305 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.529141 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.532827 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7mg2" event={"ID":"7f69bf6d-80ca-4042-8be3-cf335b4a13f4","Type":"ContainerStarted","Data":"d70195369b671ff92ca13b2e94979bc7642d31f8def00a8ff738f6baa86409cd"}
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.532895 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7mg2" event={"ID":"7f69bf6d-80ca-4042-8be3-cf335b4a13f4","Type":"ContainerStarted","Data":"171ea86b2b14c09876961bf3d165cc63d8faf276c467c6d9f27bf7f93a70fc32"}
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.535717 4910 generic.go:334] "Generic (PLEG): container finished" podID="1db40f1b-b714-4920-8b27-f350b3dd2978" containerID="0b3c185510be991ad61cc64a64ecd4b0587f0538d60b7f103fd284ade066aa81" exitCode=0
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.535844 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-kz5fx" event={"ID":"1db40f1b-b714-4920-8b27-f350b3dd2978","Type":"ContainerDied","Data":"0b3c185510be991ad61cc64a64ecd4b0587f0538d60b7f103fd284ade066aa81"}
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.535893 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-kz5fx" event={"ID":"1db40f1b-b714-4920-8b27-f350b3dd2978","Type":"ContainerStarted","Data":"f587349e7a48e73606f3c8be1582b6236098c7167c9e96986fd7f892179eca71"}
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.539481 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bw8qw" event={"ID":"dbbce4f0-e239-41ed-98b6-b5b84a303b34","Type":"ContainerStarted","Data":"6051b30d93b2851ee01c32ce89025a575a2b9b827036d07e737817e2806705ef"}
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.539555 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bw8qw" event={"ID":"dbbce4f0-e239-41ed-98b6-b5b84a303b34","Type":"ContainerStarted","Data":"0cb0209380424b9c8394495836b187579edfa85d74d753e293ad27720b70bde2"}
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.540095 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-bw8qw"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.543511 4910 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-bw8qw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.543587 4910 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-bw8qw" podUID="dbbce4f0-e239-41ed-98b6-b5b84a303b34" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.547719 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.598692 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44cwj\" (UniqueName: \"kubernetes.io/projected/1bb89394-7073-4408-a891-f4a6eb44eaa7-kube-api-access-44cwj\") pod \"machine-api-operator-5694c8668f-nxzt6\" (UID: \"1bb89394-7073-4408-a891-f4a6eb44eaa7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nxzt6"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.605885 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgjtc\" (UniqueName: \"kubernetes.io/projected/7e5e40d7-f505-4b8d-ac40-9677a7ebe781-kube-api-access-mgjtc\") pod \"openshift-config-operator-7777fb866f-7pl8w\" (UID: \"7e5e40d7-f505-4b8d-ac40-9677a7ebe781\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7pl8w"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.625916 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gw6g\" (UniqueName: \"kubernetes.io/projected/acb5ada5-3567-4f1c-9130-1e78f3e88975-kube-api-access-2gw6g\") pod \"console-f9d7485db-kj9s2\" (UID: \"acb5ada5-3567-4f1c-9130-1e78f3e88975\") " pod="openshift-console/console-f9d7485db-kj9s2"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.645548 4910 request.go:700] Waited for 1.911295874s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.647210 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt4vx\" (UniqueName: \"kubernetes.io/projected/cb80ebfa-1dd8-40e6-9d5e-27311836ccfb-kube-api-access-nt4vx\") pod \"cluster-image-registry-operator-dc59b4c8b-vhr7j\" (UID: \"cb80ebfa-1dd8-40e6-9d5e-27311836ccfb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhr7j"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.667870 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb80ebfa-1dd8-40e6-9d5e-27311836ccfb-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vhr7j\" (UID: \"cb80ebfa-1dd8-40e6-9d5e-27311836ccfb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhr7j"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.688233 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgpgw\" (UniqueName: \"kubernetes.io/projected/4ea4294a-f91c-4d13-8372-b1e8b7a73831-kube-api-access-cgpgw\") pod \"apiserver-7bbb656c7d-2jnr5\" (UID: \"4ea4294a-f91c-4d13-8372-b1e8b7a73831\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.695931 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-nxzt6"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.704135 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65br6\" (UniqueName: \"kubernetes.io/projected/63954f23-8000-4ada-8d5d-67297b7c26f6-kube-api-access-65br6\") pod \"route-controller-manager-6576b87f9c-zrpl2\" (UID: \"63954f23-8000-4ada-8d5d-67297b7c26f6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.720882 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s25lc\" (UniqueName: \"kubernetes.io/projected/79b92949-b7ec-4d5c-a27e-259972a4a4dd-kube-api-access-s25lc\") pod \"authentication-operator-69f744f599-vdp5g\" (UID: \"79b92949-b7ec-4d5c-a27e-259972a4a4dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vdp5g"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.746614 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt4wj\" (UniqueName: \"kubernetes.io/projected/0b3633c0-54b9-486c-a14b-99b6e5c04765-kube-api-access-gt4wj\") pod \"oauth-openshift-558db77b4-dgm55\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " pod="openshift-authentication/oauth-openshift-558db77b4-dgm55"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.749385 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-vdp5g"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.756594 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhr7j"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.763185 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-kj9s2"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.770930 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4gkc\" (UniqueName: \"kubernetes.io/projected/651f9cbc-e905-462d-b42f-84d2a642169d-kube-api-access-n4gkc\") pod \"dns-operator-744455d44c-sh8rh\" (UID: \"651f9cbc-e905-462d-b42f-84d2a642169d\") " pod="openshift-dns-operator/dns-operator-744455d44c-sh8rh"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.796145 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-sh8rh"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.802874 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc4fz\" (UniqueName: \"kubernetes.io/projected/30057c46-9a0f-4a04-869a-c63eac9a84f6-kube-api-access-xc4fz\") pod \"openshift-controller-manager-operator-756b6f6bc6-48hc6\" (UID: \"30057c46-9a0f-4a04-869a-c63eac9a84f6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-48hc6"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.805048 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7pl8w"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.810013 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mktx\" (UniqueName: \"kubernetes.io/projected/a46ac2a0-b244-4742-ade1-cd57ce2e87d5-kube-api-access-7mktx\") pod \"machine-approver-56656f9798-6222h\" (UID: \"a46ac2a0-b244-4742-ade1-cd57ce2e87d5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6222h"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.817271 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dgm55"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.826381 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt9tg\" (UniqueName: \"kubernetes.io/projected/7843f81a-d6bd-463f-b5b7-454e3f943ed8-kube-api-access-zt9tg\") pod \"downloads-7954f5f757-2hscq\" (UID: \"7843f81a-d6bd-463f-b5b7-454e3f943ed8\") " pod="openshift-console/downloads-7954f5f757-2hscq"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.841677 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjm49\" (UniqueName: \"kubernetes.io/projected/ab54c730-c74c-4988-ac00-e926c9907435-kube-api-access-rjm49\") pod \"etcd-operator-b45778765-7g2dk\" (UID: \"ab54c730-c74c-4988-ac00-e926c9907435\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7g2dk"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.866555 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgpfk\" (UniqueName: \"kubernetes.io/projected/17c9e844-630f-46d5-a08c-94a6d0b56404-kube-api-access-zgpfk\") pod \"console-operator-58897d9998-c7wmc\" (UID: \"17c9e844-630f-46d5-a08c-94a6d0b56404\") " pod="openshift-console-operator/console-operator-58897d9998-c7wmc"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.892863 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zflv9\" (UniqueName: \"kubernetes.io/projected/ebb647d3-e0ce-4122-8582-d1a2d4d2c594-kube-api-access-zflv9\") pod \"cluster-samples-operator-665b6dd947-2fs85\" (UID: \"ebb647d3-e0ce-4122-8582-d1a2d4d2c594\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2fs85"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.922411 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-48hc6"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.924807 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbbpn\" (UniqueName: \"kubernetes.io/projected/9321ff73-5107-4139-ad6f-622b13de5cd1-kube-api-access-vbbpn\") pod \"control-plane-machine-set-operator-78cbb6b69f-hl7fn\" (UID: \"9321ff73-5107-4139-ad6f-622b13de5cd1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hl7fn"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.961999 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.976627 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26105f09-3245-4c15-ba60-54f31690d926-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kn2dp\" (UID: \"26105f09-3245-4c15-ba60-54f31690d926\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kn2dp"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.976869 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2"
Feb 26 21:59:20 crc kubenswrapper[4910]: I0226 21:59:20.977039 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm6q2\" (UniqueName: \"kubernetes.io/projected/ee0c3a2c-59c9-4f63-93c9-94c498a8d065-kube-api-access-bm6q2\") pod \"router-default-5444994796-w8g2c\" (UID: \"ee0c3a2c-59c9-4f63-93c9-94c498a8d065\") " pod="openshift-ingress/router-default-5444994796-w8g2c"
Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.002003 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6222h"
Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.009651 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-nxzt6"]
Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.016356 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffaf6469-19dc-47f9-a762-f02109d88907-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-54n7v\" (UID: \"ffaf6469-19dc-47f9-a762-f02109d88907\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-54n7v"
Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.023927 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2fs85"
Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.042657 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd00464d-5f76-4abd-8c83-ce0821d00dfa-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-svj47\" (UID: \"bd00464d-5f76-4abd-8c83-ce0821d00dfa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-svj47"
Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.063630 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqlln\" (UniqueName: \"kubernetes.io/projected/09c3c040-a9ac-441c-a1ce-b7d67233579a-kube-api-access-qqlln\") pod \"migrator-59844c95c7-w89rp\" (UID: \"09c3c040-a9ac-441c-a1ce-b7d67233579a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w89rp"
Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.065265 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxm22\" (UniqueName: \"kubernetes.io/projected/2678a257-f356-4cad-9ad5-22c264a8810f-kube-api-access-nxm22\") pod \"dns-default-wzqgc\" (UID: \"2678a257-f356-4cad-9ad5-22c264a8810f\") " pod="openshift-dns/dns-default-wzqgc"
Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.072831 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-c7wmc"
Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.078916 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7g2dk"
Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.088579 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-2hscq"
Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.106030 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/214edff7-71c6-4f4c-b61a-5582ae5d49db-metrics-tls\") pod \"ingress-operator-5b745b69d9-jgdt5\" (UID: \"214edff7-71c6-4f4c-b61a-5582ae5d49db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jgdt5"
Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.106106 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lc2b\" (UniqueName: \"kubernetes.io/projected/214edff7-71c6-4f4c-b61a-5582ae5d49db-kube-api-access-2lc2b\") pod \"ingress-operator-5b745b69d9-jgdt5\" (UID: \"214edff7-71c6-4f4c-b61a-5582ae5d49db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jgdt5"
Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.106172 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/51587e5b-a1ef-4fa1-bdce-fd9b96859790-proxy-tls\") pod \"machine-config-controller-84d6567774-8n798\" (UID: \"51587e5b-a1ef-4fa1-bdce-fd9b96859790\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8n798"
Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.106196 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b050f320-6f26-4c79-88cc-ceb481369169-installation-pull-secrets\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw"
Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.106237 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/845499bb-3eca-40f2-8146-1c28421bb2a5-webhook-cert\") pod \"packageserver-d55dfcdfc-4254t\" (UID: \"845499bb-3eca-40f2-8146-1c28421bb2a5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4254t"
Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.106298 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw"
Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.106331 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/214edff7-71c6-4f4c-b61a-5582ae5d49db-trusted-ca\") pod \"ingress-operator-5b745b69d9-jgdt5\" (UID: \"214edff7-71c6-4f4c-b61a-5582ae5d49db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jgdt5"
Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.106422 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8gqb\" (UniqueName: \"kubernetes.io/projected/b050f320-6f26-4c79-88cc-ceb481369169-kube-api-access-q8gqb\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw"
Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.106461 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b050f320-6f26-4c79-88cc-ceb481369169-registry-certificates\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw"
Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.106667 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf769\" (UniqueName: \"kubernetes.io/projected/51587e5b-a1ef-4fa1-bdce-fd9b96859790-kube-api-access-bf769\") pod \"machine-config-controller-84d6567774-8n798\" (UID: \"51587e5b-a1ef-4fa1-bdce-fd9b96859790\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8n798"
Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.106747 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b050f320-6f26-4c79-88cc-ceb481369169-trusted-ca\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw"
Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.106772 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b050f320-6f26-4c79-88cc-ceb481369169-registry-tls\") pod \"image-registry-697d97f7c8-298fw\"
(UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.106787 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/845499bb-3eca-40f2-8146-1c28421bb2a5-apiservice-cert\") pod \"packageserver-d55dfcdfc-4254t\" (UID: \"845499bb-3eca-40f2-8146-1c28421bb2a5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4254t" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.106806 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/214edff7-71c6-4f4c-b61a-5582ae5d49db-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jgdt5\" (UID: \"214edff7-71c6-4f4c-b61a-5582ae5d49db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jgdt5" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.106830 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b050f320-6f26-4c79-88cc-ceb481369169-bound-sa-token\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.106877 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/845499bb-3eca-40f2-8146-1c28421bb2a5-tmpfs\") pod \"packageserver-d55dfcdfc-4254t\" (UID: \"845499bb-3eca-40f2-8146-1c28421bb2a5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4254t" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.106925 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51587e5b-a1ef-4fa1-bdce-fd9b96859790-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8n798\" (UID: \"51587e5b-a1ef-4fa1-bdce-fd9b96859790\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8n798" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.106958 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89b7c\" (UniqueName: \"kubernetes.io/projected/845499bb-3eca-40f2-8146-1c28421bb2a5-kube-api-access-89b7c\") pod \"packageserver-d55dfcdfc-4254t\" (UID: \"845499bb-3eca-40f2-8146-1c28421bb2a5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4254t" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.106975 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b050f320-6f26-4c79-88cc-ceb481369169-ca-trust-extracted\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:21 crc kubenswrapper[4910]: E0226 21:59:21.109246 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:21.60923169 +0000 UTC m=+246.688722231 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.121716 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vdp5g"] Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.158201 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-sh8rh"] Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.199870 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-54n7v" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.208080 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hl7fn" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.208488 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.208622 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/214edff7-71c6-4f4c-b61a-5582ae5d49db-trusted-ca\") pod \"ingress-operator-5b745b69d9-jgdt5\" (UID: \"214edff7-71c6-4f4c-b61a-5582ae5d49db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jgdt5" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.208649 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5jct\" (UniqueName: \"kubernetes.io/projected/4e1ce726-92a4-4cc3-bb03-077da188f56d-kube-api-access-k5jct\") pod \"service-ca-operator-777779d784-2cq6r\" (UID: \"4e1ce726-92a4-4cc3-bb03-077da188f56d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cq6r" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.208668 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8f49a746-d000-4ff2-b5e5-928854e1c0e1-srv-cert\") pod \"catalog-operator-68c6474976-4qcps\" (UID: \"8f49a746-d000-4ff2-b5e5-928854e1c0e1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4qcps" Feb 26 21:59:21 crc kubenswrapper[4910]: E0226 21:59:21.208707 4910 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:21.708684085 +0000 UTC m=+246.788174626 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.208807 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kqdt\" (UniqueName: \"kubernetes.io/projected/90cb6740-847f-435b-a38f-6a199cd2a41d-kube-api-access-6kqdt\") pod \"csi-hostpathplugin-8h9hc\" (UID: \"90cb6740-847f-435b-a38f-6a199cd2a41d\") " pod="hostpath-provisioner/csi-hostpathplugin-8h9hc" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.208854 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a27cb800-961e-47ff-9558-47ec81e681a2-signing-cabundle\") pod \"service-ca-9c57cc56f-q68m5\" (UID: \"a27cb800-961e-47ff-9558-47ec81e681a2\") " pod="openshift-service-ca/service-ca-9c57cc56f-q68m5" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.208887 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/90cb6740-847f-435b-a38f-6a199cd2a41d-plugins-dir\") pod \"csi-hostpathplugin-8h9hc\" (UID: \"90cb6740-847f-435b-a38f-6a199cd2a41d\") " pod="hostpath-provisioner/csi-hostpathplugin-8h9hc" Feb 26 21:59:21 
crc kubenswrapper[4910]: I0226 21:59:21.208925 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1616b423-1715-4def-8ed7-38a953361535-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-r4pg9\" (UID: \"1616b423-1715-4def-8ed7-38a953361535\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r4pg9" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.208948 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4s49\" (UniqueName: \"kubernetes.io/projected/40df2e2c-714e-4938-ba94-896961568c4b-kube-api-access-t4s49\") pod \"kube-storage-version-migrator-operator-b67b599dd-xdbcq\" (UID: \"40df2e2c-714e-4938-ba94-896961568c4b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdbcq" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.209366 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/90cb6740-847f-435b-a38f-6a199cd2a41d-registration-dir\") pod \"csi-hostpathplugin-8h9hc\" (UID: \"90cb6740-847f-435b-a38f-6a199cd2a41d\") " pod="hostpath-provisioner/csi-hostpathplugin-8h9hc" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.209417 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8gqb\" (UniqueName: \"kubernetes.io/projected/b050f320-6f26-4c79-88cc-ceb481369169-kube-api-access-q8gqb\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.209469 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b050f320-6f26-4c79-88cc-ceb481369169-registry-certificates\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.209525 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nc5k\" (UniqueName: \"kubernetes.io/projected/f2d7b80c-ea47-4fc1-a323-cab157b92d97-kube-api-access-2nc5k\") pod \"machine-config-server-vz6hr\" (UID: \"f2d7b80c-ea47-4fc1-a323-cab157b92d97\") " pod="openshift-machine-config-operator/machine-config-server-vz6hr" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.209571 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1f4259a3-f3c2-4812-a4f8-6b5f206e9e00-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lpzjm\" (UID: \"1f4259a3-f3c2-4812-a4f8-6b5f206e9e00\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lpzjm" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.209607 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e17437b5-ba61-4630-83bd-8436fcbd659f-config-volume\") pod \"collect-profiles-29535705-jc29q\" (UID: \"e17437b5-ba61-4630-83bd-8436fcbd659f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535705-jc29q" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.209679 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f2d7b80c-ea47-4fc1-a323-cab157b92d97-node-bootstrap-token\") pod \"machine-config-server-vz6hr\" (UID: \"f2d7b80c-ea47-4fc1-a323-cab157b92d97\") " 
pod="openshift-machine-config-operator/machine-config-server-vz6hr" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.209938 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e1ce726-92a4-4cc3-bb03-077da188f56d-serving-cert\") pod \"service-ca-operator-777779d784-2cq6r\" (UID: \"4e1ce726-92a4-4cc3-bb03-077da188f56d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cq6r" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.209975 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zct4x\" (UniqueName: \"kubernetes.io/projected/e17437b5-ba61-4630-83bd-8436fcbd659f-kube-api-access-zct4x\") pod \"collect-profiles-29535705-jc29q\" (UID: \"e17437b5-ba61-4630-83bd-8436fcbd659f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535705-jc29q" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.209996 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbd9e8a9-2637-4ef5-b24e-fd2d08788451-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-q2jtw\" (UID: \"dbd9e8a9-2637-4ef5-b24e-fd2d08788451\") " pod="openshift-marketplace/marketplace-operator-79b997595-q2jtw" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210026 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx4pp\" (UniqueName: \"kubernetes.io/projected/a27cb800-961e-47ff-9558-47ec81e681a2-kube-api-access-jx4pp\") pod \"service-ca-9c57cc56f-q68m5\" (UID: \"a27cb800-961e-47ff-9558-47ec81e681a2\") " pod="openshift-service-ca/service-ca-9c57cc56f-q68m5" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210071 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bf769\" (UniqueName: \"kubernetes.io/projected/51587e5b-a1ef-4fa1-bdce-fd9b96859790-kube-api-access-bf769\") pod \"machine-config-controller-84d6567774-8n798\" (UID: \"51587e5b-a1ef-4fa1-bdce-fd9b96859790\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8n798" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210088 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft88g\" (UniqueName: \"kubernetes.io/projected/e8fe4d9f-ec8c-4d29-a7e6-1534270d5d05-kube-api-access-ft88g\") pod \"auto-csr-approver-29535718-4rxms\" (UID: \"e8fe4d9f-ec8c-4d29-a7e6-1534270d5d05\") " pod="openshift-infra/auto-csr-approver-29535718-4rxms" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210105 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9twmm\" (UniqueName: \"kubernetes.io/projected/ad057bd0-51b8-4b1a-b75a-c301e80942ab-kube-api-access-9twmm\") pod \"ingress-canary-qd595\" (UID: \"ad057bd0-51b8-4b1a-b75a-c301e80942ab\") " pod="openshift-ingress-canary/ingress-canary-qd595" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210177 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvp5h\" (UniqueName: \"kubernetes.io/projected/7bb9470b-f31b-4450-9a67-68e457292e83-kube-api-access-rvp5h\") pod \"multus-admission-controller-857f4d67dd-9nkmk\" (UID: \"7bb9470b-f31b-4450-9a67-68e457292e83\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9nkmk" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210228 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/90cb6740-847f-435b-a38f-6a199cd2a41d-mountpoint-dir\") pod \"csi-hostpathplugin-8h9hc\" (UID: \"90cb6740-847f-435b-a38f-6a199cd2a41d\") " 
pod="hostpath-provisioner/csi-hostpathplugin-8h9hc" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210247 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnd9h\" (UniqueName: \"kubernetes.io/projected/8f49a746-d000-4ff2-b5e5-928854e1c0e1-kube-api-access-hnd9h\") pod \"catalog-operator-68c6474976-4qcps\" (UID: \"8f49a746-d000-4ff2-b5e5-928854e1c0e1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4qcps" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210266 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a27cb800-961e-47ff-9558-47ec81e681a2-signing-key\") pod \"service-ca-9c57cc56f-q68m5\" (UID: \"a27cb800-961e-47ff-9558-47ec81e681a2\") " pod="openshift-service-ca/service-ca-9c57cc56f-q68m5" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210335 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b050f320-6f26-4c79-88cc-ceb481369169-trusted-ca\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210356 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b050f320-6f26-4c79-88cc-ceb481369169-registry-tls\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210375 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b050f320-6f26-4c79-88cc-ceb481369169-bound-sa-token\") pod 
\"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210392 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/845499bb-3eca-40f2-8146-1c28421bb2a5-apiservice-cert\") pod \"packageserver-d55dfcdfc-4254t\" (UID: \"845499bb-3eca-40f2-8146-1c28421bb2a5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4254t" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210407 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/214edff7-71c6-4f4c-b61a-5582ae5d49db-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jgdt5\" (UID: \"214edff7-71c6-4f4c-b61a-5582ae5d49db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jgdt5" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210424 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40df2e2c-714e-4938-ba94-896961568c4b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xdbcq\" (UID: \"40df2e2c-714e-4938-ba94-896961568c4b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdbcq" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210440 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e17437b5-ba61-4630-83bd-8436fcbd659f-secret-volume\") pod \"collect-profiles-29535705-jc29q\" (UID: \"e17437b5-ba61-4630-83bd-8436fcbd659f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535705-jc29q" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210457 4910 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b050f320-6f26-4c79-88cc-ceb481369169-registry-certificates\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210486 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40df2e2c-714e-4938-ba94-896961568c4b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xdbcq\" (UID: \"40df2e2c-714e-4938-ba94-896961568c4b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdbcq" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210578 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssdb8\" (UniqueName: \"kubernetes.io/projected/dbd9e8a9-2637-4ef5-b24e-fd2d08788451-kube-api-access-ssdb8\") pod \"marketplace-operator-79b997595-q2jtw\" (UID: \"dbd9e8a9-2637-4ef5-b24e-fd2d08788451\") " pod="openshift-marketplace/marketplace-operator-79b997595-q2jtw" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210612 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/845499bb-3eca-40f2-8146-1c28421bb2a5-tmpfs\") pod \"packageserver-d55dfcdfc-4254t\" (UID: \"845499bb-3eca-40f2-8146-1c28421bb2a5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4254t" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210655 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7bb9470b-f31b-4450-9a67-68e457292e83-webhook-certs\") pod 
\"multus-admission-controller-857f4d67dd-9nkmk\" (UID: \"7bb9470b-f31b-4450-9a67-68e457292e83\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9nkmk" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210685 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/90cb6740-847f-435b-a38f-6a199cd2a41d-socket-dir\") pod \"csi-hostpathplugin-8h9hc\" (UID: \"90cb6740-847f-435b-a38f-6a199cd2a41d\") " pod="hostpath-provisioner/csi-hostpathplugin-8h9hc" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210703 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89b7c\" (UniqueName: \"kubernetes.io/projected/845499bb-3eca-40f2-8146-1c28421bb2a5-kube-api-access-89b7c\") pod \"packageserver-d55dfcdfc-4254t\" (UID: \"845499bb-3eca-40f2-8146-1c28421bb2a5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4254t" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210755 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51587e5b-a1ef-4fa1-bdce-fd9b96859790-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8n798\" (UID: \"51587e5b-a1ef-4fa1-bdce-fd9b96859790\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8n798" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210773 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvzb7\" (UniqueName: \"kubernetes.io/projected/1f4259a3-f3c2-4812-a4f8-6b5f206e9e00-kube-api-access-gvzb7\") pod \"olm-operator-6b444d44fb-lpzjm\" (UID: \"1f4259a3-f3c2-4812-a4f8-6b5f206e9e00\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lpzjm" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210820 4910 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b050f320-6f26-4c79-88cc-ceb481369169-ca-trust-extracted\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210840 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/99d66ca3-61ae-4752-9130-2cdeb226a2f0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-g8lnj\" (UID: \"99d66ca3-61ae-4752-9130-2cdeb226a2f0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g8lnj" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210858 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slm65\" (UniqueName: \"kubernetes.io/projected/1616b423-1715-4def-8ed7-38a953361535-kube-api-access-slm65\") pod \"package-server-manager-789f6589d5-r4pg9\" (UID: \"1616b423-1715-4def-8ed7-38a953361535\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r4pg9" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210875 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dbd9e8a9-2637-4ef5-b24e-fd2d08788451-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-q2jtw\" (UID: \"dbd9e8a9-2637-4ef5-b24e-fd2d08788451\") " pod="openshift-marketplace/marketplace-operator-79b997595-q2jtw" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210943 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/214edff7-71c6-4f4c-b61a-5582ae5d49db-trusted-ca\") pod 
\"ingress-operator-5b745b69d9-jgdt5\" (UID: \"214edff7-71c6-4f4c-b61a-5582ae5d49db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jgdt5" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.210950 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z76h5\" (UniqueName: \"kubernetes.io/projected/99d66ca3-61ae-4752-9130-2cdeb226a2f0-kube-api-access-z76h5\") pod \"machine-config-operator-74547568cd-g8lnj\" (UID: \"99d66ca3-61ae-4752-9130-2cdeb226a2f0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g8lnj" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.211046 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1f4259a3-f3c2-4812-a4f8-6b5f206e9e00-srv-cert\") pod \"olm-operator-6b444d44fb-lpzjm\" (UID: \"1f4259a3-f3c2-4812-a4f8-6b5f206e9e00\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lpzjm" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.211125 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/99d66ca3-61ae-4752-9130-2cdeb226a2f0-proxy-tls\") pod \"machine-config-operator-74547568cd-g8lnj\" (UID: \"99d66ca3-61ae-4752-9130-2cdeb226a2f0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g8lnj" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.211190 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/214edff7-71c6-4f4c-b61a-5582ae5d49db-metrics-tls\") pod \"ingress-operator-5b745b69d9-jgdt5\" (UID: \"214edff7-71c6-4f4c-b61a-5582ae5d49db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jgdt5" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.211225 4910 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/90cb6740-847f-435b-a38f-6a199cd2a41d-csi-data-dir\") pod \"csi-hostpathplugin-8h9hc\" (UID: \"90cb6740-847f-435b-a38f-6a199cd2a41d\") " pod="hostpath-provisioner/csi-hostpathplugin-8h9hc" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.212197 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lc2b\" (UniqueName: \"kubernetes.io/projected/214edff7-71c6-4f4c-b61a-5582ae5d49db-kube-api-access-2lc2b\") pod \"ingress-operator-5b745b69d9-jgdt5\" (UID: \"214edff7-71c6-4f4c-b61a-5582ae5d49db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jgdt5" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.212242 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b050f320-6f26-4c79-88cc-ceb481369169-installation-pull-secrets\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.212288 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/51587e5b-a1ef-4fa1-bdce-fd9b96859790-proxy-tls\") pod \"machine-config-controller-84d6567774-8n798\" (UID: \"51587e5b-a1ef-4fa1-bdce-fd9b96859790\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8n798" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.212784 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b050f320-6f26-4c79-88cc-ceb481369169-trusted-ca\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 
26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.213529 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/845499bb-3eca-40f2-8146-1c28421bb2a5-tmpfs\") pod \"packageserver-d55dfcdfc-4254t\" (UID: \"845499bb-3eca-40f2-8146-1c28421bb2a5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4254t" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.216655 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f2d7b80c-ea47-4fc1-a323-cab157b92d97-certs\") pod \"machine-config-server-vz6hr\" (UID: \"f2d7b80c-ea47-4fc1-a323-cab157b92d97\") " pod="openshift-machine-config-operator/machine-config-server-vz6hr" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.216749 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad057bd0-51b8-4b1a-b75a-c301e80942ab-cert\") pod \"ingress-canary-qd595\" (UID: \"ad057bd0-51b8-4b1a-b75a-c301e80942ab\") " pod="openshift-ingress-canary/ingress-canary-qd595" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.216771 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/845499bb-3eca-40f2-8146-1c28421bb2a5-webhook-cert\") pod \"packageserver-d55dfcdfc-4254t\" (UID: \"845499bb-3eca-40f2-8146-1c28421bb2a5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4254t" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.216800 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/99d66ca3-61ae-4752-9130-2cdeb226a2f0-images\") pod \"machine-config-operator-74547568cd-g8lnj\" (UID: \"99d66ca3-61ae-4752-9130-2cdeb226a2f0\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g8lnj" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.218034 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51587e5b-a1ef-4fa1-bdce-fd9b96859790-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8n798\" (UID: \"51587e5b-a1ef-4fa1-bdce-fd9b96859790\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8n798" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.218528 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:21 crc kubenswrapper[4910]: E0226 21:59:21.219024 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:21.719012331 +0000 UTC m=+246.798502872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.219815 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b050f320-6f26-4c79-88cc-ceb481369169-ca-trust-extracted\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.221084 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8f49a746-d000-4ff2-b5e5-928854e1c0e1-profile-collector-cert\") pod \"catalog-operator-68c6474976-4qcps\" (UID: \"8f49a746-d000-4ff2-b5e5-928854e1c0e1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4qcps" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.221141 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e1ce726-92a4-4cc3-bb03-077da188f56d-config\") pod \"service-ca-operator-777779d784-2cq6r\" (UID: \"4e1ce726-92a4-4cc3-bb03-077da188f56d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cq6r" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.223407 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/214edff7-71c6-4f4c-b61a-5582ae5d49db-metrics-tls\") pod 
\"ingress-operator-5b745b69d9-jgdt5\" (UID: \"214edff7-71c6-4f4c-b61a-5582ae5d49db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jgdt5" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.225559 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/845499bb-3eca-40f2-8146-1c28421bb2a5-webhook-cert\") pod \"packageserver-d55dfcdfc-4254t\" (UID: \"845499bb-3eca-40f2-8146-1c28421bb2a5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4254t" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.226335 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b050f320-6f26-4c79-88cc-ceb481369169-registry-tls\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.227454 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w89rp" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.228048 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/845499bb-3eca-40f2-8146-1c28421bb2a5-apiservice-cert\") pod \"packageserver-d55dfcdfc-4254t\" (UID: \"845499bb-3eca-40f2-8146-1c28421bb2a5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4254t" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.228311 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/51587e5b-a1ef-4fa1-bdce-fd9b96859790-proxy-tls\") pod \"machine-config-controller-84d6567774-8n798\" (UID: \"51587e5b-a1ef-4fa1-bdce-fd9b96859790\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8n798" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.230943 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b050f320-6f26-4c79-88cc-ceb481369169-installation-pull-secrets\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.235848 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-wzqgc" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.241714 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-kj9s2"] Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.247691 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8gqb\" (UniqueName: \"kubernetes.io/projected/b050f320-6f26-4c79-88cc-ceb481369169-kube-api-access-q8gqb\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.257315 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-svj47" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.262859 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89b7c\" (UniqueName: \"kubernetes.io/projected/845499bb-3eca-40f2-8146-1c28421bb2a5-kube-api-access-89b7c\") pod \"packageserver-d55dfcdfc-4254t\" (UID: \"845499bb-3eca-40f2-8146-1c28421bb2a5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4254t" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.263062 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kn2dp" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.277023 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-w8g2c" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.296510 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf769\" (UniqueName: \"kubernetes.io/projected/51587e5b-a1ef-4fa1-bdce-fd9b96859790-kube-api-access-bf769\") pod \"machine-config-controller-84d6567774-8n798\" (UID: \"51587e5b-a1ef-4fa1-bdce-fd9b96859790\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8n798" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.323627 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.323760 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/99d66ca3-61ae-4752-9130-2cdeb226a2f0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-g8lnj\" (UID: \"99d66ca3-61ae-4752-9130-2cdeb226a2f0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g8lnj" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.323784 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slm65\" (UniqueName: \"kubernetes.io/projected/1616b423-1715-4def-8ed7-38a953361535-kube-api-access-slm65\") pod \"package-server-manager-789f6589d5-r4pg9\" (UID: \"1616b423-1715-4def-8ed7-38a953361535\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r4pg9" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.323802 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" 
(UniqueName: \"kubernetes.io/secret/dbd9e8a9-2637-4ef5-b24e-fd2d08788451-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-q2jtw\" (UID: \"dbd9e8a9-2637-4ef5-b24e-fd2d08788451\") " pod="openshift-marketplace/marketplace-operator-79b997595-q2jtw" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.323818 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z76h5\" (UniqueName: \"kubernetes.io/projected/99d66ca3-61ae-4752-9130-2cdeb226a2f0-kube-api-access-z76h5\") pod \"machine-config-operator-74547568cd-g8lnj\" (UID: \"99d66ca3-61ae-4752-9130-2cdeb226a2f0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g8lnj" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.323836 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1f4259a3-f3c2-4812-a4f8-6b5f206e9e00-srv-cert\") pod \"olm-operator-6b444d44fb-lpzjm\" (UID: \"1f4259a3-f3c2-4812-a4f8-6b5f206e9e00\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lpzjm" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.323852 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/99d66ca3-61ae-4752-9130-2cdeb226a2f0-proxy-tls\") pod \"machine-config-operator-74547568cd-g8lnj\" (UID: \"99d66ca3-61ae-4752-9130-2cdeb226a2f0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g8lnj" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.323869 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/90cb6740-847f-435b-a38f-6a199cd2a41d-csi-data-dir\") pod \"csi-hostpathplugin-8h9hc\" (UID: \"90cb6740-847f-435b-a38f-6a199cd2a41d\") " pod="hostpath-provisioner/csi-hostpathplugin-8h9hc" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.323895 4910 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f2d7b80c-ea47-4fc1-a323-cab157b92d97-certs\") pod \"machine-config-server-vz6hr\" (UID: \"f2d7b80c-ea47-4fc1-a323-cab157b92d97\") " pod="openshift-machine-config-operator/machine-config-server-vz6hr" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.323912 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad057bd0-51b8-4b1a-b75a-c301e80942ab-cert\") pod \"ingress-canary-qd595\" (UID: \"ad057bd0-51b8-4b1a-b75a-c301e80942ab\") " pod="openshift-ingress-canary/ingress-canary-qd595" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.323930 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/99d66ca3-61ae-4752-9130-2cdeb226a2f0-images\") pod \"machine-config-operator-74547568cd-g8lnj\" (UID: \"99d66ca3-61ae-4752-9130-2cdeb226a2f0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g8lnj" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.323956 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8f49a746-d000-4ff2-b5e5-928854e1c0e1-profile-collector-cert\") pod \"catalog-operator-68c6474976-4qcps\" (UID: \"8f49a746-d000-4ff2-b5e5-928854e1c0e1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4qcps" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.323972 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e1ce726-92a4-4cc3-bb03-077da188f56d-config\") pod \"service-ca-operator-777779d784-2cq6r\" (UID: \"4e1ce726-92a4-4cc3-bb03-077da188f56d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cq6r" Feb 26 21:59:21 crc kubenswrapper[4910]: 
I0226 21:59:21.323993 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5jct\" (UniqueName: \"kubernetes.io/projected/4e1ce726-92a4-4cc3-bb03-077da188f56d-kube-api-access-k5jct\") pod \"service-ca-operator-777779d784-2cq6r\" (UID: \"4e1ce726-92a4-4cc3-bb03-077da188f56d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cq6r" Feb 26 21:59:21 crc kubenswrapper[4910]: E0226 21:59:21.324024 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:21.823997834 +0000 UTC m=+246.903488375 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.324084 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8f49a746-d000-4ff2-b5e5-928854e1c0e1-srv-cert\") pod \"catalog-operator-68c6474976-4qcps\" (UID: \"8f49a746-d000-4ff2-b5e5-928854e1c0e1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4qcps" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.324178 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kqdt\" (UniqueName: \"kubernetes.io/projected/90cb6740-847f-435b-a38f-6a199cd2a41d-kube-api-access-6kqdt\") pod \"csi-hostpathplugin-8h9hc\" (UID: 
\"90cb6740-847f-435b-a38f-6a199cd2a41d\") " pod="hostpath-provisioner/csi-hostpathplugin-8h9hc" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.324202 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/90cb6740-847f-435b-a38f-6a199cd2a41d-plugins-dir\") pod \"csi-hostpathplugin-8h9hc\" (UID: \"90cb6740-847f-435b-a38f-6a199cd2a41d\") " pod="hostpath-provisioner/csi-hostpathplugin-8h9hc" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.324219 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a27cb800-961e-47ff-9558-47ec81e681a2-signing-cabundle\") pod \"service-ca-9c57cc56f-q68m5\" (UID: \"a27cb800-961e-47ff-9558-47ec81e681a2\") " pod="openshift-service-ca/service-ca-9c57cc56f-q68m5" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.324258 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1616b423-1715-4def-8ed7-38a953361535-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-r4pg9\" (UID: \"1616b423-1715-4def-8ed7-38a953361535\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r4pg9" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.324279 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/90cb6740-847f-435b-a38f-6a199cd2a41d-registration-dir\") pod \"csi-hostpathplugin-8h9hc\" (UID: \"90cb6740-847f-435b-a38f-6a199cd2a41d\") " pod="hostpath-provisioner/csi-hostpathplugin-8h9hc" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.324299 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4s49\" (UniqueName: 
\"kubernetes.io/projected/40df2e2c-714e-4938-ba94-896961568c4b-kube-api-access-t4s49\") pod \"kube-storage-version-migrator-operator-b67b599dd-xdbcq\" (UID: \"40df2e2c-714e-4938-ba94-896961568c4b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdbcq" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.324353 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nc5k\" (UniqueName: \"kubernetes.io/projected/f2d7b80c-ea47-4fc1-a323-cab157b92d97-kube-api-access-2nc5k\") pod \"machine-config-server-vz6hr\" (UID: \"f2d7b80c-ea47-4fc1-a323-cab157b92d97\") " pod="openshift-machine-config-operator/machine-config-server-vz6hr" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.324360 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/99d66ca3-61ae-4752-9130-2cdeb226a2f0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-g8lnj\" (UID: \"99d66ca3-61ae-4752-9130-2cdeb226a2f0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g8lnj" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.324376 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1f4259a3-f3c2-4812-a4f8-6b5f206e9e00-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lpzjm\" (UID: \"1f4259a3-f3c2-4812-a4f8-6b5f206e9e00\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lpzjm" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.324427 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e17437b5-ba61-4630-83bd-8436fcbd659f-config-volume\") pod \"collect-profiles-29535705-jc29q\" (UID: \"e17437b5-ba61-4630-83bd-8436fcbd659f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29535705-jc29q" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.324450 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f2d7b80c-ea47-4fc1-a323-cab157b92d97-node-bootstrap-token\") pod \"machine-config-server-vz6hr\" (UID: \"f2d7b80c-ea47-4fc1-a323-cab157b92d97\") " pod="openshift-machine-config-operator/machine-config-server-vz6hr" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.324516 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e1ce726-92a4-4cc3-bb03-077da188f56d-serving-cert\") pod \"service-ca-operator-777779d784-2cq6r\" (UID: \"4e1ce726-92a4-4cc3-bb03-077da188f56d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cq6r" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.324544 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zct4x\" (UniqueName: \"kubernetes.io/projected/e17437b5-ba61-4630-83bd-8436fcbd659f-kube-api-access-zct4x\") pod \"collect-profiles-29535705-jc29q\" (UID: \"e17437b5-ba61-4630-83bd-8436fcbd659f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535705-jc29q" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.324561 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbd9e8a9-2637-4ef5-b24e-fd2d08788451-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-q2jtw\" (UID: \"dbd9e8a9-2637-4ef5-b24e-fd2d08788451\") " pod="openshift-marketplace/marketplace-operator-79b997595-q2jtw" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.324578 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx4pp\" (UniqueName: 
\"kubernetes.io/projected/a27cb800-961e-47ff-9558-47ec81e681a2-kube-api-access-jx4pp\") pod \"service-ca-9c57cc56f-q68m5\" (UID: \"a27cb800-961e-47ff-9558-47ec81e681a2\") " pod="openshift-service-ca/service-ca-9c57cc56f-q68m5" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.324596 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft88g\" (UniqueName: \"kubernetes.io/projected/e8fe4d9f-ec8c-4d29-a7e6-1534270d5d05-kube-api-access-ft88g\") pod \"auto-csr-approver-29535718-4rxms\" (UID: \"e8fe4d9f-ec8c-4d29-a7e6-1534270d5d05\") " pod="openshift-infra/auto-csr-approver-29535718-4rxms" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.324614 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9twmm\" (UniqueName: \"kubernetes.io/projected/ad057bd0-51b8-4b1a-b75a-c301e80942ab-kube-api-access-9twmm\") pod \"ingress-canary-qd595\" (UID: \"ad057bd0-51b8-4b1a-b75a-c301e80942ab\") " pod="openshift-ingress-canary/ingress-canary-qd595" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.324653 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvp5h\" (UniqueName: \"kubernetes.io/projected/7bb9470b-f31b-4450-9a67-68e457292e83-kube-api-access-rvp5h\") pod \"multus-admission-controller-857f4d67dd-9nkmk\" (UID: \"7bb9470b-f31b-4450-9a67-68e457292e83\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9nkmk" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.324669 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/90cb6740-847f-435b-a38f-6a199cd2a41d-mountpoint-dir\") pod \"csi-hostpathplugin-8h9hc\" (UID: \"90cb6740-847f-435b-a38f-6a199cd2a41d\") " pod="hostpath-provisioner/csi-hostpathplugin-8h9hc" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.324693 4910 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hnd9h\" (UniqueName: \"kubernetes.io/projected/8f49a746-d000-4ff2-b5e5-928854e1c0e1-kube-api-access-hnd9h\") pod \"catalog-operator-68c6474976-4qcps\" (UID: \"8f49a746-d000-4ff2-b5e5-928854e1c0e1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4qcps" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.324710 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a27cb800-961e-47ff-9558-47ec81e681a2-signing-key\") pod \"service-ca-9c57cc56f-q68m5\" (UID: \"a27cb800-961e-47ff-9558-47ec81e681a2\") " pod="openshift-service-ca/service-ca-9c57cc56f-q68m5" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.324732 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40df2e2c-714e-4938-ba94-896961568c4b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xdbcq\" (UID: \"40df2e2c-714e-4938-ba94-896961568c4b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdbcq" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.324748 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e17437b5-ba61-4630-83bd-8436fcbd659f-secret-volume\") pod \"collect-profiles-29535705-jc29q\" (UID: \"e17437b5-ba61-4630-83bd-8436fcbd659f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535705-jc29q" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.324776 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40df2e2c-714e-4938-ba94-896961568c4b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xdbcq\" (UID: \"40df2e2c-714e-4938-ba94-896961568c4b\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdbcq" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.324797 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssdb8\" (UniqueName: \"kubernetes.io/projected/dbd9e8a9-2637-4ef5-b24e-fd2d08788451-kube-api-access-ssdb8\") pod \"marketplace-operator-79b997595-q2jtw\" (UID: \"dbd9e8a9-2637-4ef5-b24e-fd2d08788451\") " pod="openshift-marketplace/marketplace-operator-79b997595-q2jtw" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.324815 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7bb9470b-f31b-4450-9a67-68e457292e83-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9nkmk\" (UID: \"7bb9470b-f31b-4450-9a67-68e457292e83\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9nkmk" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.324833 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/90cb6740-847f-435b-a38f-6a199cd2a41d-socket-dir\") pod \"csi-hostpathplugin-8h9hc\" (UID: \"90cb6740-847f-435b-a38f-6a199cd2a41d\") " pod="hostpath-provisioner/csi-hostpathplugin-8h9hc" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.324848 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvzb7\" (UniqueName: \"kubernetes.io/projected/1f4259a3-f3c2-4812-a4f8-6b5f206e9e00-kube-api-access-gvzb7\") pod \"olm-operator-6b444d44fb-lpzjm\" (UID: \"1f4259a3-f3c2-4812-a4f8-6b5f206e9e00\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lpzjm" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.326488 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/dbd9e8a9-2637-4ef5-b24e-fd2d08788451-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-q2jtw\" (UID: \"dbd9e8a9-2637-4ef5-b24e-fd2d08788451\") " pod="openshift-marketplace/marketplace-operator-79b997595-q2jtw" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.326492 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/99d66ca3-61ae-4752-9130-2cdeb226a2f0-images\") pod \"machine-config-operator-74547568cd-g8lnj\" (UID: \"99d66ca3-61ae-4752-9130-2cdeb226a2f0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g8lnj" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.326593 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/90cb6740-847f-435b-a38f-6a199cd2a41d-mountpoint-dir\") pod \"csi-hostpathplugin-8h9hc\" (UID: \"90cb6740-847f-435b-a38f-6a199cd2a41d\") " pod="hostpath-provisioner/csi-hostpathplugin-8h9hc" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.326743 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/90cb6740-847f-435b-a38f-6a199cd2a41d-plugins-dir\") pod \"csi-hostpathplugin-8h9hc\" (UID: \"90cb6740-847f-435b-a38f-6a199cd2a41d\") " pod="hostpath-provisioner/csi-hostpathplugin-8h9hc" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.327351 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a27cb800-961e-47ff-9558-47ec81e681a2-signing-cabundle\") pod \"service-ca-9c57cc56f-q68m5\" (UID: \"a27cb800-961e-47ff-9558-47ec81e681a2\") " pod="openshift-service-ca/service-ca-9c57cc56f-q68m5" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.327531 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/b050f320-6f26-4c79-88cc-ceb481369169-bound-sa-token\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.327942 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e17437b5-ba61-4630-83bd-8436fcbd659f-config-volume\") pod \"collect-profiles-29535705-jc29q\" (UID: \"e17437b5-ba61-4630-83bd-8436fcbd659f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535705-jc29q" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.328100 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e1ce726-92a4-4cc3-bb03-077da188f56d-config\") pod \"service-ca-operator-777779d784-2cq6r\" (UID: \"4e1ce726-92a4-4cc3-bb03-077da188f56d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cq6r" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.329076 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/90cb6740-847f-435b-a38f-6a199cd2a41d-csi-data-dir\") pod \"csi-hostpathplugin-8h9hc\" (UID: \"90cb6740-847f-435b-a38f-6a199cd2a41d\") " pod="hostpath-provisioner/csi-hostpathplugin-8h9hc" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.329252 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/90cb6740-847f-435b-a38f-6a199cd2a41d-registration-dir\") pod \"csi-hostpathplugin-8h9hc\" (UID: \"90cb6740-847f-435b-a38f-6a199cd2a41d\") " pod="hostpath-provisioner/csi-hostpathplugin-8h9hc" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.329933 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/90cb6740-847f-435b-a38f-6a199cd2a41d-socket-dir\") pod \"csi-hostpathplugin-8h9hc\" (UID: \"90cb6740-847f-435b-a38f-6a199cd2a41d\") " pod="hostpath-provisioner/csi-hostpathplugin-8h9hc" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.330849 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40df2e2c-714e-4938-ba94-896961568c4b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xdbcq\" (UID: \"40df2e2c-714e-4938-ba94-896961568c4b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdbcq" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.347409 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dbd9e8a9-2637-4ef5-b24e-fd2d08788451-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-q2jtw\" (UID: \"dbd9e8a9-2637-4ef5-b24e-fd2d08788451\") " pod="openshift-marketplace/marketplace-operator-79b997595-q2jtw" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.348011 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1f4259a3-f3c2-4812-a4f8-6b5f206e9e00-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lpzjm\" (UID: \"1f4259a3-f3c2-4812-a4f8-6b5f206e9e00\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lpzjm" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.353111 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f2d7b80c-ea47-4fc1-a323-cab157b92d97-certs\") pod \"machine-config-server-vz6hr\" (UID: \"f2d7b80c-ea47-4fc1-a323-cab157b92d97\") " pod="openshift-machine-config-operator/machine-config-server-vz6hr" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.360461 4910 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e1ce726-92a4-4cc3-bb03-077da188f56d-serving-cert\") pod \"service-ca-operator-777779d784-2cq6r\" (UID: \"4e1ce726-92a4-4cc3-bb03-077da188f56d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cq6r" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.360669 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e17437b5-ba61-4630-83bd-8436fcbd659f-secret-volume\") pod \"collect-profiles-29535705-jc29q\" (UID: \"e17437b5-ba61-4630-83bd-8436fcbd659f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535705-jc29q" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.360659 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40df2e2c-714e-4938-ba94-896961568c4b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xdbcq\" (UID: \"40df2e2c-714e-4938-ba94-896961568c4b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdbcq" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.360712 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1f4259a3-f3c2-4812-a4f8-6b5f206e9e00-srv-cert\") pod \"olm-operator-6b444d44fb-lpzjm\" (UID: \"1f4259a3-f3c2-4812-a4f8-6b5f206e9e00\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lpzjm" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.360949 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1616b423-1715-4def-8ed7-38a953361535-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-r4pg9\" (UID: \"1616b423-1715-4def-8ed7-38a953361535\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r4pg9" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.361222 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/99d66ca3-61ae-4752-9130-2cdeb226a2f0-proxy-tls\") pod \"machine-config-operator-74547568cd-g8lnj\" (UID: \"99d66ca3-61ae-4752-9130-2cdeb226a2f0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g8lnj" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.361344 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/214edff7-71c6-4f4c-b61a-5582ae5d49db-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jgdt5\" (UID: \"214edff7-71c6-4f4c-b61a-5582ae5d49db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jgdt5" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.364595 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7bb9470b-f31b-4450-9a67-68e457292e83-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9nkmk\" (UID: \"7bb9470b-f31b-4450-9a67-68e457292e83\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9nkmk" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.364702 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8f49a746-d000-4ff2-b5e5-928854e1c0e1-profile-collector-cert\") pod \"catalog-operator-68c6474976-4qcps\" (UID: \"8f49a746-d000-4ff2-b5e5-928854e1c0e1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4qcps" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.364812 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad057bd0-51b8-4b1a-b75a-c301e80942ab-cert\") pod 
\"ingress-canary-qd595\" (UID: \"ad057bd0-51b8-4b1a-b75a-c301e80942ab\") " pod="openshift-ingress-canary/ingress-canary-qd595" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.365153 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a27cb800-961e-47ff-9558-47ec81e681a2-signing-key\") pod \"service-ca-9c57cc56f-q68m5\" (UID: \"a27cb800-961e-47ff-9558-47ec81e681a2\") " pod="openshift-service-ca/service-ca-9c57cc56f-q68m5" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.366258 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8f49a746-d000-4ff2-b5e5-928854e1c0e1-srv-cert\") pod \"catalog-operator-68c6474976-4qcps\" (UID: \"8f49a746-d000-4ff2-b5e5-928854e1c0e1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4qcps" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.372999 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f2d7b80c-ea47-4fc1-a323-cab157b92d97-node-bootstrap-token\") pod \"machine-config-server-vz6hr\" (UID: \"f2d7b80c-ea47-4fc1-a323-cab157b92d97\") " pod="openshift-machine-config-operator/machine-config-server-vz6hr" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.384915 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7pl8w"] Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.388357 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5jct\" (UniqueName: \"kubernetes.io/projected/4e1ce726-92a4-4cc3-bb03-077da188f56d-kube-api-access-k5jct\") pod \"service-ca-operator-777779d784-2cq6r\" (UID: \"4e1ce726-92a4-4cc3-bb03-077da188f56d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cq6r" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 
21:59:21.388998 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lc2b\" (UniqueName: \"kubernetes.io/projected/214edff7-71c6-4f4c-b61a-5582ae5d49db-kube-api-access-2lc2b\") pod \"ingress-operator-5b745b69d9-jgdt5\" (UID: \"214edff7-71c6-4f4c-b61a-5582ae5d49db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jgdt5" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.394678 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dgm55"] Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.425999 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:21 crc kubenswrapper[4910]: E0226 21:59:21.426514 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:21.926501157 +0000 UTC m=+247.005991698 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.426923 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jgdt5" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.431487 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slm65\" (UniqueName: \"kubernetes.io/projected/1616b423-1715-4def-8ed7-38a953361535-kube-api-access-slm65\") pod \"package-server-manager-789f6589d5-r4pg9\" (UID: \"1616b423-1715-4def-8ed7-38a953361535\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r4pg9" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.451992 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9twmm\" (UniqueName: \"kubernetes.io/projected/ad057bd0-51b8-4b1a-b75a-c301e80942ab-kube-api-access-9twmm\") pod \"ingress-canary-qd595\" (UID: \"ad057bd0-51b8-4b1a-b75a-c301e80942ab\") " pod="openshift-ingress-canary/ingress-canary-qd595" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.476693 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx4pp\" (UniqueName: \"kubernetes.io/projected/a27cb800-961e-47ff-9558-47ec81e681a2-kube-api-access-jx4pp\") pod \"service-ca-9c57cc56f-q68m5\" (UID: \"a27cb800-961e-47ff-9558-47ec81e681a2\") " pod="openshift-service-ca/service-ca-9c57cc56f-q68m5" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.478332 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvzb7\" (UniqueName: \"kubernetes.io/projected/1f4259a3-f3c2-4812-a4f8-6b5f206e9e00-kube-api-access-gvzb7\") pod \"olm-operator-6b444d44fb-lpzjm\" (UID: \"1f4259a3-f3c2-4812-a4f8-6b5f206e9e00\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lpzjm" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.478502 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qd595" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.505629 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvp5h\" (UniqueName: \"kubernetes.io/projected/7bb9470b-f31b-4450-9a67-68e457292e83-kube-api-access-rvp5h\") pod \"multus-admission-controller-857f4d67dd-9nkmk\" (UID: \"7bb9470b-f31b-4450-9a67-68e457292e83\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9nkmk" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.508687 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft88g\" (UniqueName: \"kubernetes.io/projected/e8fe4d9f-ec8c-4d29-a7e6-1534270d5d05-kube-api-access-ft88g\") pod \"auto-csr-approver-29535718-4rxms\" (UID: \"e8fe4d9f-ec8c-4d29-a7e6-1534270d5d05\") " pod="openshift-infra/auto-csr-approver-29535718-4rxms" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.517268 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4254t" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.529533 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:21 crc kubenswrapper[4910]: E0226 21:59:21.529933 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:22.029918355 +0000 UTC m=+247.109408896 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:21 crc kubenswrapper[4910]: W0226 21:59:21.530590 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e5e40d7_f505_4b8d_ac40_9677a7ebe781.slice/crio-3cdb44c94b79a708e39220cd90f78d56bceefb9ce90ba7be17eb788b22abcb4b WatchSource:0}: Error finding container 3cdb44c94b79a708e39220cd90f78d56bceefb9ce90ba7be17eb788b22abcb4b: Status 404 returned error can't find the container with id 3cdb44c94b79a708e39220cd90f78d56bceefb9ce90ba7be17eb788b22abcb4b Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.532908 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kqdt\" (UniqueName: \"kubernetes.io/projected/90cb6740-847f-435b-a38f-6a199cd2a41d-kube-api-access-6kqdt\") pod \"csi-hostpathplugin-8h9hc\" (UID: \"90cb6740-847f-435b-a38f-6a199cd2a41d\") " pod="hostpath-provisioner/csi-hostpathplugin-8h9hc" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.548341 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8n798" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.554058 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnd9h\" (UniqueName: \"kubernetes.io/projected/8f49a746-d000-4ff2-b5e5-928854e1c0e1-kube-api-access-hnd9h\") pod \"catalog-operator-68c6474976-4qcps\" (UID: \"8f49a746-d000-4ff2-b5e5-928854e1c0e1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4qcps" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.571868 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kj9s2" event={"ID":"acb5ada5-3567-4f1c-9130-1e78f3e88975","Type":"ContainerStarted","Data":"a988eb6a453f5fac8bb35b26b4360f3ba40aea0014cdba9f451eee44b2bff031"} Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.598965 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cq6r" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.600147 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-48hc6"] Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.600824 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z76h5\" (UniqueName: \"kubernetes.io/projected/99d66ca3-61ae-4752-9130-2cdeb226a2f0-kube-api-access-z76h5\") pod \"machine-config-operator-74547568cd-g8lnj\" (UID: \"99d66ca3-61ae-4752-9130-2cdeb226a2f0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g8lnj" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.606824 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-kz5fx" 
event={"ID":"1db40f1b-b714-4920-8b27-f350b3dd2978","Type":"ContainerStarted","Data":"30952eee981e461b407bf88f0a62e1f605b0f14e345fd1402301d26f49678671"} Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.631997 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.632124 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zct4x\" (UniqueName: \"kubernetes.io/projected/e17437b5-ba61-4630-83bd-8436fcbd659f-kube-api-access-zct4x\") pod \"collect-profiles-29535705-jc29q\" (UID: \"e17437b5-ba61-4630-83bd-8436fcbd659f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535705-jc29q" Feb 26 21:59:21 crc kubenswrapper[4910]: E0226 21:59:21.632330 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:22.132316804 +0000 UTC m=+247.211807395 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.632748 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535718-4rxms" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.635002 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9nkmk" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.635347 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" event={"ID":"0b3633c0-54b9-486c-a14b-99b6e5c04765","Type":"ContainerStarted","Data":"00aef2f644930da530bf887eee56cf21dd9b85724bd6b95272478579d841e225"} Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.642185 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2"] Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.642705 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6222h" event={"ID":"a46ac2a0-b244-4742-ade1-cd57ce2e87d5","Type":"ContainerStarted","Data":"455713bcd617a89af9765f9a4ef542ccbef523375cfbe9f26a3b94defac3b840"} Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.642899 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r4pg9" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.646382 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-vdp5g" event={"ID":"79b92949-b7ec-4d5c-a27e-259972a4a4dd","Type":"ContainerStarted","Data":"733232fadc7438820250240834373877831edf9942257129592d8cbc0db2b405"} Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.649569 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssdb8\" (UniqueName: \"kubernetes.io/projected/dbd9e8a9-2637-4ef5-b24e-fd2d08788451-kube-api-access-ssdb8\") pod \"marketplace-operator-79b997595-q2jtw\" (UID: \"dbd9e8a9-2637-4ef5-b24e-fd2d08788451\") " pod="openshift-marketplace/marketplace-operator-79b997595-q2jtw" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.652281 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-nxzt6" event={"ID":"1bb89394-7073-4408-a891-f4a6eb44eaa7","Type":"ContainerStarted","Data":"33c5c3c47d0d1c036fbb162f7305b39ddc648cbc7b94619ac9873bbf4e040c12"} Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.652317 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-nxzt6" event={"ID":"1bb89394-7073-4408-a891-f4a6eb44eaa7","Type":"ContainerStarted","Data":"d0c95881953018fd5c715e0c86f7be631f5f78718f40d13665d3b8ee426917a1"} Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.653973 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-sh8rh" event={"ID":"651f9cbc-e905-462d-b42f-84d2a642169d","Type":"ContainerStarted","Data":"2dca74e8a95e86c76dd3d22df259019cc18b45370c6ecc216d91421de1615364"} Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.661046 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535705-jc29q" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.654962 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4s49\" (UniqueName: \"kubernetes.io/projected/40df2e2c-714e-4938-ba94-896961568c4b-kube-api-access-t4s49\") pod \"kube-storage-version-migrator-operator-b67b599dd-xdbcq\" (UID: \"40df2e2c-714e-4938-ba94-896961568c4b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdbcq" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.666053 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nc5k\" (UniqueName: \"kubernetes.io/projected/f2d7b80c-ea47-4fc1-a323-cab157b92d97-kube-api-access-2nc5k\") pod \"machine-config-server-vz6hr\" (UID: \"f2d7b80c-ea47-4fc1-a323-cab157b92d97\") " pod="openshift-machine-config-operator/machine-config-server-vz6hr" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.670500 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhr7j"] Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.671187 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-q68m5" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.671880 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-bw8qw" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.691397 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4qcps" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.699223 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2fs85"] Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.699537 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lpzjm" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.701812 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-q2jtw" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.733515 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g8lnj" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.734582 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:21 crc kubenswrapper[4910]: E0226 21:59:21.735606 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:22.235591599 +0000 UTC m=+247.315082130 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.743338 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8h9hc" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.749123 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2hscq"] Feb 26 21:59:21 crc kubenswrapper[4910]: W0226 21:59:21.751299 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63954f23_8000_4ada_8d5d_67297b7c26f6.slice/crio-cf9d868c1317197717720d82a88add65c9f7966a288630fdb7f8fdcbb550e1f4 WatchSource:0}: Error finding container cf9d868c1317197717720d82a88add65c9f7966a288630fdb7f8fdcbb550e1f4: Status 404 returned error can't find the container with id cf9d868c1317197717720d82a88add65c9f7966a288630fdb7f8fdcbb550e1f4 Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.762580 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-vz6hr" Feb 26 21:59:21 crc kubenswrapper[4910]: W0226 21:59:21.815585 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb80ebfa_1dd8_40e6_9d5e_27311836ccfb.slice/crio-bf0a3bb5414903fa899ce6e527e8d17918feefbfd8e3a96624efa8d20ce64ef7 WatchSource:0}: Error finding container bf0a3bb5414903fa899ce6e527e8d17918feefbfd8e3a96624efa8d20ce64ef7: Status 404 returned error can't find the container with id bf0a3bb5414903fa899ce6e527e8d17918feefbfd8e3a96624efa8d20ce64ef7 Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.836746 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:21 crc kubenswrapper[4910]: E0226 21:59:21.837135 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:22.337118524 +0000 UTC m=+247.416609065 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.870625 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5"] Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.893015 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdbcq" Feb 26 21:59:21 crc kubenswrapper[4910]: I0226 21:59:21.937769 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:21 crc kubenswrapper[4910]: E0226 21:59:21.938031 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:22.43801781 +0000 UTC m=+247.517508351 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.025983 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-c7wmc"] Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.039610 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:22 crc kubenswrapper[4910]: E0226 21:59:22.040176 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:22.540151392 +0000 UTC m=+247.619641933 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.056882 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7g2dk"] Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.106996 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-w89rp"] Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.123469 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-54n7v"] Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.140677 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:22 crc kubenswrapper[4910]: E0226 21:59:22.141114 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:22.64109847 +0000 UTC m=+247.720589001 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.197672 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hl7fn"] Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.241761 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:22 crc kubenswrapper[4910]: E0226 21:59:22.242015 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:22.742004786 +0000 UTC m=+247.821495327 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.344501 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:22 crc kubenswrapper[4910]: E0226 21:59:22.345068 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:22.845052964 +0000 UTC m=+247.924543505 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.362489 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kn2dp"] Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.446096 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:22 crc kubenswrapper[4910]: E0226 21:59:22.459703 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:22.959683054 +0000 UTC m=+248.039173595 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.562305 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:22 crc kubenswrapper[4910]: E0226 21:59:22.562706 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:23.062679901 +0000 UTC m=+248.142170442 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.601039 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wzqgc"] Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.628256 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-bw8qw" podStartSLOduration=177.628241633 podStartE2EDuration="2m57.628241633s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:22.627627315 +0000 UTC m=+247.707117856" watchObservedRunningTime="2026-02-26 21:59:22.628241633 +0000 UTC m=+247.707732174" Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.653783 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jgdt5"] Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.663286 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:22 crc kubenswrapper[4910]: E0226 21:59:22.663611 4910 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:23.163599998 +0000 UTC m=+248.243090539 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.677881 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-c7wmc" event={"ID":"17c9e844-630f-46d5-a08c-94a6d0b56404","Type":"ContainerStarted","Data":"b6e74214cb16a562b685d427e550708fccb67b4b436d7bfe6f5509512af8da4d"} Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.680709 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hl7fn" event={"ID":"9321ff73-5107-4139-ad6f-622b13de5cd1","Type":"ContainerStarted","Data":"47d0eeac4e89adf9ff5e0066dbd8a4c39842ce210bf573de499fdbc2c32d0d97"} Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.693710 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-kz5fx" event={"ID":"1db40f1b-b714-4920-8b27-f350b3dd2978","Type":"ContainerStarted","Data":"a4dd97c5e3d91d1472503138bfc8f6c78af6a7b6ec90087bf66091422fddf6f8"} Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.696515 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2hscq" 
event={"ID":"7843f81a-d6bd-463f-b5b7-454e3f943ed8","Type":"ContainerStarted","Data":"7d2e9749388fe5a171a04b4e6156078ca4ec1091ea16bddbddb48433df483e8c"} Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.716011 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-vz6hr" event={"ID":"f2d7b80c-ea47-4fc1-a323-cab157b92d97","Type":"ContainerStarted","Data":"d6a974418d467565e9be34d688cc9fcfd890cadc7cd83bb1d8b49e281644b71f"} Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.755453 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kj9s2" event={"ID":"acb5ada5-3567-4f1c-9130-1e78f3e88975","Type":"ContainerStarted","Data":"71b687959bba5697de0daf745020537dbaf93a38d6bd8d5c1ec80167d9411273"} Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.764587 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:22 crc kubenswrapper[4910]: E0226 21:59:22.766368 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:23.266347387 +0000 UTC m=+248.345837938 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.772860 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-vdp5g" event={"ID":"79b92949-b7ec-4d5c-a27e-259972a4a4dd","Type":"ContainerStarted","Data":"d9d38b7027075bb39725157cc948d2570b0892b75671aad81b216ee6621edb28"} Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.775623 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-nxzt6" event={"ID":"1bb89394-7073-4408-a891-f4a6eb44eaa7","Type":"ContainerStarted","Data":"5bdde43b42dcff1c2add0343b38bcb8bcf2cc208fafce05534a89d86cfdb86da"} Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.788762 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7mg2" podStartSLOduration=178.7887437 podStartE2EDuration="2m58.7887437s" podCreationTimestamp="2026-02-26 21:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:22.783000155 +0000 UTC m=+247.862490696" watchObservedRunningTime="2026-02-26 21:59:22.7887437 +0000 UTC m=+247.868234261" Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.803987 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-48hc6" 
event={"ID":"30057c46-9a0f-4a04-869a-c63eac9a84f6","Type":"ContainerStarted","Data":"674b757c98da868659443633fabfa00ceaafd4d8b30c2075ec3280934a7cda6c"} Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.808221 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7g2dk" event={"ID":"ab54c730-c74c-4988-ac00-e926c9907435","Type":"ContainerStarted","Data":"de4a16984791a24cbf23b0217632e1d0be38108275eff61a1f9186a3113d77f5"} Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.809900 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhr7j" event={"ID":"cb80ebfa-1dd8-40e6-9d5e-27311836ccfb","Type":"ContainerStarted","Data":"bf0a3bb5414903fa899ce6e527e8d17918feefbfd8e3a96624efa8d20ce64ef7"} Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.812862 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6222h" event={"ID":"a46ac2a0-b244-4742-ade1-cd57ce2e87d5","Type":"ContainerStarted","Data":"d44930030802fd17bca6694e834324618bc0d3d773ec97b1482af2482b558f68"} Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.819059 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" event={"ID":"4ea4294a-f91c-4d13-8372-b1e8b7a73831","Type":"ContainerStarted","Data":"fdd3b45fb3b0fb5eb6917fccadb0d90b95d84e9947a0ffb867d7d93f25e384a6"} Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.821513 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w89rp" event={"ID":"09c3c040-a9ac-441c-a1ce-b7d67233579a","Type":"ContainerStarted","Data":"2ced404468e546fdd481f014ce8099ac193b96ee8b69150bcc74227232f84c5b"} Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.829531 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-ingress-canary/ingress-canary-qd595"] Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.837230 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-54n7v" event={"ID":"ffaf6469-19dc-47f9-a762-f02109d88907","Type":"ContainerStarted","Data":"dc1014c900cc34528047ebcdd04739408f6948475d7c873454ffb7a9099ec48a"} Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.840782 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2" event={"ID":"63954f23-8000-4ada-8d5d-67297b7c26f6","Type":"ContainerStarted","Data":"d05352fe773765c6c5d2e36143667134c4939011bdf4585cfc4a810a7ca9634c"} Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.840821 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2" event={"ID":"63954f23-8000-4ada-8d5d-67297b7c26f6","Type":"ContainerStarted","Data":"cf9d868c1317197717720d82a88add65c9f7966a288630fdb7f8fdcbb550e1f4"} Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.841622 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2" Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.844239 4910 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-zrpl2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.844276 4910 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2" podUID="63954f23-8000-4ada-8d5d-67297b7c26f6" containerName="route-controller-manager" 
probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.848095 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7pl8w" event={"ID":"7e5e40d7-f505-4b8d-ac40-9677a7ebe781","Type":"ContainerStarted","Data":"18c97cd3a78a4876b3146bf3046059f9bc0467ab672c8a7ff26c1677afa84f74"} Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.848131 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7pl8w" event={"ID":"7e5e40d7-f505-4b8d-ac40-9677a7ebe781","Type":"ContainerStarted","Data":"3cdb44c94b79a708e39220cd90f78d56bceefb9ce90ba7be17eb788b22abcb4b"} Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.867597 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:22 crc kubenswrapper[4910]: E0226 21:59:22.880743 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:23.380727831 +0000 UTC m=+248.460218372 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:22 crc kubenswrapper[4910]: W0226 21:59:22.905317 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2678a257_f356_4cad_9ad5_22c264a8810f.slice/crio-8d4377bc87ae836a0b076f7b6933b0af0518b616ea3f61c287acef0469eb8f04 WatchSource:0}: Error finding container 8d4377bc87ae836a0b076f7b6933b0af0518b616ea3f61c287acef0469eb8f04: Status 404 returned error can't find the container with id 8d4377bc87ae836a0b076f7b6933b0af0518b616ea3f61c287acef0469eb8f04 Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.906300 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-sh8rh" event={"ID":"651f9cbc-e905-462d-b42f-84d2a642169d","Type":"ContainerStarted","Data":"856667c991395e8588fa3a5b0c3bfd71c1ad8cf50549fc552fa6606a7404d79a"} Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.931831 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.938898 4910 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-dgm55 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.23:6443/healthz\": dial tcp 10.217.0.23:6443: connect: connection refused" start-of-body= Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.938977 4910 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" podUID="0b3633c0-54b9-486c-a14b-99b6e5c04765" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.23:6443/healthz\": dial tcp 10.217.0.23:6443: connect: connection refused" Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.958610 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-w8g2c" event={"ID":"ee0c3a2c-59c9-4f63-93c9-94c498a8d065","Type":"ContainerStarted","Data":"2aaf5641c2cadf7f561a0f67793b9edce957be860868c2c8afcaa305a2363b6c"} Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.958686 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-w8g2c" event={"ID":"ee0c3a2c-59c9-4f63-93c9-94c498a8d065","Type":"ContainerStarted","Data":"fc4fe5f1fb413cce35dd5f8f5d82792dcd4fa243028cfaf1f6fc6ae632298481"} Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.965791 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kn2dp" event={"ID":"26105f09-3245-4c15-ba60-54f31690d926","Type":"ContainerStarted","Data":"3e09d1c06cf2020ec358f858fb98e94264fea47c6db6f172c70fce99545a4fdc"} Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.977533 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:22 crc kubenswrapper[4910]: E0226 21:59:22.977636 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-26 21:59:23.477620372 +0000 UTC m=+248.557110913 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.977894 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:22 crc kubenswrapper[4910]: E0226 21:59:22.978784 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:23.478765625 +0000 UTC m=+248.558256166 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:22 crc kubenswrapper[4910]: I0226 21:59:22.993790 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2fs85" event={"ID":"ebb647d3-e0ce-4122-8582-d1a2d4d2c594","Type":"ContainerStarted","Data":"3aaf0d469fd2b8e2bfb3a2a15a5f9f8360344c85e2661e7e8d28811c6ef86848"} Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.084066 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:23 crc kubenswrapper[4910]: E0226 21:59:23.084384 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:23.584358425 +0000 UTC m=+248.663848966 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.186919 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:23 crc kubenswrapper[4910]: E0226 21:59:23.187652 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:23.68764007 +0000 UTC m=+248.767130611 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.274724 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" podStartSLOduration=179.274702519 podStartE2EDuration="2m59.274702519s" podCreationTimestamp="2026-02-26 21:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:23.263130957 +0000 UTC m=+248.342621498" watchObservedRunningTime="2026-02-26 21:59:23.274702519 +0000 UTC m=+248.354193060" Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.285423 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-w8g2c" Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.285814 4910 patch_prober.go:28] interesting pod/router-default-5444994796-w8g2c container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.285853 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w8g2c" podUID="ee0c3a2c-59c9-4f63-93c9-94c498a8d065" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 
21:59:23.289046 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:23 crc kubenswrapper[4910]: E0226 21:59:23.289424 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:23.789408822 +0000 UTC m=+248.868899363 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.376443 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-kj9s2" podStartSLOduration=178.37642892 podStartE2EDuration="2m58.37642892s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:23.375606556 +0000 UTC m=+248.455097097" watchObservedRunningTime="2026-02-26 21:59:23.37642892 +0000 UTC m=+248.455919461" Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.390998 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:23 crc kubenswrapper[4910]: E0226 21:59:23.391346 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:23.891335488 +0000 UTC m=+248.970826029 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.429007 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-w8g2c" podStartSLOduration=178.428984619 podStartE2EDuration="2m58.428984619s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:23.406329848 +0000 UTC m=+248.485820389" watchObservedRunningTime="2026-02-26 21:59:23.428984619 +0000 UTC m=+248.508475170" Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.437331 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-vdp5g" podStartSLOduration=179.437310568 podStartE2EDuration="2m59.437310568s" podCreationTimestamp="2026-02-26 21:56:24 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:23.420953518 +0000 UTC m=+248.500444069" watchObservedRunningTime="2026-02-26 21:59:23.437310568 +0000 UTC m=+248.516801109" Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.457672 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8n798"] Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.478125 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8h9hc"] Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.491557 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:23 crc kubenswrapper[4910]: E0226 21:59:23.491936 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:23.991923516 +0000 UTC m=+249.071414057 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.495264 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2" podStartSLOduration=178.495248451 podStartE2EDuration="2m58.495248451s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:23.486809168 +0000 UTC m=+248.566299709" watchObservedRunningTime="2026-02-26 21:59:23.495248451 +0000 UTC m=+248.574738982" Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.520573 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhr7j" podStartSLOduration=178.520555177 podStartE2EDuration="2m58.520555177s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:23.511240049 +0000 UTC m=+248.590730600" watchObservedRunningTime="2026-02-26 21:59:23.520555177 +0000 UTC m=+248.600045708" Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.522733 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4254t"] Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.522796 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-svj47"] Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.549874 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4qcps"] Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.552799 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2cq6r"] Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.553060 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-kz5fx" podStartSLOduration=179.55305274 podStartE2EDuration="2m59.55305274s" podCreationTimestamp="2026-02-26 21:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:23.541974881 +0000 UTC m=+248.621465422" watchObservedRunningTime="2026-02-26 21:59:23.55305274 +0000 UTC m=+248.632543281" Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.571233 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-q68m5"] Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.586996 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9nkmk"] Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.595607 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-nxzt6" podStartSLOduration=178.595591851 podStartE2EDuration="2m58.595591851s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:23.594706616 +0000 UTC m=+248.674197157" watchObservedRunningTime="2026-02-26 21:59:23.595591851 +0000 
UTC m=+248.675082392" Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.597462 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:23 crc kubenswrapper[4910]: E0226 21:59:23.597832 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:24.097821425 +0000 UTC m=+249.177311966 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.611572 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r4pg9"] Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.619635 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q2jtw"] Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.670253 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535705-jc29q"] Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.699299 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:23 crc kubenswrapper[4910]: E0226 21:59:23.699688 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:24.199674139 +0000 UTC m=+249.279164680 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.701211 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535718-4rxms"] Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.713352 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdbcq"] Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.747429 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-g8lnj"] Feb 26 21:59:23 crc kubenswrapper[4910]: W0226 21:59:23.784497 4910 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode17437b5_ba61_4630_83bd_8436fcbd659f.slice/crio-d13b09221e64cabe66937ad00a5d1cc73c64702332011953b2398710d7ee513a WatchSource:0}: Error finding container d13b09221e64cabe66937ad00a5d1cc73c64702332011953b2398710d7ee513a: Status 404 returned error can't find the container with id d13b09221e64cabe66937ad00a5d1cc73c64702332011953b2398710d7ee513a Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.812491 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lpzjm"] Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.812773 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:23 crc kubenswrapper[4910]: E0226 21:59:23.813236 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:24.313216428 +0000 UTC m=+249.392706969 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.915780 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:23 crc kubenswrapper[4910]: E0226 21:59:23.916111 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:24.41609501 +0000 UTC m=+249.495585551 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:23 crc kubenswrapper[4910]: I0226 21:59:23.917389 4910 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.017225 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:24 crc kubenswrapper[4910]: E0226 21:59:24.017575 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:24.517561534 +0000 UTC m=+249.597052075 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.081577 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535705-jc29q" event={"ID":"e17437b5-ba61-4630-83bd-8436fcbd659f","Type":"ContainerStarted","Data":"d13b09221e64cabe66937ad00a5d1cc73c64702332011953b2398710d7ee513a"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.092072 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2hscq" event={"ID":"7843f81a-d6bd-463f-b5b7-454e3f943ed8","Type":"ContainerStarted","Data":"b67494a7dc026793d9c140f1c8656ac9a9b78e9e6e6f1b9414fa54487b455a9c"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.093064 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-2hscq" Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.099644 4910 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hscq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.099680 4910 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hscq" podUID="7843f81a-d6bd-463f-b5b7-454e3f943ed8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection 
refused" Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.104658 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lpzjm" event={"ID":"1f4259a3-f3c2-4812-a4f8-6b5f206e9e00","Type":"ContainerStarted","Data":"4e315ed37259aa00759fff9b8025846ce1d941b2fdd99c4124994b1f439a1cde"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.114353 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-2hscq" podStartSLOduration=179.114333371 podStartE2EDuration="2m59.114333371s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:24.113818637 +0000 UTC m=+249.193309178" watchObservedRunningTime="2026-02-26 21:59:24.114333371 +0000 UTC m=+249.193823912" Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.127947 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:24 crc kubenswrapper[4910]: E0226 21:59:24.128310 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:24.628296872 +0000 UTC m=+249.707787413 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.177527 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-q68m5" event={"ID":"a27cb800-961e-47ff-9558-47ec81e681a2","Type":"ContainerStarted","Data":"eef31504bc5d3c7dc18fc86ef92c25eaabc2ca1d986d84874b83335a293c3758"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.182242 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w89rp" event={"ID":"09c3c040-a9ac-441c-a1ce-b7d67233579a","Type":"ContainerStarted","Data":"d9979be333bd8ee236571b9a7d6b3af7565a1bb420fe71730295a18cf7fb37a8"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.185587 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-54n7v" event={"ID":"ffaf6469-19dc-47f9-a762-f02109d88907","Type":"ContainerStarted","Data":"5499acea7ab10b7595461a11c126cba9c82bee4cde47986e0ac904eedd07215f"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.205914 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-54n7v" podStartSLOduration=179.205898739 podStartE2EDuration="2m59.205898739s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:24.203481911 +0000 UTC m=+249.282972462" 
watchObservedRunningTime="2026-02-26 21:59:24.205898739 +0000 UTC m=+249.285389280" Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.217088 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cq6r" event={"ID":"4e1ce726-92a4-4cc3-bb03-077da188f56d","Type":"ContainerStarted","Data":"9ed449b6ce3949f04e5e94ef3db11fa49e3cb8ee1744ec8d71c320e6088753a6"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.229012 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:24 crc kubenswrapper[4910]: E0226 21:59:24.229436 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:24.729425585 +0000 UTC m=+249.808916126 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.248642 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8n798" event={"ID":"51587e5b-a1ef-4fa1-bdce-fd9b96859790","Type":"ContainerStarted","Data":"4da9ce2feae50f0a377cdc2666c9f827adcb3488f92e550d33a96090b3280140"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.259124 4910 generic.go:334] "Generic (PLEG): container finished" podID="7e5e40d7-f505-4b8d-ac40-9677a7ebe781" containerID="18c97cd3a78a4876b3146bf3046059f9bc0467ab672c8a7ff26c1677afa84f74" exitCode=0 Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.259211 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7pl8w" event={"ID":"7e5e40d7-f505-4b8d-ac40-9677a7ebe781","Type":"ContainerDied","Data":"18c97cd3a78a4876b3146bf3046059f9bc0467ab672c8a7ff26c1677afa84f74"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.264011 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-svj47" event={"ID":"bd00464d-5f76-4abd-8c83-ce0821d00dfa","Type":"ContainerStarted","Data":"2af0009bab18d9da96d4d60ec72a2fecdaf18c5d825bddd0b004c94b20f39858"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.273045 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g8lnj" 
event={"ID":"99d66ca3-61ae-4752-9130-2cdeb226a2f0","Type":"ContainerStarted","Data":"246e4e40e5601f8998437e8e3753d301b617da1f25e814d990233bed04f50c24"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.284151 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4qcps" event={"ID":"8f49a746-d000-4ff2-b5e5-928854e1c0e1","Type":"ContainerStarted","Data":"d692470f641726a297940fa6f52164c65b119e695857ad8b4232ac9ba2ca2f99"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.284722 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4qcps" Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.292096 4910 patch_prober.go:28] interesting pod/router-default-5444994796-w8g2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 21:59:24 crc kubenswrapper[4910]: [-]has-synced failed: reason withheld Feb 26 21:59:24 crc kubenswrapper[4910]: [+]process-running ok Feb 26 21:59:24 crc kubenswrapper[4910]: healthz check failed Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.292149 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w8g2c" podUID="ee0c3a2c-59c9-4f63-93c9-94c498a8d065" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.298392 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wzqgc" event={"ID":"2678a257-f356-4cad-9ad5-22c264a8810f","Type":"ContainerStarted","Data":"b21f6025d40d5db9efe6abf625864c1240989092bde6245794cf04de6feab4c4"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.298429 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/dns-default-wzqgc" event={"ID":"2678a257-f356-4cad-9ad5-22c264a8810f","Type":"ContainerStarted","Data":"8d4377bc87ae836a0b076f7b6933b0af0518b616ea3f61c287acef0469eb8f04"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.306498 4910 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4qcps container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" start-of-body= Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.306549 4910 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4qcps" podUID="8f49a746-d000-4ff2-b5e5-928854e1c0e1" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.311802 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8h9hc" event={"ID":"90cb6740-847f-435b-a38f-6a199cd2a41d","Type":"ContainerStarted","Data":"0c9e0556921619c31088028a7b318dbd79cb26e219acc1eea26d14ed74f9be05"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.312802 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4qcps" podStartSLOduration=179.312786848 podStartE2EDuration="2m59.312786848s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:24.310846482 +0000 UTC m=+249.390337023" watchObservedRunningTime="2026-02-26 21:59:24.312786848 +0000 UTC m=+249.392277389" Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.329906 4910 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:24 crc kubenswrapper[4910]: E0226 21:59:24.331249 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:24.831229347 +0000 UTC m=+249.910719888 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.360814 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-sh8rh" event={"ID":"651f9cbc-e905-462d-b42f-84d2a642169d","Type":"ContainerStarted","Data":"21e39c2d73e20fa7abea3c9149daccf49aa728fc598fe124c0c1db5b7d978945"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.394588 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" event={"ID":"0b3633c0-54b9-486c-a14b-99b6e5c04765","Type":"ContainerStarted","Data":"3de63fc90d30cc78ff3406e01601af8651f8988b74f6ba8855db87212166698a"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.405897 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.412812 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-q2jtw" event={"ID":"dbd9e8a9-2637-4ef5-b24e-fd2d08788451","Type":"ContainerStarted","Data":"cf131c846dc58f0f81deaff59e96ecde7e8060b4d4032a93708d21486c97361f"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.414889 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r4pg9" event={"ID":"1616b423-1715-4def-8ed7-38a953361535","Type":"ContainerStarted","Data":"4e11ce4eb036fdc5ae0e4cc0a165a4ed747ca28954fd5ea4870e8e1ad029c34a"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.429576 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-sh8rh" podStartSLOduration=179.42956044 podStartE2EDuration="2m59.42956044s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:24.377704991 +0000 UTC m=+249.457195532" watchObservedRunningTime="2026-02-26 21:59:24.42956044 +0000 UTC m=+249.509050981" Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.431838 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:24 crc kubenswrapper[4910]: E0226 21:59:24.432496 4910 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:24.932480764 +0000 UTC m=+250.011971295 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.437139 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jgdt5" event={"ID":"214edff7-71c6-4f4c-b61a-5582ae5d49db","Type":"ContainerStarted","Data":"e61e4bacc04c407d56808a977771c5896cd9c7d045d9834966dedb9c6ca5babd"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.437198 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jgdt5" event={"ID":"214edff7-71c6-4f4c-b61a-5582ae5d49db","Type":"ContainerStarted","Data":"1d15d3b151397a7df422e6af196de386d161b05b7778a351a4009de511eb7b73"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.439410 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6222h" event={"ID":"a46ac2a0-b244-4742-ade1-cd57ce2e87d5","Type":"ContainerStarted","Data":"94851a756a5274a5c2756f4cd9d52d80de60263907f9cfed079a7e06404a5451"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.442355 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vhr7j" 
event={"ID":"cb80ebfa-1dd8-40e6-9d5e-27311836ccfb","Type":"ContainerStarted","Data":"73230c95a55f230655510299775a59b978406d92a94db90488e5aa73d113bd99"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.443672 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535718-4rxms" event={"ID":"e8fe4d9f-ec8c-4d29-a7e6-1534270d5d05","Type":"ContainerStarted","Data":"4ba9d0fb28a061391a0d90ce26c9397b2c414e088d159b5529ce95aa48d1ad85"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.445931 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qd595" event={"ID":"ad057bd0-51b8-4b1a-b75a-c301e80942ab","Type":"ContainerStarted","Data":"ac1227df909c40eea57e6d9affb481382a8607a5f0e566492218a0be9b86457e"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.445948 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qd595" event={"ID":"ad057bd0-51b8-4b1a-b75a-c301e80942ab","Type":"ContainerStarted","Data":"c16e8d0dbea0939106be3a8225051e549587248195de5afb8a9d536ba421a6a8"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.451493 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4254t" event={"ID":"845499bb-3eca-40f2-8146-1c28421bb2a5","Type":"ContainerStarted","Data":"19d1ecc18d6e959e684f3f3a362069620230fa40fd89c4176e19b6bc4ccd4cb0"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.452059 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4254t" Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.454502 4910 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-4254t container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: 
connection refused" start-of-body= Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.454534 4910 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4254t" podUID="845499bb-3eca-40f2-8146-1c28421bb2a5" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.455129 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jgdt5" podStartSLOduration=179.455119653 podStartE2EDuration="2m59.455119653s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:24.452419957 +0000 UTC m=+249.531910498" watchObservedRunningTime="2026-02-26 21:59:24.455119653 +0000 UTC m=+249.534610184" Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.463771 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7g2dk" event={"ID":"ab54c730-c74c-4988-ac00-e926c9907435","Type":"ContainerStarted","Data":"7de83676a726a7d4b8f30dc4525753d0995b7d249af78e5a44cfdab9ef2baab9"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.478133 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4254t" podStartSLOduration=179.478118154 podStartE2EDuration="2m59.478118154s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:24.477645181 +0000 UTC m=+249.557135712" watchObservedRunningTime="2026-02-26 21:59:24.478118154 +0000 UTC m=+249.557608695" Feb 26 21:59:24 crc kubenswrapper[4910]: 
I0226 21:59:24.487766 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-vz6hr" event={"ID":"f2d7b80c-ea47-4fc1-a323-cab157b92d97","Type":"ContainerStarted","Data":"3aae0710537705d03bb742c73eaa71ced9a63a88cd0704827dce4e2371463f12"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.501275 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-48hc6" event={"ID":"30057c46-9a0f-4a04-869a-c63eac9a84f6","Type":"ContainerStarted","Data":"9efe68261c5aaa7891ae4ab8bd29fbb4232c68f3bd95869c95538c3b71e50e67"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.503992 4910 generic.go:334] "Generic (PLEG): container finished" podID="4ea4294a-f91c-4d13-8372-b1e8b7a73831" containerID="536c876fca3c520f4647b905672d4a975972b2c886d9dc411c6ef19d12146779" exitCode=0 Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.504029 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" event={"ID":"4ea4294a-f91c-4d13-8372-b1e8b7a73831","Type":"ContainerDied","Data":"536c876fca3c520f4647b905672d4a975972b2c886d9dc411c6ef19d12146779"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.514464 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6222h" podStartSLOduration=180.514450117 podStartE2EDuration="3m0.514450117s" podCreationTimestamp="2026-02-26 21:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:24.512397417 +0000 UTC m=+249.591887958" watchObservedRunningTime="2026-02-26 21:59:24.514450117 +0000 UTC m=+249.593940658" Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.515954 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdbcq" event={"ID":"40df2e2c-714e-4938-ba94-896961568c4b","Type":"ContainerStarted","Data":"b4ebfe8b5e322bc69da54951d551eba9f9901207ab81df64e173bc9835261df5"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.533733 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9nkmk" event={"ID":"7bb9470b-f31b-4450-9a67-68e457292e83","Type":"ContainerStarted","Data":"261b1938f2a4b7fdfe95b3a1b10abbc36e633bfa0350e33334c1864ee6749a05"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.533816 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:24 crc kubenswrapper[4910]: E0226 21:59:24.535052 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:25.035030258 +0000 UTC m=+250.114520869 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.559635 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hl7fn" event={"ID":"9321ff73-5107-4139-ad6f-622b13de5cd1","Type":"ContainerStarted","Data":"2478c17cafde3160414f6357fd2dbac3e1b8104200d79cb733b79428f4d95414"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.580570 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kn2dp" event={"ID":"26105f09-3245-4c15-ba60-54f31690d926","Type":"ContainerStarted","Data":"91a2d1da61ec561f02375c6ec9afaa8080c04b3264d6daa31bb8178d0b037d89"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.585511 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-qd595" podStartSLOduration=6.585491986 podStartE2EDuration="6.585491986s" podCreationTimestamp="2026-02-26 21:59:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:24.566082709 +0000 UTC m=+249.645573250" watchObservedRunningTime="2026-02-26 21:59:24.585491986 +0000 UTC m=+249.664982527" Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.640762 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-kz5fx" Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.642830 4910 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-kz5fx" Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.643190 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-c7wmc" event={"ID":"17c9e844-630f-46d5-a08c-94a6d0b56404","Type":"ContainerStarted","Data":"8ae0a812d9939ea3b49b99a0fdadbbc89e89e9f0c19fb4c4fc8549cf271f5b92"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.645708 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:24 crc kubenswrapper[4910]: E0226 21:59:24.645989 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:25.145977772 +0000 UTC m=+250.225468313 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.648036 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-c7wmc" Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.649848 4910 patch_prober.go:28] interesting pod/console-operator-58897d9998-c7wmc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.649901 4910 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-c7wmc" podUID="17c9e844-630f-46d5-a08c-94a6d0b56404" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.674727 4910 patch_prober.go:28] interesting pod/apiserver-76f77b778f-kz5fx container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 26 21:59:24 crc kubenswrapper[4910]: [+]log ok Feb 26 21:59:24 crc kubenswrapper[4910]: [+]etcd ok Feb 26 21:59:24 crc kubenswrapper[4910]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 26 21:59:24 crc kubenswrapper[4910]: [+]poststarthook/generic-apiserver-start-informers ok Feb 26 
21:59:24 crc kubenswrapper[4910]: [+]poststarthook/max-in-flight-filter ok Feb 26 21:59:24 crc kubenswrapper[4910]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 26 21:59:24 crc kubenswrapper[4910]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 26 21:59:24 crc kubenswrapper[4910]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 26 21:59:24 crc kubenswrapper[4910]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 26 21:59:24 crc kubenswrapper[4910]: [+]poststarthook/project.openshift.io-projectcache ok Feb 26 21:59:24 crc kubenswrapper[4910]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 26 21:59:24 crc kubenswrapper[4910]: [+]poststarthook/openshift.io-startinformers ok Feb 26 21:59:24 crc kubenswrapper[4910]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 26 21:59:24 crc kubenswrapper[4910]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 26 21:59:24 crc kubenswrapper[4910]: livez check failed Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.674798 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-kz5fx" podUID="1db40f1b-b714-4920-8b27-f350b3dd2978" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.674971 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2fs85" event={"ID":"ebb647d3-e0ce-4122-8582-d1a2d4d2c594","Type":"ContainerStarted","Data":"06afcd4f3f2a85ac27c73196dea97a9f11fda6820ae679b0b1edd93165fa9566"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.675028 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2fs85" 
event={"ID":"ebb647d3-e0ce-4122-8582-d1a2d4d2c594","Type":"ContainerStarted","Data":"bfd74ec593484a18a7ddc39eb3ce9b6055529e98bc631cd42cd4c3a7d00a92a8"} Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.677008 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hl7fn" podStartSLOduration=179.676992293 podStartE2EDuration="2m59.676992293s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:24.674595784 +0000 UTC m=+249.754086325" watchObservedRunningTime="2026-02-26 21:59:24.676992293 +0000 UTC m=+249.756482834" Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.678980 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-48hc6" podStartSLOduration=179.678973939 podStartE2EDuration="2m59.678973939s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:24.647256529 +0000 UTC m=+249.726747070" watchObservedRunningTime="2026-02-26 21:59:24.678973939 +0000 UTC m=+249.758464470" Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.693746 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2" Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.713206 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-7g2dk" podStartSLOduration=179.713187292 podStartE2EDuration="2m59.713187292s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:24.71207361 +0000 UTC m=+249.791564151" watchObservedRunningTime="2026-02-26 21:59:24.713187292 +0000 UTC m=+249.792677833" Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.750881 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:24 crc kubenswrapper[4910]: E0226 21:59:24.753558 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:25.25353085 +0000 UTC m=+250.333021391 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.754761 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.754805 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-vz6hr" podStartSLOduration=6.754786495 podStartE2EDuration="6.754786495s" podCreationTimestamp="2026-02-26 21:59:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:24.750042669 +0000 UTC m=+249.829533210" watchObservedRunningTime="2026-02-26 21:59:24.754786495 +0000 UTC m=+249.834277026" Feb 26 21:59:24 crc kubenswrapper[4910]: E0226 21:59:24.757212 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:25.257197085 +0000 UTC m=+250.336687626 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.781627 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2fs85" podStartSLOduration=179.781607306 podStartE2EDuration="2m59.781607306s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:24.777197088 +0000 UTC m=+249.856687629" watchObservedRunningTime="2026-02-26 21:59:24.781607306 +0000 UTC m=+249.861097847" Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.862268 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:24 crc kubenswrapper[4910]: E0226 21:59:24.862625 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:25.36260886 +0000 UTC m=+250.442099401 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.862862 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-c7wmc" podStartSLOduration=179.862839708 podStartE2EDuration="2m59.862839708s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:24.832755074 +0000 UTC m=+249.912245615" watchObservedRunningTime="2026-02-26 21:59:24.862839708 +0000 UTC m=+249.942330259" Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.864791 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kn2dp" podStartSLOduration=179.864781363 podStartE2EDuration="2m59.864781363s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:24.864121154 +0000 UTC m=+249.943611715" watchObservedRunningTime="2026-02-26 21:59:24.864781363 +0000 UTC m=+249.944271904" Feb 26 21:59:24 crc kubenswrapper[4910]: I0226 21:59:24.972323 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:24 crc kubenswrapper[4910]: E0226 21:59:24.972752 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:25.472736602 +0000 UTC m=+250.552227143 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.073357 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:25 crc kubenswrapper[4910]: E0226 21:59:25.073530 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:25.573503394 +0000 UTC m=+250.652993935 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.073752 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:25 crc kubenswrapper[4910]: E0226 21:59:25.074215 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:25.574148163 +0000 UTC m=+250.653638704 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.174437 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:25 crc kubenswrapper[4910]: E0226 21:59:25.174613 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:25.674581376 +0000 UTC m=+250.754071917 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.174671 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:25 crc kubenswrapper[4910]: E0226 21:59:25.175107 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:25.675089911 +0000 UTC m=+250.754580502 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.275737 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:25 crc kubenswrapper[4910]: E0226 21:59:25.276067 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:25.776043769 +0000 UTC m=+250.855534310 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.283247 4910 patch_prober.go:28] interesting pod/router-default-5444994796-w8g2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 21:59:25 crc kubenswrapper[4910]: [-]has-synced failed: reason withheld Feb 26 21:59:25 crc kubenswrapper[4910]: [+]process-running ok Feb 26 21:59:25 crc kubenswrapper[4910]: healthz check failed Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.283310 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w8g2c" podUID="ee0c3a2c-59c9-4f63-93c9-94c498a8d065" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.306676 4910 ???:1] "http: TLS handshake error from 192.168.126.11:55806: no serving certificate available for the kubelet" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.377322 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:25 crc kubenswrapper[4910]: E0226 21:59:25.377751 4910 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:25.877735177 +0000 UTC m=+250.957225718 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.399615 4910 ???:1] "http: TLS handshake error from 192.168.126.11:55818: no serving certificate available for the kubelet" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.478753 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:25 crc kubenswrapper[4910]: E0226 21:59:25.478995 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:25.978964944 +0000 UTC m=+251.058455485 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.479147 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:25 crc kubenswrapper[4910]: E0226 21:59:25.479514 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:25.979493898 +0000 UTC m=+251.058984439 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.517932 4910 ???:1] "http: TLS handshake error from 192.168.126.11:55830: no serving certificate available for the kubelet" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.580839 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:25 crc kubenswrapper[4910]: E0226 21:59:25.581129 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:26.081115256 +0000 UTC m=+251.160605797 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.642277 4910 ???:1] "http: TLS handshake error from 192.168.126.11:55840: no serving certificate available for the kubelet" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.681366 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-svj47" event={"ID":"bd00464d-5f76-4abd-8c83-ce0821d00dfa","Type":"ContainerStarted","Data":"ec92d324a00515efc920bedd982b6387e00e397c1621d750d251587265fca73f"} Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.681891 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:25 crc kubenswrapper[4910]: E0226 21:59:25.682326 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:26.182311781 +0000 UTC m=+251.261802382 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.688108 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g8lnj" event={"ID":"99d66ca3-61ae-4752-9130-2cdeb226a2f0","Type":"ContainerStarted","Data":"cbfc40b36df170286a2463a4412c6fb26605ea5147aadb6c6c0b4ebae66b0032"} Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.688150 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g8lnj" event={"ID":"99d66ca3-61ae-4752-9130-2cdeb226a2f0","Type":"ContainerStarted","Data":"aed855fc4040cc5c935960f97377c53b1967854f0f9bb67cd6063a7c3438c201"} Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.690261 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8h9hc" event={"ID":"90cb6740-847f-435b-a38f-6a199cd2a41d","Type":"ContainerStarted","Data":"d287054e73beba65a72152d38e0ceb3d907a22522e57d5a8800a3ef9ba081a5e"} Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.692422 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9nkmk" event={"ID":"7bb9470b-f31b-4450-9a67-68e457292e83","Type":"ContainerStarted","Data":"c8bd80bd153cd4a376e983412280a674ae3fe4ae1c71d653206a0f8baebabe0e"} Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.692452 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-857f4d67dd-9nkmk" event={"ID":"7bb9470b-f31b-4450-9a67-68e457292e83","Type":"ContainerStarted","Data":"52c8db48f0eff177999d2b8d42ae8267ce895d9cef86edaf7bc4603c04434c99"} Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.695204 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w89rp" event={"ID":"09c3c040-a9ac-441c-a1ce-b7d67233579a","Type":"ContainerStarted","Data":"8238284c7f187e1c6b3380efb8d89538a08177a7ca14f7d3eb0002f4ee57b8cc"} Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.706743 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-svj47" podStartSLOduration=180.706730061 podStartE2EDuration="3m0.706730061s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:25.704663842 +0000 UTC m=+250.784154383" watchObservedRunningTime="2026-02-26 21:59:25.706730061 +0000 UTC m=+250.786220602" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.708114 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wzqgc" event={"ID":"2678a257-f356-4cad-9ad5-22c264a8810f","Type":"ContainerStarted","Data":"c8e6f8b367a259576e228a8f6d9ddc5f8a0f2d58c0f00c0cf411ea4a507d539a"} Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.708222 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-wzqgc" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.711305 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7pl8w" 
event={"ID":"7e5e40d7-f505-4b8d-ac40-9677a7ebe781","Type":"ContainerStarted","Data":"aa5da9912e1eb02389471a019142cc1380ea6b460700959cad25cf6f0c3e449e"} Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.711515 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7pl8w" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.717511 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r4pg9" event={"ID":"1616b423-1715-4def-8ed7-38a953361535","Type":"ContainerStarted","Data":"fe5add191634de8269682457c1cad76522847aad8fbf51d121d433f2ead59a70"} Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.717571 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r4pg9" event={"ID":"1616b423-1715-4def-8ed7-38a953361535","Type":"ContainerStarted","Data":"d59c03c920eae43202d87abeb718e158244464d4ddf5b9e52166a561f823565f"} Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.718312 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r4pg9" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.727525 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.727576 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.727775 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-q68m5" event={"ID":"a27cb800-961e-47ff-9558-47ec81e681a2","Type":"ContainerStarted","Data":"b3bc52a6fffd6c665dc47c577b88828c36bcf8a3186e5a1d9bc7254635794b20"} Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.729372 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lpzjm" event={"ID":"1f4259a3-f3c2-4812-a4f8-6b5f206e9e00","Type":"ContainerStarted","Data":"f0703ba80b9e994f36a70f2ae4e2942725d2194133ac44b532a86b54127a8adc"} Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.730036 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lpzjm" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.735034 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4qcps" event={"ID":"8f49a746-d000-4ff2-b5e5-928854e1c0e1","Type":"ContainerStarted","Data":"2e1edef7f8496b91006e7e127aa17a2bcb3b550091e94a97a8dbd328adf2e63b"} Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.735880 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lpzjm" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.738388 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w89rp" podStartSLOduration=180.73837048 podStartE2EDuration="3m0.73837048s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:25.73735126 +0000 UTC m=+250.816841801" 
watchObservedRunningTime="2026-02-26 21:59:25.73837048 +0000 UTC m=+250.817861021" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.748381 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4qcps" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.751713 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdbcq" event={"ID":"40df2e2c-714e-4938-ba94-896961568c4b","Type":"ContainerStarted","Data":"9830faded6323fb95f2544c8d9528a2dbf099bbdcd3f3a13f303d5a313408d99"} Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.754620 4910 ???:1] "http: TLS handshake error from 192.168.126.11:55850: no serving certificate available for the kubelet" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.755434 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cq6r" event={"ID":"4e1ce726-92a4-4cc3-bb03-077da188f56d","Type":"ContainerStarted","Data":"dca129a598fc31603af02ce89fa82ca096d11e7bfbe60f96730f7bc22c439067"} Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.758802 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-q2jtw" event={"ID":"dbd9e8a9-2637-4ef5-b24e-fd2d08788451","Type":"ContainerStarted","Data":"fdb8b0a63263575a5c0facbd5e8ecd233b1e0fc8e32423534e4cbcc0f407e36e"} Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.759565 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-q2jtw" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.764599 4910 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-q2jtw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get 
\"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.764648 4910 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-q2jtw" podUID="dbd9e8a9-2637-4ef5-b24e-fd2d08788451" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.774722 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8n798" event={"ID":"51587e5b-a1ef-4fa1-bdce-fd9b96859790","Type":"ContainerStarted","Data":"1a9a4c4efbf15fff355ccab29f0197c95eb691a6219f78eaf752a178a225b4c5"} Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.774764 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8n798" event={"ID":"51587e5b-a1ef-4fa1-bdce-fd9b96859790","Type":"ContainerStarted","Data":"0c2adfc64d25054a8188e42c00eefa628edd014b137e0642ad15999ca44c9173"} Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.777310 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-9nkmk" podStartSLOduration=180.777298827 podStartE2EDuration="3m0.777298827s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:25.775596299 +0000 UTC m=+250.855086840" watchObservedRunningTime="2026-02-26 21:59:25.777298827 +0000 UTC m=+250.856789368" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.782659 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:25 crc kubenswrapper[4910]: E0226 21:59:25.783854 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:26.283834274 +0000 UTC m=+251.363324825 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.810820 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jgdt5" event={"ID":"214edff7-71c6-4f4c-b61a-5582ae5d49db","Type":"ContainerStarted","Data":"09049d1c0648a28a097c2ee7edf1c21e0b6f96daaad9d206bbbbee3c0f0f5ffe"} Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.841707 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535705-jc29q" event={"ID":"e17437b5-ba61-4630-83bd-8436fcbd659f","Type":"ContainerStarted","Data":"aa2749381068e81de07ab37b39cc64029eea43a92baf9050bae50aa981b7eb1c"} Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.865826 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7pl8w" podStartSLOduration=180.865811288 
podStartE2EDuration="3m0.865811288s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:25.865597122 +0000 UTC m=+250.945087673" watchObservedRunningTime="2026-02-26 21:59:25.865811288 +0000 UTC m=+250.945301829" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.866250 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g8lnj" podStartSLOduration=180.86624561 podStartE2EDuration="3m0.86624561s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:25.829960628 +0000 UTC m=+250.909451169" watchObservedRunningTime="2026-02-26 21:59:25.86624561 +0000 UTC m=+250.945736151" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.869246 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4254t" event={"ID":"845499bb-3eca-40f2-8146-1c28421bb2a5","Type":"ContainerStarted","Data":"4458489061d9cdc5b46996b4ab6639064bcd47c38d31602b414f0f6da1b5f775"} Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.884848 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.886639 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" 
event={"ID":"4ea4294a-f91c-4d13-8372-b1e8b7a73831","Type":"ContainerStarted","Data":"8f3b7cf7c23dd7b8fae1e410f21874ce0b9c07eecc9f66eb265b75ab3e30e286"} Feb 26 21:59:25 crc kubenswrapper[4910]: E0226 21:59:25.889631 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:26.389616771 +0000 UTC m=+251.469107312 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.897958 4910 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hscq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.898016 4910 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hscq" podUID="7843f81a-d6bd-463f-b5b7-454e3f943ed8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.927671 4910 ???:1] "http: TLS handshake error from 192.168.126.11:55866: no serving certificate available for the kubelet" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.941307 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console-operator/console-operator-58897d9998-c7wmc" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.960731 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8n798" podStartSLOduration=180.960717142 podStartE2EDuration="3m0.960717142s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:25.912855269 +0000 UTC m=+250.992345810" watchObservedRunningTime="2026-02-26 21:59:25.960717142 +0000 UTC m=+251.040207683" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.961991 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r4pg9" podStartSLOduration=180.961986809 podStartE2EDuration="3m0.961986809s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:25.959554789 +0000 UTC m=+251.039045340" watchObservedRunningTime="2026-02-26 21:59:25.961986809 +0000 UTC m=+251.041477350" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.965269 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.965610 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.967403 4910 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-2jnr5 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection 
refused" start-of-body= Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.967442 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" podUID="4ea4294a-f91c-4d13-8372-b1e8b7a73831" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.986964 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-q2jtw" podStartSLOduration=180.986948025 podStartE2EDuration="3m0.986948025s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:25.97947303 +0000 UTC m=+251.058963571" watchObservedRunningTime="2026-02-26 21:59:25.986948025 +0000 UTC m=+251.066438566" Feb 26 21:59:25 crc kubenswrapper[4910]: I0226 21:59:25.991704 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:25 crc kubenswrapper[4910]: E0226 21:59:25.992561 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:26.492534435 +0000 UTC m=+251.572024976 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.033389 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-q68m5" podStartSLOduration=181.033374328 podStartE2EDuration="3m1.033374328s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:26.012491159 +0000 UTC m=+251.091981700" watchObservedRunningTime="2026-02-26 21:59:26.033374328 +0000 UTC m=+251.112864869" Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.059742 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lpzjm" podStartSLOduration=181.059718734 podStartE2EDuration="3m1.059718734s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:26.034186371 +0000 UTC m=+251.113676912" watchObservedRunningTime="2026-02-26 21:59:26.059718734 +0000 UTC m=+251.139209275" Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.089660 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cq6r" podStartSLOduration=181.089647393 podStartE2EDuration="3m1.089647393s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:26.088364316 +0000 UTC m=+251.167854857" watchObservedRunningTime="2026-02-26 21:59:26.089647393 +0000 UTC m=+251.169137934" Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.089836 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdbcq" podStartSLOduration=181.089833318 podStartE2EDuration="3m1.089833318s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:26.06026625 +0000 UTC m=+251.139756801" watchObservedRunningTime="2026-02-26 21:59:26.089833318 +0000 UTC m=+251.169323859" Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.094994 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:26 crc kubenswrapper[4910]: E0226 21:59:26.095323 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:26.595311526 +0000 UTC m=+251.674802067 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.117085 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-wzqgc" podStartSLOduration=8.11707021 podStartE2EDuration="8.11707021s" podCreationTimestamp="2026-02-26 21:59:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:26.115227557 +0000 UTC m=+251.194718108" watchObservedRunningTime="2026-02-26 21:59:26.11707021 +0000 UTC m=+251.196560751" Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.155509 4910 ???:1] "http: TLS handshake error from 192.168.126.11:55880: no serving certificate available for the kubelet" Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.170243 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.182405 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4254t" Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.196648 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 
21:59:26 crc kubenswrapper[4910]: E0226 21:59:26.197036 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:26.697020916 +0000 UTC m=+251.776511457 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.285893 4910 patch_prober.go:28] interesting pod/router-default-5444994796-w8g2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 21:59:26 crc kubenswrapper[4910]: [-]has-synced failed: reason withheld Feb 26 21:59:26 crc kubenswrapper[4910]: [+]process-running ok Feb 26 21:59:26 crc kubenswrapper[4910]: healthz check failed Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.285940 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w8g2c" podUID="ee0c3a2c-59c9-4f63-93c9-94c498a8d065" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.297800 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:26 crc kubenswrapper[4910]: E0226 21:59:26.298457 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:26.798442357 +0000 UTC m=+251.877932898 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.318573 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29535705-jc29q" podStartSLOduration=182.318555914 podStartE2EDuration="3m2.318555914s" podCreationTimestamp="2026-02-26 21:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:26.285945938 +0000 UTC m=+251.365436479" watchObservedRunningTime="2026-02-26 21:59:26.318555914 +0000 UTC m=+251.398046455" Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.320658 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" podStartSLOduration=181.320651804 podStartE2EDuration="3m1.320651804s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-26 21:59:26.317955227 +0000 UTC m=+251.397445768" watchObservedRunningTime="2026-02-26 21:59:26.320651804 +0000 UTC m=+251.400142345" Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.402751 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:26 crc kubenswrapper[4910]: E0226 21:59:26.403136 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:26.903121072 +0000 UTC m=+251.982611613 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.505934 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:26 crc kubenswrapper[4910]: E0226 21:59:26.506593 4910 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:27.006579552 +0000 UTC m=+252.086070093 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.524917 4910 ???:1] "http: TLS handshake error from 192.168.126.11:55886: no serving certificate available for the kubelet" Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.606906 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:26 crc kubenswrapper[4910]: E0226 21:59:26.607318 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:27.107298723 +0000 UTC m=+252.186789264 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.708496 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:26 crc kubenswrapper[4910]: E0226 21:59:26.708835 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:27.208823287 +0000 UTC m=+252.288313828 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.809397 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:26 crc kubenswrapper[4910]: E0226 21:59:26.809582 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:27.309556668 +0000 UTC m=+252.389047209 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.811342 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:26 crc kubenswrapper[4910]: E0226 21:59:26.811721 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:27.311713511 +0000 UTC m=+252.391204052 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.828183 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bw8qw"] Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.828471 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-bw8qw" podUID="dbbce4f0-e239-41ed-98b6-b5b84a303b34" containerName="controller-manager" containerID="cri-o://6051b30d93b2851ee01c32ce89025a575a2b9b827036d07e737817e2806705ef" gracePeriod=30 Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.836695 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2"] Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.896493 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8h9hc" event={"ID":"90cb6740-847f-435b-a38f-6a199cd2a41d","Type":"ContainerStarted","Data":"bad5b84670e858b68a9a310312724da3a98af1c747fd44361e02561075070a3e"} Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.899872 4910 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-q2jtw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.899990 4910 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-q2jtw" podUID="dbd9e8a9-2637-4ef5-b24e-fd2d08788451" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.899932 4910 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hscq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.900154 4910 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hscq" podUID="7843f81a-d6bd-463f-b5b7-454e3f943ed8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Feb 26 21:59:26 crc kubenswrapper[4910]: I0226 21:59:26.913496 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:26 crc kubenswrapper[4910]: E0226 21:59:26.913922 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:27.413904854 +0000 UTC m=+252.493395395 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.015153 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:27 crc kubenswrapper[4910]: E0226 21:59:27.021137 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:27.521121281 +0000 UTC m=+252.600611942 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.064150 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6rxsg"] Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.074343 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6rxsg" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.081515 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.084718 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6rxsg"] Feb 26 21:59:27 crc kubenswrapper[4910]: E0226 21:59:27.103504 4910 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode17437b5_ba61_4630_83bd_8436fcbd659f.slice/crio-aa2749381068e81de07ab37b39cc64029eea43a92baf9050bae50aa981b7eb1c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode17437b5_ba61_4630_83bd_8436fcbd659f.slice/crio-conmon-aa2749381068e81de07ab37b39cc64029eea43a92baf9050bae50aa981b7eb1c.scope\": RecentStats: unable to find data in memory cache]" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.116888 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.117046 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4333e88f-8502-46f4-9639-7af62ff1e63c-utilities\") pod \"certified-operators-6rxsg\" (UID: \"4333e88f-8502-46f4-9639-7af62ff1e63c\") " pod="openshift-marketplace/certified-operators-6rxsg" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.117097 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4333e88f-8502-46f4-9639-7af62ff1e63c-catalog-content\") pod \"certified-operators-6rxsg\" (UID: \"4333e88f-8502-46f4-9639-7af62ff1e63c\") " pod="openshift-marketplace/certified-operators-6rxsg" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.117173 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlvvj\" (UniqueName: \"kubernetes.io/projected/4333e88f-8502-46f4-9639-7af62ff1e63c-kube-api-access-tlvvj\") pod \"certified-operators-6rxsg\" (UID: \"4333e88f-8502-46f4-9639-7af62ff1e63c\") " pod="openshift-marketplace/certified-operators-6rxsg" Feb 26 21:59:27 crc kubenswrapper[4910]: E0226 21:59:27.117278 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:27.617261861 +0000 UTC m=+252.696752402 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.213506 4910 ???:1] "http: TLS handshake error from 192.168.126.11:50728: no serving certificate available for the kubelet" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.214227 4910 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.219666 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4333e88f-8502-46f4-9639-7af62ff1e63c-utilities\") pod \"certified-operators-6rxsg\" (UID: \"4333e88f-8502-46f4-9639-7af62ff1e63c\") " pod="openshift-marketplace/certified-operators-6rxsg" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.219726 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4333e88f-8502-46f4-9639-7af62ff1e63c-catalog-content\") pod \"certified-operators-6rxsg\" (UID: \"4333e88f-8502-46f4-9639-7af62ff1e63c\") " pod="openshift-marketplace/certified-operators-6rxsg" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.219791 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlvvj\" (UniqueName: \"kubernetes.io/projected/4333e88f-8502-46f4-9639-7af62ff1e63c-kube-api-access-tlvvj\") pod \"certified-operators-6rxsg\" (UID: 
\"4333e88f-8502-46f4-9639-7af62ff1e63c\") " pod="openshift-marketplace/certified-operators-6rxsg" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.219817 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:27 crc kubenswrapper[4910]: E0226 21:59:27.220070 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:27.720057621 +0000 UTC m=+252.799548162 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.220601 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4333e88f-8502-46f4-9639-7af62ff1e63c-utilities\") pod \"certified-operators-6rxsg\" (UID: \"4333e88f-8502-46f4-9639-7af62ff1e63c\") " pod="openshift-marketplace/certified-operators-6rxsg" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.220791 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4333e88f-8502-46f4-9639-7af62ff1e63c-catalog-content\") pod 
\"certified-operators-6rxsg\" (UID: \"4333e88f-8502-46f4-9639-7af62ff1e63c\") " pod="openshift-marketplace/certified-operators-6rxsg" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.229881 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ggtxj"] Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.230902 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ggtxj" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.243659 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ggtxj"] Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.243841 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.293495 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlvvj\" (UniqueName: \"kubernetes.io/projected/4333e88f-8502-46f4-9639-7af62ff1e63c-kube-api-access-tlvvj\") pod \"certified-operators-6rxsg\" (UID: \"4333e88f-8502-46f4-9639-7af62ff1e63c\") " pod="openshift-marketplace/certified-operators-6rxsg" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.299921 4910 patch_prober.go:28] interesting pod/router-default-5444994796-w8g2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 21:59:27 crc kubenswrapper[4910]: [-]has-synced failed: reason withheld Feb 26 21:59:27 crc kubenswrapper[4910]: [+]process-running ok Feb 26 21:59:27 crc kubenswrapper[4910]: healthz check failed Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.299973 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w8g2c" podUID="ee0c3a2c-59c9-4f63-93c9-94c498a8d065" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.321466 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.329548 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8d202d1-b4f6-4bc1-b633-56ba90788979-utilities\") pod \"community-operators-ggtxj\" (UID: \"a8d202d1-b4f6-4bc1-b633-56ba90788979\") " pod="openshift-marketplace/community-operators-ggtxj" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.329616 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8d202d1-b4f6-4bc1-b633-56ba90788979-catalog-content\") pod \"community-operators-ggtxj\" (UID: \"a8d202d1-b4f6-4bc1-b633-56ba90788979\") " pod="openshift-marketplace/community-operators-ggtxj" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.329661 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvqdw\" (UniqueName: \"kubernetes.io/projected/a8d202d1-b4f6-4bc1-b633-56ba90788979-kube-api-access-bvqdw\") pod \"community-operators-ggtxj\" (UID: \"a8d202d1-b4f6-4bc1-b633-56ba90788979\") " pod="openshift-marketplace/community-operators-ggtxj" Feb 26 21:59:27 crc kubenswrapper[4910]: E0226 21:59:27.330249 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:27.830234144 +0000 UTC m=+252.909724685 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.401437 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bw8qw" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.407639 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6rxsg" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.426358 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2575w"] Feb 26 21:59:27 crc kubenswrapper[4910]: E0226 21:59:27.426546 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbbce4f0-e239-41ed-98b6-b5b84a303b34" containerName="controller-manager" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.426560 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbbce4f0-e239-41ed-98b6-b5b84a303b34" containerName="controller-manager" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.426640 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbbce4f0-e239-41ed-98b6-b5b84a303b34" containerName="controller-manager" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.427337 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2575w" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.432796 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.432829 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8d202d1-b4f6-4bc1-b633-56ba90788979-utilities\") pod \"community-operators-ggtxj\" (UID: \"a8d202d1-b4f6-4bc1-b633-56ba90788979\") " pod="openshift-marketplace/community-operators-ggtxj" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.432859 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8d202d1-b4f6-4bc1-b633-56ba90788979-catalog-content\") pod \"community-operators-ggtxj\" (UID: \"a8d202d1-b4f6-4bc1-b633-56ba90788979\") " pod="openshift-marketplace/community-operators-ggtxj" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.432882 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvqdw\" (UniqueName: \"kubernetes.io/projected/a8d202d1-b4f6-4bc1-b633-56ba90788979-kube-api-access-bvqdw\") pod \"community-operators-ggtxj\" (UID: \"a8d202d1-b4f6-4bc1-b633-56ba90788979\") " pod="openshift-marketplace/community-operators-ggtxj" Feb 26 21:59:27 crc kubenswrapper[4910]: E0226 21:59:27.433373 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-26 21:59:27.933363095 +0000 UTC m=+253.012853636 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.433796 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8d202d1-b4f6-4bc1-b633-56ba90788979-utilities\") pod \"community-operators-ggtxj\" (UID: \"a8d202d1-b4f6-4bc1-b633-56ba90788979\") " pod="openshift-marketplace/community-operators-ggtxj" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.434007 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8d202d1-b4f6-4bc1-b633-56ba90788979-catalog-content\") pod \"community-operators-ggtxj\" (UID: \"a8d202d1-b4f6-4bc1-b633-56ba90788979\") " pod="openshift-marketplace/community-operators-ggtxj" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.453340 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2575w"] Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.454550 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvqdw\" (UniqueName: \"kubernetes.io/projected/a8d202d1-b4f6-4bc1-b633-56ba90788979-kube-api-access-bvqdw\") pod \"community-operators-ggtxj\" (UID: \"a8d202d1-b4f6-4bc1-b633-56ba90788979\") " pod="openshift-marketplace/community-operators-ggtxj" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.535220 4910 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbbce4f0-e239-41ed-98b6-b5b84a303b34-config\") pod \"dbbce4f0-e239-41ed-98b6-b5b84a303b34\" (UID: \"dbbce4f0-e239-41ed-98b6-b5b84a303b34\") " Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.535642 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grtc8\" (UniqueName: \"kubernetes.io/projected/dbbce4f0-e239-41ed-98b6-b5b84a303b34-kube-api-access-grtc8\") pod \"dbbce4f0-e239-41ed-98b6-b5b84a303b34\" (UID: \"dbbce4f0-e239-41ed-98b6-b5b84a303b34\") " Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.535739 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dbbce4f0-e239-41ed-98b6-b5b84a303b34-client-ca\") pod \"dbbce4f0-e239-41ed-98b6-b5b84a303b34\" (UID: \"dbbce4f0-e239-41ed-98b6-b5b84a303b34\") " Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.535784 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbbce4f0-e239-41ed-98b6-b5b84a303b34-proxy-ca-bundles\") pod \"dbbce4f0-e239-41ed-98b6-b5b84a303b34\" (UID: \"dbbce4f0-e239-41ed-98b6-b5b84a303b34\") " Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.536064 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.536093 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbbce4f0-e239-41ed-98b6-b5b84a303b34-serving-cert\") pod 
\"dbbce4f0-e239-41ed-98b6-b5b84a303b34\" (UID: \"dbbce4f0-e239-41ed-98b6-b5b84a303b34\") " Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.536323 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f51dcf73-fbc7-4a90-849c-448ed9e540f9-catalog-content\") pod \"certified-operators-2575w\" (UID: \"f51dcf73-fbc7-4a90-849c-448ed9e540f9\") " pod="openshift-marketplace/certified-operators-2575w" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.536148 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbbce4f0-e239-41ed-98b6-b5b84a303b34-config" (OuterVolumeSpecName: "config") pod "dbbce4f0-e239-41ed-98b6-b5b84a303b34" (UID: "dbbce4f0-e239-41ed-98b6-b5b84a303b34"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.536303 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbbce4f0-e239-41ed-98b6-b5b84a303b34-client-ca" (OuterVolumeSpecName: "client-ca") pod "dbbce4f0-e239-41ed-98b6-b5b84a303b34" (UID: "dbbce4f0-e239-41ed-98b6-b5b84a303b34"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:59:27 crc kubenswrapper[4910]: E0226 21:59:27.536395 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:28.036376492 +0000 UTC m=+253.115867033 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.536485 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xs9j\" (UniqueName: \"kubernetes.io/projected/f51dcf73-fbc7-4a90-849c-448ed9e540f9-kube-api-access-5xs9j\") pod \"certified-operators-2575w\" (UID: \"f51dcf73-fbc7-4a90-849c-448ed9e540f9\") " pod="openshift-marketplace/certified-operators-2575w" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.536549 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.536647 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f51dcf73-fbc7-4a90-849c-448ed9e540f9-utilities\") pod \"certified-operators-2575w\" (UID: \"f51dcf73-fbc7-4a90-849c-448ed9e540f9\") " pod="openshift-marketplace/certified-operators-2575w" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.536553 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbbce4f0-e239-41ed-98b6-b5b84a303b34-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod 
"dbbce4f0-e239-41ed-98b6-b5b84a303b34" (UID: "dbbce4f0-e239-41ed-98b6-b5b84a303b34"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:59:27 crc kubenswrapper[4910]: E0226 21:59:27.536833 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:28.036820664 +0000 UTC m=+253.116311205 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.537840 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbbce4f0-e239-41ed-98b6-b5b84a303b34-config\") on node \"crc\" DevicePath \"\"" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.537885 4910 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dbbce4f0-e239-41ed-98b6-b5b84a303b34-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.537895 4910 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbbce4f0-e239-41ed-98b6-b5b84a303b34-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.544990 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/dbbce4f0-e239-41ed-98b6-b5b84a303b34-kube-api-access-grtc8" (OuterVolumeSpecName: "kube-api-access-grtc8") pod "dbbce4f0-e239-41ed-98b6-b5b84a303b34" (UID: "dbbce4f0-e239-41ed-98b6-b5b84a303b34"). InnerVolumeSpecName "kube-api-access-grtc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.545400 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbbce4f0-e239-41ed-98b6-b5b84a303b34-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dbbce4f0-e239-41ed-98b6-b5b84a303b34" (UID: "dbbce4f0-e239-41ed-98b6-b5b84a303b34"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.573559 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ggtxj" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.634981 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hvjfk"] Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.636115 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hvjfk" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.639919 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.640060 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f51dcf73-fbc7-4a90-849c-448ed9e540f9-catalog-content\") pod \"certified-operators-2575w\" (UID: \"f51dcf73-fbc7-4a90-849c-448ed9e540f9\") " pod="openshift-marketplace/certified-operators-2575w" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.640099 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xs9j\" (UniqueName: \"kubernetes.io/projected/f51dcf73-fbc7-4a90-849c-448ed9e540f9-kube-api-access-5xs9j\") pod \"certified-operators-2575w\" (UID: \"f51dcf73-fbc7-4a90-849c-448ed9e540f9\") " pod="openshift-marketplace/certified-operators-2575w" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.640146 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f51dcf73-fbc7-4a90-849c-448ed9e540f9-utilities\") pod \"certified-operators-2575w\" (UID: \"f51dcf73-fbc7-4a90-849c-448ed9e540f9\") " pod="openshift-marketplace/certified-operators-2575w" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.640208 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grtc8\" (UniqueName: \"kubernetes.io/projected/dbbce4f0-e239-41ed-98b6-b5b84a303b34-kube-api-access-grtc8\") on node \"crc\" DevicePath \"\"" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 
21:59:27.640219 4910 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbbce4f0-e239-41ed-98b6-b5b84a303b34-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.640862 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f51dcf73-fbc7-4a90-849c-448ed9e540f9-utilities\") pod \"certified-operators-2575w\" (UID: \"f51dcf73-fbc7-4a90-849c-448ed9e540f9\") " pod="openshift-marketplace/certified-operators-2575w" Feb 26 21:59:27 crc kubenswrapper[4910]: E0226 21:59:27.640884 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:28.140865571 +0000 UTC m=+253.220356112 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.642319 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f51dcf73-fbc7-4a90-849c-448ed9e540f9-catalog-content\") pod \"certified-operators-2575w\" (UID: \"f51dcf73-fbc7-4a90-849c-448ed9e540f9\") " pod="openshift-marketplace/certified-operators-2575w" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.651542 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hvjfk"] 
Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.668208 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xs9j\" (UniqueName: \"kubernetes.io/projected/f51dcf73-fbc7-4a90-849c-448ed9e540f9-kube-api-access-5xs9j\") pod \"certified-operators-2575w\" (UID: \"f51dcf73-fbc7-4a90-849c-448ed9e540f9\") " pod="openshift-marketplace/certified-operators-2575w" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.717514 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6rxsg"] Feb 26 21:59:27 crc kubenswrapper[4910]: W0226 21:59:27.733714 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4333e88f_8502_46f4_9639_7af62ff1e63c.slice/crio-c5f2d869fc8976ebbe3a0b3ede784c971dc046527d9108e7b49129e7c237f407 WatchSource:0}: Error finding container c5f2d869fc8976ebbe3a0b3ede784c971dc046527d9108e7b49129e7c237f407: Status 404 returned error can't find the container with id c5f2d869fc8976ebbe3a0b3ede784c971dc046527d9108e7b49129e7c237f407 Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.741214 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.741263 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84c1eb30-f57d-4387-bc3f-deae490cdc42-catalog-content\") pod \"community-operators-hvjfk\" (UID: \"84c1eb30-f57d-4387-bc3f-deae490cdc42\") " pod="openshift-marketplace/community-operators-hvjfk" Feb 26 21:59:27 
crc kubenswrapper[4910]: I0226 21:59:27.741287 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv27g\" (UniqueName: \"kubernetes.io/projected/84c1eb30-f57d-4387-bc3f-deae490cdc42-kube-api-access-rv27g\") pod \"community-operators-hvjfk\" (UID: \"84c1eb30-f57d-4387-bc3f-deae490cdc42\") " pod="openshift-marketplace/community-operators-hvjfk" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.741315 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84c1eb30-f57d-4387-bc3f-deae490cdc42-utilities\") pod \"community-operators-hvjfk\" (UID: \"84c1eb30-f57d-4387-bc3f-deae490cdc42\") " pod="openshift-marketplace/community-operators-hvjfk" Feb 26 21:59:27 crc kubenswrapper[4910]: E0226 21:59:27.741605 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:28.241593843 +0000 UTC m=+253.321084384 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.758632 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2575w" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.829573 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ggtxj"] Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.845376 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.845745 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84c1eb30-f57d-4387-bc3f-deae490cdc42-catalog-content\") pod \"community-operators-hvjfk\" (UID: \"84c1eb30-f57d-4387-bc3f-deae490cdc42\") " pod="openshift-marketplace/community-operators-hvjfk" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.845809 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv27g\" (UniqueName: \"kubernetes.io/projected/84c1eb30-f57d-4387-bc3f-deae490cdc42-kube-api-access-rv27g\") pod \"community-operators-hvjfk\" (UID: \"84c1eb30-f57d-4387-bc3f-deae490cdc42\") " pod="openshift-marketplace/community-operators-hvjfk" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.845850 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84c1eb30-f57d-4387-bc3f-deae490cdc42-utilities\") pod \"community-operators-hvjfk\" (UID: \"84c1eb30-f57d-4387-bc3f-deae490cdc42\") " pod="openshift-marketplace/community-operators-hvjfk" Feb 26 21:59:27 crc kubenswrapper[4910]: E0226 21:59:27.847667 4910 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:28.347638066 +0000 UTC m=+253.427128607 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.847703 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84c1eb30-f57d-4387-bc3f-deae490cdc42-catalog-content\") pod \"community-operators-hvjfk\" (UID: \"84c1eb30-f57d-4387-bc3f-deae490cdc42\") " pod="openshift-marketplace/community-operators-hvjfk" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.851150 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84c1eb30-f57d-4387-bc3f-deae490cdc42-utilities\") pod \"community-operators-hvjfk\" (UID: \"84c1eb30-f57d-4387-bc3f-deae490cdc42\") " pod="openshift-marketplace/community-operators-hvjfk" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.866936 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv27g\" (UniqueName: \"kubernetes.io/projected/84c1eb30-f57d-4387-bc3f-deae490cdc42-kube-api-access-rv27g\") pod \"community-operators-hvjfk\" (UID: \"84c1eb30-f57d-4387-bc3f-deae490cdc42\") " pod="openshift-marketplace/community-operators-hvjfk" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.913361 4910 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8h9hc" event={"ID":"90cb6740-847f-435b-a38f-6a199cd2a41d","Type":"ContainerStarted","Data":"af4f21eeb16e283d08a4194aa5e1ea1373cf9c743a7900ca1ece05452a9b1f86"} Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.913675 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8h9hc" event={"ID":"90cb6740-847f-435b-a38f-6a199cd2a41d","Type":"ContainerStarted","Data":"bc4aa978e7e9cf84dcf89f668b2f2ea55dcb76fd2a6021df5f78a49205bfc0dd"} Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.918073 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bw8qw" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.918121 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bw8qw" event={"ID":"dbbce4f0-e239-41ed-98b6-b5b84a303b34","Type":"ContainerDied","Data":"6051b30d93b2851ee01c32ce89025a575a2b9b827036d07e737817e2806705ef"} Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.918193 4910 scope.go:117] "RemoveContainer" containerID="6051b30d93b2851ee01c32ce89025a575a2b9b827036d07e737817e2806705ef" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.918019 4910 generic.go:334] "Generic (PLEG): container finished" podID="dbbce4f0-e239-41ed-98b6-b5b84a303b34" containerID="6051b30d93b2851ee01c32ce89025a575a2b9b827036d07e737817e2806705ef" exitCode=0 Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.918302 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bw8qw" event={"ID":"dbbce4f0-e239-41ed-98b6-b5b84a303b34","Type":"ContainerDied","Data":"0cb0209380424b9c8394495836b187579edfa85d74d753e293ad27720b70bde2"} Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.922222 4910 generic.go:334] "Generic (PLEG): container finished" 
podID="e17437b5-ba61-4630-83bd-8436fcbd659f" containerID="aa2749381068e81de07ab37b39cc64029eea43a92baf9050bae50aa981b7eb1c" exitCode=0 Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.922309 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535705-jc29q" event={"ID":"e17437b5-ba61-4630-83bd-8436fcbd659f","Type":"ContainerDied","Data":"aa2749381068e81de07ab37b39cc64029eea43a92baf9050bae50aa981b7eb1c"} Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.930619 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ggtxj" event={"ID":"a8d202d1-b4f6-4bc1-b633-56ba90788979","Type":"ContainerStarted","Data":"df4d75697105f38d5778cc96e33ec269c0e1101d239d45c1a3b012e1dd0d71ab"} Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.934698 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-8h9hc" podStartSLOduration=9.934686566 podStartE2EDuration="9.934686566s" podCreationTimestamp="2026-02-26 21:59:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:27.930989699 +0000 UTC m=+253.010480240" watchObservedRunningTime="2026-02-26 21:59:27.934686566 +0000 UTC m=+253.014177107" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.943006 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rxsg" event={"ID":"4333e88f-8502-46f4-9639-7af62ff1e63c","Type":"ContainerStarted","Data":"c5f2d869fc8976ebbe3a0b3ede784c971dc046527d9108e7b49129e7c237f407"} Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.944037 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2" podUID="63954f23-8000-4ada-8d5d-67297b7c26f6" 
containerName="route-controller-manager" containerID="cri-o://d05352fe773765c6c5d2e36143667134c4939011bdf4585cfc4a810a7ca9634c" gracePeriod=30 Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.948700 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-q2jtw" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.949006 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:27 crc kubenswrapper[4910]: E0226 21:59:27.949614 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:28.449601143 +0000 UTC m=+253.529091684 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.959831 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hvjfk" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.963234 4910 scope.go:117] "RemoveContainer" containerID="6051b30d93b2851ee01c32ce89025a575a2b9b827036d07e737817e2806705ef" Feb 26 21:59:27 crc kubenswrapper[4910]: E0226 21:59:27.963858 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6051b30d93b2851ee01c32ce89025a575a2b9b827036d07e737817e2806705ef\": container with ID starting with 6051b30d93b2851ee01c32ce89025a575a2b9b827036d07e737817e2806705ef not found: ID does not exist" containerID="6051b30d93b2851ee01c32ce89025a575a2b9b827036d07e737817e2806705ef" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.963895 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6051b30d93b2851ee01c32ce89025a575a2b9b827036d07e737817e2806705ef"} err="failed to get container status \"6051b30d93b2851ee01c32ce89025a575a2b9b827036d07e737817e2806705ef\": rpc error: code = NotFound desc = could not find container \"6051b30d93b2851ee01c32ce89025a575a2b9b827036d07e737817e2806705ef\": container with ID starting with 6051b30d93b2851ee01c32ce89025a575a2b9b827036d07e737817e2806705ef not found: ID does not exist" Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.976255 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2575w"] Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.989231 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bw8qw"] Feb 26 21:59:27 crc kubenswrapper[4910]: I0226 21:59:27.998487 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bw8qw"] Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.052841 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:28 crc kubenswrapper[4910]: E0226 21:59:28.053013 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 21:59:28.552987591 +0000 UTC m=+253.632478132 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.053573 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:28 crc kubenswrapper[4910]: E0226 21:59:28.054271 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 21:59:28.554263198 +0000 UTC m=+253.633753739 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-298fw" (UID: "b050f320-6f26-4c79-88cc-ceb481369169") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 21:59:28 crc kubenswrapper[4910]: W0226 21:59:28.099783 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf51dcf73_fbc7_4a90_849c_448ed9e540f9.slice/crio-16fdf5e6405c773cd198c50c65615aacf557b7570fde2d8c6630cc6334a4ee03 WatchSource:0}: Error finding container 16fdf5e6405c773cd198c50c65615aacf557b7570fde2d8c6630cc6334a4ee03: Status 404 returned error can't find the container with id 16fdf5e6405c773cd198c50c65615aacf557b7570fde2d8c6630cc6334a4ee03 Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.155052 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.159718 4910 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-26T21:59:27.214254775Z","Handler":null,"Name":""} Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.164335 4910 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 
21:59:28.164421 4910 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.173001 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hvjfk"] Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.174930 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 21:59:28 crc kubenswrapper[4910]: W0226 21:59:28.192101 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84c1eb30_f57d_4387_bc3f_deae490cdc42.slice/crio-c23e35e9beec3bf1cb1c7131e3ac24cf78ba9e46208494904a344d9dcccb71f8 WatchSource:0}: Error finding container c23e35e9beec3bf1cb1c7131e3ac24cf78ba9e46208494904a344d9dcccb71f8: Status 404 returned error can't find the container with id c23e35e9beec3bf1cb1c7131e3ac24cf78ba9e46208494904a344d9dcccb71f8 Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.259016 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.264184 4910 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.264231 4910 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.281389 4910 patch_prober.go:28] interesting pod/router-default-5444994796-w8g2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 21:59:28 crc kubenswrapper[4910]: [-]has-synced failed: reason withheld Feb 26 21:59:28 crc kubenswrapper[4910]: [+]process-running ok Feb 26 21:59:28 crc kubenswrapper[4910]: healthz check failed Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.281456 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w8g2c" podUID="ee0c3a2c-59c9-4f63-93c9-94c498a8d065" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.286855 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-298fw\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:28 crc 
kubenswrapper[4910]: I0226 21:59:28.346905 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.347582 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-fc6d686bb-gzxn6"] Feb 26 21:59:28 crc kubenswrapper[4910]: E0226 21:59:28.347903 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63954f23-8000-4ada-8d5d-67297b7c26f6" containerName="route-controller-manager" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.347924 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="63954f23-8000-4ada-8d5d-67297b7c26f6" containerName="route-controller-manager" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.348105 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="63954f23-8000-4ada-8d5d-67297b7c26f6" containerName="route-controller-manager" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.350841 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-fc6d686bb-gzxn6" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.352372 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-fc6d686bb-gzxn6"] Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.353389 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.354400 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.354454 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.355737 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.357207 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.364650 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.366828 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.391422 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.461889 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63954f23-8000-4ada-8d5d-67297b7c26f6-client-ca\") pod \"63954f23-8000-4ada-8d5d-67297b7c26f6\" (UID: \"63954f23-8000-4ada-8d5d-67297b7c26f6\") " Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.462015 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63954f23-8000-4ada-8d5d-67297b7c26f6-config\") pod \"63954f23-8000-4ada-8d5d-67297b7c26f6\" (UID: \"63954f23-8000-4ada-8d5d-67297b7c26f6\") " Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.462089 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63954f23-8000-4ada-8d5d-67297b7c26f6-serving-cert\") pod \"63954f23-8000-4ada-8d5d-67297b7c26f6\" (UID: \"63954f23-8000-4ada-8d5d-67297b7c26f6\") " Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.462180 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65br6\" (UniqueName: \"kubernetes.io/projected/63954f23-8000-4ada-8d5d-67297b7c26f6-kube-api-access-65br6\") pod \"63954f23-8000-4ada-8d5d-67297b7c26f6\" (UID: \"63954f23-8000-4ada-8d5d-67297b7c26f6\") " Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.462452 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f19e4dc-70a6-4ba2-bf17-6092288974a3-config\") pod \"controller-manager-fc6d686bb-gzxn6\" (UID: \"0f19e4dc-70a6-4ba2-bf17-6092288974a3\") " pod="openshift-controller-manager/controller-manager-fc6d686bb-gzxn6" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.462495 4910 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0f19e4dc-70a6-4ba2-bf17-6092288974a3-proxy-ca-bundles\") pod \"controller-manager-fc6d686bb-gzxn6\" (UID: \"0f19e4dc-70a6-4ba2-bf17-6092288974a3\") " pod="openshift-controller-manager/controller-manager-fc6d686bb-gzxn6" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.462577 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9wqd\" (UniqueName: \"kubernetes.io/projected/0f19e4dc-70a6-4ba2-bf17-6092288974a3-kube-api-access-t9wqd\") pod \"controller-manager-fc6d686bb-gzxn6\" (UID: \"0f19e4dc-70a6-4ba2-bf17-6092288974a3\") " pod="openshift-controller-manager/controller-manager-fc6d686bb-gzxn6" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.462650 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f19e4dc-70a6-4ba2-bf17-6092288974a3-client-ca\") pod \"controller-manager-fc6d686bb-gzxn6\" (UID: \"0f19e4dc-70a6-4ba2-bf17-6092288974a3\") " pod="openshift-controller-manager/controller-manager-fc6d686bb-gzxn6" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.462684 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63954f23-8000-4ada-8d5d-67297b7c26f6-client-ca" (OuterVolumeSpecName: "client-ca") pod "63954f23-8000-4ada-8d5d-67297b7c26f6" (UID: "63954f23-8000-4ada-8d5d-67297b7c26f6"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.462887 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f19e4dc-70a6-4ba2-bf17-6092288974a3-serving-cert\") pod \"controller-manager-fc6d686bb-gzxn6\" (UID: \"0f19e4dc-70a6-4ba2-bf17-6092288974a3\") " pod="openshift-controller-manager/controller-manager-fc6d686bb-gzxn6" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.462972 4910 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63954f23-8000-4ada-8d5d-67297b7c26f6-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.463536 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63954f23-8000-4ada-8d5d-67297b7c26f6-config" (OuterVolumeSpecName: "config") pod "63954f23-8000-4ada-8d5d-67297b7c26f6" (UID: "63954f23-8000-4ada-8d5d-67297b7c26f6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.467979 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63954f23-8000-4ada-8d5d-67297b7c26f6-kube-api-access-65br6" (OuterVolumeSpecName: "kube-api-access-65br6") pod "63954f23-8000-4ada-8d5d-67297b7c26f6" (UID: "63954f23-8000-4ada-8d5d-67297b7c26f6"). InnerVolumeSpecName "kube-api-access-65br6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.468371 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63954f23-8000-4ada-8d5d-67297b7c26f6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "63954f23-8000-4ada-8d5d-67297b7c26f6" (UID: "63954f23-8000-4ada-8d5d-67297b7c26f6"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.520824 4910 ???:1] "http: TLS handshake error from 192.168.126.11:50736: no serving certificate available for the kubelet" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.565320 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f19e4dc-70a6-4ba2-bf17-6092288974a3-config\") pod \"controller-manager-fc6d686bb-gzxn6\" (UID: \"0f19e4dc-70a6-4ba2-bf17-6092288974a3\") " pod="openshift-controller-manager/controller-manager-fc6d686bb-gzxn6" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.565367 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0f19e4dc-70a6-4ba2-bf17-6092288974a3-proxy-ca-bundles\") pod \"controller-manager-fc6d686bb-gzxn6\" (UID: \"0f19e4dc-70a6-4ba2-bf17-6092288974a3\") " pod="openshift-controller-manager/controller-manager-fc6d686bb-gzxn6" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.565408 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9wqd\" (UniqueName: \"kubernetes.io/projected/0f19e4dc-70a6-4ba2-bf17-6092288974a3-kube-api-access-t9wqd\") pod \"controller-manager-fc6d686bb-gzxn6\" (UID: \"0f19e4dc-70a6-4ba2-bf17-6092288974a3\") " pod="openshift-controller-manager/controller-manager-fc6d686bb-gzxn6" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.565452 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f19e4dc-70a6-4ba2-bf17-6092288974a3-client-ca\") pod \"controller-manager-fc6d686bb-gzxn6\" (UID: \"0f19e4dc-70a6-4ba2-bf17-6092288974a3\") " pod="openshift-controller-manager/controller-manager-fc6d686bb-gzxn6" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 
21:59:28.565501 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f19e4dc-70a6-4ba2-bf17-6092288974a3-serving-cert\") pod \"controller-manager-fc6d686bb-gzxn6\" (UID: \"0f19e4dc-70a6-4ba2-bf17-6092288974a3\") " pod="openshift-controller-manager/controller-manager-fc6d686bb-gzxn6" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.565543 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65br6\" (UniqueName: \"kubernetes.io/projected/63954f23-8000-4ada-8d5d-67297b7c26f6-kube-api-access-65br6\") on node \"crc\" DevicePath \"\"" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.565555 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63954f23-8000-4ada-8d5d-67297b7c26f6-config\") on node \"crc\" DevicePath \"\"" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.565564 4910 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63954f23-8000-4ada-8d5d-67297b7c26f6-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.566623 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f19e4dc-70a6-4ba2-bf17-6092288974a3-config\") pod \"controller-manager-fc6d686bb-gzxn6\" (UID: \"0f19e4dc-70a6-4ba2-bf17-6092288974a3\") " pod="openshift-controller-manager/controller-manager-fc6d686bb-gzxn6" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.567642 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0f19e4dc-70a6-4ba2-bf17-6092288974a3-proxy-ca-bundles\") pod \"controller-manager-fc6d686bb-gzxn6\" (UID: \"0f19e4dc-70a6-4ba2-bf17-6092288974a3\") " pod="openshift-controller-manager/controller-manager-fc6d686bb-gzxn6" Feb 26 21:59:28 crc 
kubenswrapper[4910]: I0226 21:59:28.568700 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f19e4dc-70a6-4ba2-bf17-6092288974a3-client-ca\") pod \"controller-manager-fc6d686bb-gzxn6\" (UID: \"0f19e4dc-70a6-4ba2-bf17-6092288974a3\") " pod="openshift-controller-manager/controller-manager-fc6d686bb-gzxn6" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.573109 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f19e4dc-70a6-4ba2-bf17-6092288974a3-serving-cert\") pod \"controller-manager-fc6d686bb-gzxn6\" (UID: \"0f19e4dc-70a6-4ba2-bf17-6092288974a3\") " pod="openshift-controller-manager/controller-manager-fc6d686bb-gzxn6" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.580272 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9wqd\" (UniqueName: \"kubernetes.io/projected/0f19e4dc-70a6-4ba2-bf17-6092288974a3-kube-api-access-t9wqd\") pod \"controller-manager-fc6d686bb-gzxn6\" (UID: \"0f19e4dc-70a6-4ba2-bf17-6092288974a3\") " pod="openshift-controller-manager/controller-manager-fc6d686bb-gzxn6" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.671882 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-fc6d686bb-gzxn6" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.818479 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-298fw"] Feb 26 21:59:28 crc kubenswrapper[4910]: W0226 21:59:28.835616 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb050f320_6f26_4c79_88cc_ceb481369169.slice/crio-8234023c0abae29e4e9fa47f6f2c2ceefa0fce54555f1c0cd57f5ac166c85d92 WatchSource:0}: Error finding container 8234023c0abae29e4e9fa47f6f2c2ceefa0fce54555f1c0cd57f5ac166c85d92: Status 404 returned error can't find the container with id 8234023c0abae29e4e9fa47f6f2c2ceefa0fce54555f1c0cd57f5ac166c85d92 Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.867998 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-fc6d686bb-gzxn6"] Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.950848 4910 generic.go:334] "Generic (PLEG): container finished" podID="4333e88f-8502-46f4-9639-7af62ff1e63c" containerID="efc66f02a1e864fd81a5c43277f25aa309d62f4f363e15736525e4e8fdd88ad2" exitCode=0 Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.950902 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rxsg" event={"ID":"4333e88f-8502-46f4-9639-7af62ff1e63c","Type":"ContainerDied","Data":"efc66f02a1e864fd81a5c43277f25aa309d62f4f363e15736525e4e8fdd88ad2"} Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.952119 4910 generic.go:334] "Generic (PLEG): container finished" podID="84c1eb30-f57d-4387-bc3f-deae490cdc42" containerID="c6a83ab5b5dfa996ba303baec18a016d1c0905239e812d8c95e5e934d344b037" exitCode=0 Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.952198 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-hvjfk" event={"ID":"84c1eb30-f57d-4387-bc3f-deae490cdc42","Type":"ContainerDied","Data":"c6a83ab5b5dfa996ba303baec18a016d1c0905239e812d8c95e5e934d344b037"} Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.952222 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvjfk" event={"ID":"84c1eb30-f57d-4387-bc3f-deae490cdc42","Type":"ContainerStarted","Data":"c23e35e9beec3bf1cb1c7131e3ac24cf78ba9e46208494904a344d9dcccb71f8"} Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.959838 4910 generic.go:334] "Generic (PLEG): container finished" podID="63954f23-8000-4ada-8d5d-67297b7c26f6" containerID="d05352fe773765c6c5d2e36143667134c4939011bdf4585cfc4a810a7ca9634c" exitCode=0 Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.959941 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2" event={"ID":"63954f23-8000-4ada-8d5d-67297b7c26f6","Type":"ContainerDied","Data":"d05352fe773765c6c5d2e36143667134c4939011bdf4585cfc4a810a7ca9634c"} Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.959987 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2" event={"ID":"63954f23-8000-4ada-8d5d-67297b7c26f6","Type":"ContainerDied","Data":"cf9d868c1317197717720d82a88add65c9f7966a288630fdb7f8fdcbb550e1f4"} Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.960007 4910 scope.go:117] "RemoveContainer" containerID="d05352fe773765c6c5d2e36143667134c4939011bdf4585cfc4a810a7ca9634c" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.961749 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2" Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.961877 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fc6d686bb-gzxn6" event={"ID":"0f19e4dc-70a6-4ba2-bf17-6092288974a3","Type":"ContainerStarted","Data":"35a3fcf35cf574c43852a3926e3aa9b5c8786ef17859e34d90d35ca6af3f10f0"} Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.963509 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-298fw" event={"ID":"b050f320-6f26-4c79-88cc-ceb481369169","Type":"ContainerStarted","Data":"8234023c0abae29e4e9fa47f6f2c2ceefa0fce54555f1c0cd57f5ac166c85d92"} Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.978588 4910 generic.go:334] "Generic (PLEG): container finished" podID="f51dcf73-fbc7-4a90-849c-448ed9e540f9" containerID="1d4e86242deaf04a5b239963c9738b992b2d622834721f9c9ec5eaf0dff3376d" exitCode=0 Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.978848 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2575w" event={"ID":"f51dcf73-fbc7-4a90-849c-448ed9e540f9","Type":"ContainerDied","Data":"1d4e86242deaf04a5b239963c9738b992b2d622834721f9c9ec5eaf0dff3376d"} Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.978892 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2575w" event={"ID":"f51dcf73-fbc7-4a90-849c-448ed9e540f9","Type":"ContainerStarted","Data":"16fdf5e6405c773cd198c50c65615aacf557b7570fde2d8c6630cc6334a4ee03"} Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.987689 4910 generic.go:334] "Generic (PLEG): container finished" podID="a8d202d1-b4f6-4bc1-b633-56ba90788979" containerID="939ec594fc4cfe6c99a2f7e2e0b03a77cccd9955afbb12c1917fe95c7081d213" exitCode=0 Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 
21:59:28.987810 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ggtxj" event={"ID":"a8d202d1-b4f6-4bc1-b633-56ba90788979","Type":"ContainerDied","Data":"939ec594fc4cfe6c99a2f7e2e0b03a77cccd9955afbb12c1917fe95c7081d213"} Feb 26 21:59:28 crc kubenswrapper[4910]: I0226 21:59:28.999681 4910 scope.go:117] "RemoveContainer" containerID="d05352fe773765c6c5d2e36143667134c4939011bdf4585cfc4a810a7ca9634c" Feb 26 21:59:29 crc kubenswrapper[4910]: E0226 21:59:29.000378 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d05352fe773765c6c5d2e36143667134c4939011bdf4585cfc4a810a7ca9634c\": container with ID starting with d05352fe773765c6c5d2e36143667134c4939011bdf4585cfc4a810a7ca9634c not found: ID does not exist" containerID="d05352fe773765c6c5d2e36143667134c4939011bdf4585cfc4a810a7ca9634c" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.000638 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d05352fe773765c6c5d2e36143667134c4939011bdf4585cfc4a810a7ca9634c"} err="failed to get container status \"d05352fe773765c6c5d2e36143667134c4939011bdf4585cfc4a810a7ca9634c\": rpc error: code = NotFound desc = could not find container \"d05352fe773765c6c5d2e36143667134c4939011bdf4585cfc4a810a7ca9634c\": container with ID starting with d05352fe773765c6c5d2e36143667134c4939011bdf4585cfc4a810a7ca9634c not found: ID does not exist" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.019063 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dp8sv"] Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.019978 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dp8sv" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.021521 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.034981 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dp8sv"] Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.066980 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2"] Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.070150 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrpl2"] Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.087824 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlcjx\" (UniqueName: \"kubernetes.io/projected/d0c0a0be-62f6-4642-aeab-2f08a5cffedb-kube-api-access-tlcjx\") pod \"redhat-marketplace-dp8sv\" (UID: \"d0c0a0be-62f6-4642-aeab-2f08a5cffedb\") " pod="openshift-marketplace/redhat-marketplace-dp8sv" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.087953 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0c0a0be-62f6-4642-aeab-2f08a5cffedb-catalog-content\") pod \"redhat-marketplace-dp8sv\" (UID: \"d0c0a0be-62f6-4642-aeab-2f08a5cffedb\") " pod="openshift-marketplace/redhat-marketplace-dp8sv" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.087981 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0c0a0be-62f6-4642-aeab-2f08a5cffedb-utilities\") pod \"redhat-marketplace-dp8sv\" (UID: 
\"d0c0a0be-62f6-4642-aeab-2f08a5cffedb\") " pod="openshift-marketplace/redhat-marketplace-dp8sv" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.191860 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlcjx\" (UniqueName: \"kubernetes.io/projected/d0c0a0be-62f6-4642-aeab-2f08a5cffedb-kube-api-access-tlcjx\") pod \"redhat-marketplace-dp8sv\" (UID: \"d0c0a0be-62f6-4642-aeab-2f08a5cffedb\") " pod="openshift-marketplace/redhat-marketplace-dp8sv" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.192344 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0c0a0be-62f6-4642-aeab-2f08a5cffedb-catalog-content\") pod \"redhat-marketplace-dp8sv\" (UID: \"d0c0a0be-62f6-4642-aeab-2f08a5cffedb\") " pod="openshift-marketplace/redhat-marketplace-dp8sv" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.192366 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0c0a0be-62f6-4642-aeab-2f08a5cffedb-utilities\") pod \"redhat-marketplace-dp8sv\" (UID: \"d0c0a0be-62f6-4642-aeab-2f08a5cffedb\") " pod="openshift-marketplace/redhat-marketplace-dp8sv" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.193071 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0c0a0be-62f6-4642-aeab-2f08a5cffedb-catalog-content\") pod \"redhat-marketplace-dp8sv\" (UID: \"d0c0a0be-62f6-4642-aeab-2f08a5cffedb\") " pod="openshift-marketplace/redhat-marketplace-dp8sv" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.193575 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0c0a0be-62f6-4642-aeab-2f08a5cffedb-utilities\") pod \"redhat-marketplace-dp8sv\" (UID: \"d0c0a0be-62f6-4642-aeab-2f08a5cffedb\") " 
pod="openshift-marketplace/redhat-marketplace-dp8sv" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.208995 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535705-jc29q" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.217194 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlcjx\" (UniqueName: \"kubernetes.io/projected/d0c0a0be-62f6-4642-aeab-2f08a5cffedb-kube-api-access-tlcjx\") pod \"redhat-marketplace-dp8sv\" (UID: \"d0c0a0be-62f6-4642-aeab-2f08a5cffedb\") " pod="openshift-marketplace/redhat-marketplace-dp8sv" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.281311 4910 patch_prober.go:28] interesting pod/router-default-5444994796-w8g2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 21:59:29 crc kubenswrapper[4910]: [-]has-synced failed: reason withheld Feb 26 21:59:29 crc kubenswrapper[4910]: [+]process-running ok Feb 26 21:59:29 crc kubenswrapper[4910]: healthz check failed Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.281359 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w8g2c" podUID="ee0c3a2c-59c9-4f63-93c9-94c498a8d065" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.293177 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e17437b5-ba61-4630-83bd-8436fcbd659f-config-volume\") pod \"e17437b5-ba61-4630-83bd-8436fcbd659f\" (UID: \"e17437b5-ba61-4630-83bd-8436fcbd659f\") " Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.293224 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/e17437b5-ba61-4630-83bd-8436fcbd659f-secret-volume\") pod \"e17437b5-ba61-4630-83bd-8436fcbd659f\" (UID: \"e17437b5-ba61-4630-83bd-8436fcbd659f\") " Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.293245 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zct4x\" (UniqueName: \"kubernetes.io/projected/e17437b5-ba61-4630-83bd-8436fcbd659f-kube-api-access-zct4x\") pod \"e17437b5-ba61-4630-83bd-8436fcbd659f\" (UID: \"e17437b5-ba61-4630-83bd-8436fcbd659f\") " Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.294522 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e17437b5-ba61-4630-83bd-8436fcbd659f-config-volume" (OuterVolumeSpecName: "config-volume") pod "e17437b5-ba61-4630-83bd-8436fcbd659f" (UID: "e17437b5-ba61-4630-83bd-8436fcbd659f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.300227 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e17437b5-ba61-4630-83bd-8436fcbd659f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e17437b5-ba61-4630-83bd-8436fcbd659f" (UID: "e17437b5-ba61-4630-83bd-8436fcbd659f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.300321 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e17437b5-ba61-4630-83bd-8436fcbd659f-kube-api-access-zct4x" (OuterVolumeSpecName: "kube-api-access-zct4x") pod "e17437b5-ba61-4630-83bd-8436fcbd659f" (UID: "e17437b5-ba61-4630-83bd-8436fcbd659f"). InnerVolumeSpecName "kube-api-access-zct4x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.341791 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dp8sv" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.395121 4910 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e17437b5-ba61-4630-83bd-8436fcbd659f-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.395172 4910 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e17437b5-ba61-4630-83bd-8436fcbd659f-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.395182 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zct4x\" (UniqueName: \"kubernetes.io/projected/e17437b5-ba61-4630-83bd-8436fcbd659f-kube-api-access-zct4x\") on node \"crc\" DevicePath \"\"" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.432682 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s5k6t"] Feb 26 21:59:29 crc kubenswrapper[4910]: E0226 21:59:29.433082 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e17437b5-ba61-4630-83bd-8436fcbd659f" containerName="collect-profiles" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.433094 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="e17437b5-ba61-4630-83bd-8436fcbd659f" containerName="collect-profiles" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.433201 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="e17437b5-ba61-4630-83bd-8436fcbd659f" containerName="collect-profiles" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.433831 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5k6t" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.447379 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5k6t"] Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.565923 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dp8sv"] Feb 26 21:59:29 crc kubenswrapper[4910]: W0226 21:59:29.586326 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0c0a0be_62f6_4642_aeab_2f08a5cffedb.slice/crio-190601bb9091349b4dc597d5df3505f0584b6e281634b1c05905e407d7b88037 WatchSource:0}: Error finding container 190601bb9091349b4dc597d5df3505f0584b6e281634b1c05905e407d7b88037: Status 404 returned error can't find the container with id 190601bb9091349b4dc597d5df3505f0584b6e281634b1c05905e407d7b88037 Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.596911 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8a3a08a-bc54-40b0-a3c1-3da45c9003d3-utilities\") pod \"redhat-marketplace-s5k6t\" (UID: \"b8a3a08a-bc54-40b0-a3c1-3da45c9003d3\") " pod="openshift-marketplace/redhat-marketplace-s5k6t" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.596967 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8a3a08a-bc54-40b0-a3c1-3da45c9003d3-catalog-content\") pod \"redhat-marketplace-s5k6t\" (UID: \"b8a3a08a-bc54-40b0-a3c1-3da45c9003d3\") " pod="openshift-marketplace/redhat-marketplace-s5k6t" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.596990 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwt7v\" (UniqueName: 
\"kubernetes.io/projected/b8a3a08a-bc54-40b0-a3c1-3da45c9003d3-kube-api-access-cwt7v\") pod \"redhat-marketplace-s5k6t\" (UID: \"b8a3a08a-bc54-40b0-a3c1-3da45c9003d3\") " pod="openshift-marketplace/redhat-marketplace-s5k6t" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.661531 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-kz5fx" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.680667 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-kz5fx" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.698631 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8a3a08a-bc54-40b0-a3c1-3da45c9003d3-utilities\") pod \"redhat-marketplace-s5k6t\" (UID: \"b8a3a08a-bc54-40b0-a3c1-3da45c9003d3\") " pod="openshift-marketplace/redhat-marketplace-s5k6t" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.698954 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8a3a08a-bc54-40b0-a3c1-3da45c9003d3-catalog-content\") pod \"redhat-marketplace-s5k6t\" (UID: \"b8a3a08a-bc54-40b0-a3c1-3da45c9003d3\") " pod="openshift-marketplace/redhat-marketplace-s5k6t" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.699040 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwt7v\" (UniqueName: \"kubernetes.io/projected/b8a3a08a-bc54-40b0-a3c1-3da45c9003d3-kube-api-access-cwt7v\") pod \"redhat-marketplace-s5k6t\" (UID: \"b8a3a08a-bc54-40b0-a3c1-3da45c9003d3\") " pod="openshift-marketplace/redhat-marketplace-s5k6t" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.699752 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b8a3a08a-bc54-40b0-a3c1-3da45c9003d3-utilities\") pod \"redhat-marketplace-s5k6t\" (UID: \"b8a3a08a-bc54-40b0-a3c1-3da45c9003d3\") " pod="openshift-marketplace/redhat-marketplace-s5k6t" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.700112 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8a3a08a-bc54-40b0-a3c1-3da45c9003d3-catalog-content\") pod \"redhat-marketplace-s5k6t\" (UID: \"b8a3a08a-bc54-40b0-a3c1-3da45c9003d3\") " pod="openshift-marketplace/redhat-marketplace-s5k6t" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.722874 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwt7v\" (UniqueName: \"kubernetes.io/projected/b8a3a08a-bc54-40b0-a3c1-3da45c9003d3-kube-api-access-cwt7v\") pod \"redhat-marketplace-s5k6t\" (UID: \"b8a3a08a-bc54-40b0-a3c1-3da45c9003d3\") " pod="openshift-marketplace/redhat-marketplace-s5k6t" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.750227 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5k6t" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.817006 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7pl8w" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.909188 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63954f23-8000-4ada-8d5d-67297b7c26f6" path="/var/lib/kubelet/pods/63954f23-8000-4ada-8d5d-67297b7c26f6/volumes" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.910403 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 26 21:59:29 crc kubenswrapper[4910]: I0226 21:59:29.911057 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbbce4f0-e239-41ed-98b6-b5b84a303b34" path="/var/lib/kubelet/pods/dbbce4f0-e239-41ed-98b6-b5b84a303b34/volumes" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.025567 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fc6d686bb-gzxn6" event={"ID":"0f19e4dc-70a6-4ba2-bf17-6092288974a3","Type":"ContainerStarted","Data":"2997f26acd313d3eef99f7dfda2517c77416102cf0d6999768ef1fcbbb5181e9"} Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.025933 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-fc6d686bb-gzxn6" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.030619 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-298fw" event={"ID":"b050f320-6f26-4c79-88cc-ceb481369169","Type":"ContainerStarted","Data":"b9bba1068aaa72ef589dfd5238ccf4e9eeb2f9c6684cd2ac3f58b87608ef01b2"} Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.031201 4910 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.035598 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-fc6d686bb-gzxn6" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.035929 4910 generic.go:334] "Generic (PLEG): container finished" podID="d0c0a0be-62f6-4642-aeab-2f08a5cffedb" containerID="a0fa7765286d128186ff288ec7fd743dc4314157f9c0153f35941ca70421304f" exitCode=0 Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.035973 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dp8sv" event={"ID":"d0c0a0be-62f6-4642-aeab-2f08a5cffedb","Type":"ContainerDied","Data":"a0fa7765286d128186ff288ec7fd743dc4314157f9c0153f35941ca70421304f"} Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.035988 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dp8sv" event={"ID":"d0c0a0be-62f6-4642-aeab-2f08a5cffedb","Type":"ContainerStarted","Data":"190601bb9091349b4dc597d5df3505f0584b6e281634b1c05905e407d7b88037"} Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.046422 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-fc6d686bb-gzxn6" podStartSLOduration=3.046406393 podStartE2EDuration="3.046406393s" podCreationTimestamp="2026-02-26 21:59:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:30.044656193 +0000 UTC m=+255.124146744" watchObservedRunningTime="2026-02-26 21:59:30.046406393 +0000 UTC m=+255.125896934" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.053025 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535705-jc29q" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.053478 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535705-jc29q" event={"ID":"e17437b5-ba61-4630-83bd-8436fcbd659f","Type":"ContainerDied","Data":"d13b09221e64cabe66937ad00a5d1cc73c64702332011953b2398710d7ee513a"} Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.053503 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d13b09221e64cabe66937ad00a5d1cc73c64702332011953b2398710d7ee513a" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.062062 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5k6t"] Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.195687 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-298fw" podStartSLOduration=185.195668077 podStartE2EDuration="3m5.195668077s" podCreationTimestamp="2026-02-26 21:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 21:59:30.111386307 +0000 UTC m=+255.190876858" watchObservedRunningTime="2026-02-26 21:59:30.195668077 +0000 UTC m=+255.275158618" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.196287 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.196885 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.201744 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.215251 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.219534 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.283658 4910 patch_prober.go:28] interesting pod/router-default-5444994796-w8g2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 21:59:30 crc kubenswrapper[4910]: [-]has-synced failed: reason withheld Feb 26 21:59:30 crc kubenswrapper[4910]: [+]process-running ok Feb 26 21:59:30 crc kubenswrapper[4910]: healthz check failed Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.283723 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w8g2c" podUID="ee0c3a2c-59c9-4f63-93c9-94c498a8d065" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.315144 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e8141fe-ffee-4e29-9c55-8991c8226b10-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8e8141fe-ffee-4e29-9c55-8991c8226b10\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.315207 4910 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e8141fe-ffee-4e29-9c55-8991c8226b10-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8e8141fe-ffee-4e29-9c55-8991c8226b10\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.343307 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv"] Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.343924 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.347046 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.350556 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.350732 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.350883 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.350983 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.351176 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.355802 4910 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv"] Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.417249 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfhwx\" (UniqueName: \"kubernetes.io/projected/caa28600-ac81-41ed-8375-9f479b9292a1-kube-api-access-lfhwx\") pod \"route-controller-manager-d5db645bc-f6sdv\" (UID: \"caa28600-ac81-41ed-8375-9f479b9292a1\") " pod="openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.417337 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caa28600-ac81-41ed-8375-9f479b9292a1-config\") pod \"route-controller-manager-d5db645bc-f6sdv\" (UID: \"caa28600-ac81-41ed-8375-9f479b9292a1\") " pod="openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.417358 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/caa28600-ac81-41ed-8375-9f479b9292a1-client-ca\") pod \"route-controller-manager-d5db645bc-f6sdv\" (UID: \"caa28600-ac81-41ed-8375-9f479b9292a1\") " pod="openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.417430 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/caa28600-ac81-41ed-8375-9f479b9292a1-serving-cert\") pod \"route-controller-manager-d5db645bc-f6sdv\" (UID: \"caa28600-ac81-41ed-8375-9f479b9292a1\") " pod="openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.417451 4910 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e8141fe-ffee-4e29-9c55-8991c8226b10-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8e8141fe-ffee-4e29-9c55-8991c8226b10\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.417519 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e8141fe-ffee-4e29-9c55-8991c8226b10-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8e8141fe-ffee-4e29-9c55-8991c8226b10\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.418007 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e8141fe-ffee-4e29-9c55-8991c8226b10-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8e8141fe-ffee-4e29-9c55-8991c8226b10\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.418478 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nwths"] Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.419727 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nwths" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.424119 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.428392 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nwths"] Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.456470 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e8141fe-ffee-4e29-9c55-8991c8226b10-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8e8141fe-ffee-4e29-9c55-8991c8226b10\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.520105 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91f7141d-853e-4d6f-9b04-ad16b61d0dc7-utilities\") pod \"redhat-operators-nwths\" (UID: \"91f7141d-853e-4d6f-9b04-ad16b61d0dc7\") " pod="openshift-marketplace/redhat-operators-nwths" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.521361 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfhwx\" (UniqueName: \"kubernetes.io/projected/caa28600-ac81-41ed-8375-9f479b9292a1-kube-api-access-lfhwx\") pod \"route-controller-manager-d5db645bc-f6sdv\" (UID: \"caa28600-ac81-41ed-8375-9f479b9292a1\") " pod="openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.521425 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caa28600-ac81-41ed-8375-9f479b9292a1-config\") pod \"route-controller-manager-d5db645bc-f6sdv\" (UID: 
\"caa28600-ac81-41ed-8375-9f479b9292a1\") " pod="openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.521445 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/caa28600-ac81-41ed-8375-9f479b9292a1-client-ca\") pod \"route-controller-manager-d5db645bc-f6sdv\" (UID: \"caa28600-ac81-41ed-8375-9f479b9292a1\") " pod="openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.522211 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91f7141d-853e-4d6f-9b04-ad16b61d0dc7-catalog-content\") pod \"redhat-operators-nwths\" (UID: \"91f7141d-853e-4d6f-9b04-ad16b61d0dc7\") " pod="openshift-marketplace/redhat-operators-nwths" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.522240 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b5fg\" (UniqueName: \"kubernetes.io/projected/91f7141d-853e-4d6f-9b04-ad16b61d0dc7-kube-api-access-8b5fg\") pod \"redhat-operators-nwths\" (UID: \"91f7141d-853e-4d6f-9b04-ad16b61d0dc7\") " pod="openshift-marketplace/redhat-operators-nwths" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.522274 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/caa28600-ac81-41ed-8375-9f479b9292a1-serving-cert\") pod \"route-controller-manager-d5db645bc-f6sdv\" (UID: \"caa28600-ac81-41ed-8375-9f479b9292a1\") " pod="openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.522738 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/caa28600-ac81-41ed-8375-9f479b9292a1-client-ca\") pod \"route-controller-manager-d5db645bc-f6sdv\" (UID: \"caa28600-ac81-41ed-8375-9f479b9292a1\") " pod="openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.523413 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caa28600-ac81-41ed-8375-9f479b9292a1-config\") pod \"route-controller-manager-d5db645bc-f6sdv\" (UID: \"caa28600-ac81-41ed-8375-9f479b9292a1\") " pod="openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.526313 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/caa28600-ac81-41ed-8375-9f479b9292a1-serving-cert\") pod \"route-controller-manager-d5db645bc-f6sdv\" (UID: \"caa28600-ac81-41ed-8375-9f479b9292a1\") " pod="openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.538321 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfhwx\" (UniqueName: \"kubernetes.io/projected/caa28600-ac81-41ed-8375-9f479b9292a1-kube-api-access-lfhwx\") pod \"route-controller-manager-d5db645bc-f6sdv\" (UID: \"caa28600-ac81-41ed-8375-9f479b9292a1\") " pod="openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.557099 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.623380 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91f7141d-853e-4d6f-9b04-ad16b61d0dc7-utilities\") pod \"redhat-operators-nwths\" (UID: \"91f7141d-853e-4d6f-9b04-ad16b61d0dc7\") " pod="openshift-marketplace/redhat-operators-nwths" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.623525 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91f7141d-853e-4d6f-9b04-ad16b61d0dc7-catalog-content\") pod \"redhat-operators-nwths\" (UID: \"91f7141d-853e-4d6f-9b04-ad16b61d0dc7\") " pod="openshift-marketplace/redhat-operators-nwths" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.623549 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b5fg\" (UniqueName: \"kubernetes.io/projected/91f7141d-853e-4d6f-9b04-ad16b61d0dc7-kube-api-access-8b5fg\") pod \"redhat-operators-nwths\" (UID: \"91f7141d-853e-4d6f-9b04-ad16b61d0dc7\") " pod="openshift-marketplace/redhat-operators-nwths" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.623877 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91f7141d-853e-4d6f-9b04-ad16b61d0dc7-utilities\") pod \"redhat-operators-nwths\" (UID: \"91f7141d-853e-4d6f-9b04-ad16b61d0dc7\") " pod="openshift-marketplace/redhat-operators-nwths" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.624120 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91f7141d-853e-4d6f-9b04-ad16b61d0dc7-catalog-content\") pod \"redhat-operators-nwths\" (UID: \"91f7141d-853e-4d6f-9b04-ad16b61d0dc7\") " 
pod="openshift-marketplace/redhat-operators-nwths" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.642588 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b5fg\" (UniqueName: \"kubernetes.io/projected/91f7141d-853e-4d6f-9b04-ad16b61d0dc7-kube-api-access-8b5fg\") pod \"redhat-operators-nwths\" (UID: \"91f7141d-853e-4d6f-9b04-ad16b61d0dc7\") " pod="openshift-marketplace/redhat-operators-nwths" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.663343 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.741360 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nwths" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.763706 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-kj9s2" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.764091 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-kj9s2" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.766617 4910 patch_prober.go:28] interesting pod/console-f9d7485db-kj9s2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.766673 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-kj9s2" podUID="acb5ada5-3567-4f1c-9130-1e78f3e88975" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.818563 4910 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rsxdq"] Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.819721 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rsxdq" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.828844 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rsxdq"] Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.931676 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8-catalog-content\") pod \"redhat-operators-rsxdq\" (UID: \"ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8\") " pod="openshift-marketplace/redhat-operators-rsxdq" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.931800 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8-utilities\") pod \"redhat-operators-rsxdq\" (UID: \"ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8\") " pod="openshift-marketplace/redhat-operators-rsxdq" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.931821 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scs7d\" (UniqueName: \"kubernetes.io/projected/ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8-kube-api-access-scs7d\") pod \"redhat-operators-rsxdq\" (UID: \"ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8\") " pod="openshift-marketplace/redhat-operators-rsxdq" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.974757 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" Feb 26 21:59:30 crc kubenswrapper[4910]: I0226 21:59:30.991071 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2jnr5" Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.034802 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8-catalog-content\") pod \"redhat-operators-rsxdq\" (UID: \"ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8\") " pod="openshift-marketplace/redhat-operators-rsxdq" Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.035323 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8-utilities\") pod \"redhat-operators-rsxdq\" (UID: \"ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8\") " pod="openshift-marketplace/redhat-operators-rsxdq" Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.035515 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8-catalog-content\") pod \"redhat-operators-rsxdq\" (UID: \"ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8\") " pod="openshift-marketplace/redhat-operators-rsxdq" Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.035634 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8-utilities\") pod \"redhat-operators-rsxdq\" (UID: \"ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8\") " pod="openshift-marketplace/redhat-operators-rsxdq" Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.035689 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scs7d\" (UniqueName: \"kubernetes.io/projected/ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8-kube-api-access-scs7d\") pod \"redhat-operators-rsxdq\" (UID: \"ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8\") " 
pod="openshift-marketplace/redhat-operators-rsxdq" Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.059218 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scs7d\" (UniqueName: \"kubernetes.io/projected/ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8-kube-api-access-scs7d\") pod \"redhat-operators-rsxdq\" (UID: \"ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8\") " pod="openshift-marketplace/redhat-operators-rsxdq" Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.073258 4910 generic.go:334] "Generic (PLEG): container finished" podID="b8a3a08a-bc54-40b0-a3c1-3da45c9003d3" containerID="572f007c7788df7de4866d537eb25326180c2fc166c249f328bec52de67f7d8c" exitCode=0 Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.073663 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5k6t" event={"ID":"b8a3a08a-bc54-40b0-a3c1-3da45c9003d3","Type":"ContainerDied","Data":"572f007c7788df7de4866d537eb25326180c2fc166c249f328bec52de67f7d8c"} Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.073694 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5k6t" event={"ID":"b8a3a08a-bc54-40b0-a3c1-3da45c9003d3","Type":"ContainerStarted","Data":"78bc151369ea55981e5d8ee09a78cf15c72ee364e98c6b92d8b9e908a052a642"} Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.092436 4910 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hscq container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.092453 4910 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hscq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" 
start-of-body= Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.092483 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2hscq" podUID="7843f81a-d6bd-463f-b5b7-454e3f943ed8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.092499 4910 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hscq" podUID="7843f81a-d6bd-463f-b5b7-454e3f943ed8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.114957 4910 ???:1] "http: TLS handshake error from 192.168.126.11:50746: no serving certificate available for the kubelet" Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.151934 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rsxdq" Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.277667 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-w8g2c" Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.280145 4910 patch_prober.go:28] interesting pod/router-default-5444994796-w8g2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 21:59:31 crc kubenswrapper[4910]: [-]has-synced failed: reason withheld Feb 26 21:59:31 crc kubenswrapper[4910]: [+]process-running ok Feb 26 21:59:31 crc kubenswrapper[4910]: healthz check failed Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.280247 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w8g2c" podUID="ee0c3a2c-59c9-4f63-93c9-94c498a8d065" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.353866 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.354720 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.355932 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.356728 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.361011 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.448370 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73eb2ee7-3e19-4ab4-9479-484b4c85802a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"73eb2ee7-3e19-4ab4-9479-484b4c85802a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.448469 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73eb2ee7-3e19-4ab4-9479-484b4c85802a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"73eb2ee7-3e19-4ab4-9479-484b4c85802a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.550327 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73eb2ee7-3e19-4ab4-9479-484b4c85802a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"73eb2ee7-3e19-4ab4-9479-484b4c85802a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.550389 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/73eb2ee7-3e19-4ab4-9479-484b4c85802a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"73eb2ee7-3e19-4ab4-9479-484b4c85802a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.550479 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73eb2ee7-3e19-4ab4-9479-484b4c85802a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"73eb2ee7-3e19-4ab4-9479-484b4c85802a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.580795 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73eb2ee7-3e19-4ab4-9479-484b4c85802a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"73eb2ee7-3e19-4ab4-9479-484b4c85802a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.727770 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 21:59:31 crc kubenswrapper[4910]: I0226 21:59:31.880961 4910 ???:1] "http: TLS handshake error from 192.168.126.11:50748: no serving certificate available for the kubelet" Feb 26 21:59:32 crc kubenswrapper[4910]: I0226 21:59:32.280937 4910 patch_prober.go:28] interesting pod/router-default-5444994796-w8g2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 21:59:32 crc kubenswrapper[4910]: [-]has-synced failed: reason withheld Feb 26 21:59:32 crc kubenswrapper[4910]: [+]process-running ok Feb 26 21:59:32 crc kubenswrapper[4910]: healthz check failed Feb 26 21:59:32 crc kubenswrapper[4910]: I0226 21:59:32.281004 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w8g2c" podUID="ee0c3a2c-59c9-4f63-93c9-94c498a8d065" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 21:59:33 crc kubenswrapper[4910]: I0226 21:59:33.238371 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-wzqgc" Feb 26 21:59:33 crc kubenswrapper[4910]: I0226 21:59:33.281665 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-w8g2c" Feb 26 21:59:33 crc kubenswrapper[4910]: I0226 21:59:33.288964 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-w8g2c" Feb 26 21:59:36 crc kubenswrapper[4910]: I0226 21:59:36.254462 4910 ???:1] "http: TLS handshake error from 192.168.126.11:50756: no serving certificate available for the kubelet" Feb 26 21:59:40 crc kubenswrapper[4910]: I0226 21:59:40.770139 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-console/console-f9d7485db-kj9s2" Feb 26 21:59:40 crc kubenswrapper[4910]: I0226 21:59:40.775721 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-kj9s2" Feb 26 21:59:41 crc kubenswrapper[4910]: I0226 21:59:41.099641 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-2hscq" Feb 26 21:59:46 crc kubenswrapper[4910]: I0226 21:59:46.344980 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-fc6d686bb-gzxn6"] Feb 26 21:59:46 crc kubenswrapper[4910]: I0226 21:59:46.347024 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-fc6d686bb-gzxn6" podUID="0f19e4dc-70a6-4ba2-bf17-6092288974a3" containerName="controller-manager" containerID="cri-o://2997f26acd313d3eef99f7dfda2517c77416102cf0d6999768ef1fcbbb5181e9" gracePeriod=30 Feb 26 21:59:46 crc kubenswrapper[4910]: I0226 21:59:46.362219 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv"] Feb 26 21:59:48 crc kubenswrapper[4910]: I0226 21:59:48.402456 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 21:59:48 crc kubenswrapper[4910]: I0226 21:59:48.673141 4910 patch_prober.go:28] interesting pod/controller-manager-fc6d686bb-gzxn6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.49:8443/healthz\": dial tcp 10.217.0.49:8443: connect: connection refused" start-of-body= Feb 26 21:59:48 crc kubenswrapper[4910]: I0226 21:59:48.673245 4910 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-fc6d686bb-gzxn6" podUID="0f19e4dc-70a6-4ba2-bf17-6092288974a3" 
containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.49:8443/healthz\": dial tcp 10.217.0.49:8443: connect: connection refused" Feb 26 21:59:49 crc kubenswrapper[4910]: I0226 21:59:49.188759 4910 generic.go:334] "Generic (PLEG): container finished" podID="0f19e4dc-70a6-4ba2-bf17-6092288974a3" containerID="2997f26acd313d3eef99f7dfda2517c77416102cf0d6999768ef1fcbbb5181e9" exitCode=0 Feb 26 21:59:49 crc kubenswrapper[4910]: I0226 21:59:49.188824 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fc6d686bb-gzxn6" event={"ID":"0f19e4dc-70a6-4ba2-bf17-6092288974a3","Type":"ContainerDied","Data":"2997f26acd313d3eef99f7dfda2517c77416102cf0d6999768ef1fcbbb5181e9"} Feb 26 21:59:52 crc kubenswrapper[4910]: E0226 21:59:52.155975 4910 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 26 21:59:52 crc kubenswrapper[4910]: E0226 21:59:52.156135 4910 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 21:59:52 crc kubenswrapper[4910]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 26 21:59:52 crc kubenswrapper[4910]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ft88g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29535718-4rxms_openshift-infra(e8fe4d9f-ec8c-4d29-a7e6-1534270d5d05): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Feb 26 21:59:52 crc kubenswrapper[4910]: > logger="UnhandledError" Feb 26 21:59:52 crc kubenswrapper[4910]: E0226 21:59:52.157299 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29535718-4rxms" podUID="e8fe4d9f-ec8c-4d29-a7e6-1534270d5d05" Feb 26 21:59:52 crc kubenswrapper[4910]: E0226 21:59:52.206190 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29535718-4rxms" podUID="e8fe4d9f-ec8c-4d29-a7e6-1534270d5d05" Feb 26 21:59:56 crc kubenswrapper[4910]: I0226 21:59:55.727864 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 21:59:56 crc kubenswrapper[4910]: I0226 21:59:55.728685 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 21:59:56 crc kubenswrapper[4910]: I0226 21:59:56.759335 4910 ???:1] "http: TLS handshake error from 192.168.126.11:58122: no serving certificate available for the kubelet" Feb 26 21:59:56 crc kubenswrapper[4910]: I0226 21:59:56.828040 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-fc6d686bb-gzxn6" Feb 26 21:59:56 crc kubenswrapper[4910]: I0226 21:59:56.866266 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-9d6f98f68-xlh42"] Feb 26 21:59:56 crc kubenswrapper[4910]: E0226 21:59:56.866824 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f19e4dc-70a6-4ba2-bf17-6092288974a3" containerName="controller-manager" Feb 26 21:59:56 crc kubenswrapper[4910]: I0226 21:59:56.866850 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f19e4dc-70a6-4ba2-bf17-6092288974a3" containerName="controller-manager" Feb 26 21:59:56 crc kubenswrapper[4910]: I0226 21:59:56.866963 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f19e4dc-70a6-4ba2-bf17-6092288974a3" containerName="controller-manager" Feb 26 21:59:56 crc kubenswrapper[4910]: I0226 21:59:56.868858 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-9d6f98f68-xlh42" Feb 26 21:59:56 crc kubenswrapper[4910]: I0226 21:59:56.872964 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9d6f98f68-xlh42"] Feb 26 21:59:56 crc kubenswrapper[4910]: I0226 21:59:56.986441 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f19e4dc-70a6-4ba2-bf17-6092288974a3-serving-cert\") pod \"0f19e4dc-70a6-4ba2-bf17-6092288974a3\" (UID: \"0f19e4dc-70a6-4ba2-bf17-6092288974a3\") " Feb 26 21:59:56 crc kubenswrapper[4910]: I0226 21:59:56.986566 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f19e4dc-70a6-4ba2-bf17-6092288974a3-client-ca\") pod \"0f19e4dc-70a6-4ba2-bf17-6092288974a3\" (UID: \"0f19e4dc-70a6-4ba2-bf17-6092288974a3\") " Feb 26 21:59:56 crc kubenswrapper[4910]: I0226 21:59:56.986600 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0f19e4dc-70a6-4ba2-bf17-6092288974a3-proxy-ca-bundles\") pod \"0f19e4dc-70a6-4ba2-bf17-6092288974a3\" (UID: \"0f19e4dc-70a6-4ba2-bf17-6092288974a3\") " Feb 26 21:59:56 crc kubenswrapper[4910]: I0226 21:59:56.986676 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f19e4dc-70a6-4ba2-bf17-6092288974a3-config\") pod \"0f19e4dc-70a6-4ba2-bf17-6092288974a3\" (UID: \"0f19e4dc-70a6-4ba2-bf17-6092288974a3\") " Feb 26 21:59:56 crc kubenswrapper[4910]: I0226 21:59:56.986700 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9wqd\" (UniqueName: \"kubernetes.io/projected/0f19e4dc-70a6-4ba2-bf17-6092288974a3-kube-api-access-t9wqd\") pod \"0f19e4dc-70a6-4ba2-bf17-6092288974a3\" (UID: 
\"0f19e4dc-70a6-4ba2-bf17-6092288974a3\") " Feb 26 21:59:56 crc kubenswrapper[4910]: I0226 21:59:56.987354 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-client-ca\") pod \"controller-manager-9d6f98f68-xlh42\" (UID: \"143b25fb-f84c-4cee-9e14-fcaa1a235d0f\") " pod="openshift-controller-manager/controller-manager-9d6f98f68-xlh42" Feb 26 21:59:56 crc kubenswrapper[4910]: I0226 21:59:56.987432 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjmxg\" (UniqueName: \"kubernetes.io/projected/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-kube-api-access-wjmxg\") pod \"controller-manager-9d6f98f68-xlh42\" (UID: \"143b25fb-f84c-4cee-9e14-fcaa1a235d0f\") " pod="openshift-controller-manager/controller-manager-9d6f98f68-xlh42" Feb 26 21:59:56 crc kubenswrapper[4910]: I0226 21:59:56.987479 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-proxy-ca-bundles\") pod \"controller-manager-9d6f98f68-xlh42\" (UID: \"143b25fb-f84c-4cee-9e14-fcaa1a235d0f\") " pod="openshift-controller-manager/controller-manager-9d6f98f68-xlh42" Feb 26 21:59:56 crc kubenswrapper[4910]: I0226 21:59:56.987567 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-serving-cert\") pod \"controller-manager-9d6f98f68-xlh42\" (UID: \"143b25fb-f84c-4cee-9e14-fcaa1a235d0f\") " pod="openshift-controller-manager/controller-manager-9d6f98f68-xlh42" Feb 26 21:59:56 crc kubenswrapper[4910]: I0226 21:59:56.987641 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/0f19e4dc-70a6-4ba2-bf17-6092288974a3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0f19e4dc-70a6-4ba2-bf17-6092288974a3" (UID: "0f19e4dc-70a6-4ba2-bf17-6092288974a3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:59:56 crc kubenswrapper[4910]: I0226 21:59:56.987713 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-config\") pod \"controller-manager-9d6f98f68-xlh42\" (UID: \"143b25fb-f84c-4cee-9e14-fcaa1a235d0f\") " pod="openshift-controller-manager/controller-manager-9d6f98f68-xlh42" Feb 26 21:59:56 crc kubenswrapper[4910]: I0226 21:59:56.987752 4910 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0f19e4dc-70a6-4ba2-bf17-6092288974a3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 21:59:56 crc kubenswrapper[4910]: I0226 21:59:56.987972 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f19e4dc-70a6-4ba2-bf17-6092288974a3-client-ca" (OuterVolumeSpecName: "client-ca") pod "0f19e4dc-70a6-4ba2-bf17-6092288974a3" (UID: "0f19e4dc-70a6-4ba2-bf17-6092288974a3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:59:56 crc kubenswrapper[4910]: I0226 21:59:56.988043 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f19e4dc-70a6-4ba2-bf17-6092288974a3-config" (OuterVolumeSpecName: "config") pod "0f19e4dc-70a6-4ba2-bf17-6092288974a3" (UID: "0f19e4dc-70a6-4ba2-bf17-6092288974a3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 21:59:56 crc kubenswrapper[4910]: I0226 21:59:56.992433 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f19e4dc-70a6-4ba2-bf17-6092288974a3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0f19e4dc-70a6-4ba2-bf17-6092288974a3" (UID: "0f19e4dc-70a6-4ba2-bf17-6092288974a3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 21:59:56 crc kubenswrapper[4910]: I0226 21:59:56.992471 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f19e4dc-70a6-4ba2-bf17-6092288974a3-kube-api-access-t9wqd" (OuterVolumeSpecName: "kube-api-access-t9wqd") pod "0f19e4dc-70a6-4ba2-bf17-6092288974a3" (UID: "0f19e4dc-70a6-4ba2-bf17-6092288974a3"). InnerVolumeSpecName "kube-api-access-t9wqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 21:59:57 crc kubenswrapper[4910]: I0226 21:59:57.089314 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjmxg\" (UniqueName: \"kubernetes.io/projected/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-kube-api-access-wjmxg\") pod \"controller-manager-9d6f98f68-xlh42\" (UID: \"143b25fb-f84c-4cee-9e14-fcaa1a235d0f\") " pod="openshift-controller-manager/controller-manager-9d6f98f68-xlh42" Feb 26 21:59:57 crc kubenswrapper[4910]: I0226 21:59:57.089384 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-proxy-ca-bundles\") pod \"controller-manager-9d6f98f68-xlh42\" (UID: \"143b25fb-f84c-4cee-9e14-fcaa1a235d0f\") " pod="openshift-controller-manager/controller-manager-9d6f98f68-xlh42" Feb 26 21:59:57 crc kubenswrapper[4910]: I0226 21:59:57.089460 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-serving-cert\") pod \"controller-manager-9d6f98f68-xlh42\" (UID: \"143b25fb-f84c-4cee-9e14-fcaa1a235d0f\") " pod="openshift-controller-manager/controller-manager-9d6f98f68-xlh42" Feb 26 21:59:57 crc kubenswrapper[4910]: I0226 21:59:57.089535 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-config\") pod \"controller-manager-9d6f98f68-xlh42\" (UID: \"143b25fb-f84c-4cee-9e14-fcaa1a235d0f\") " pod="openshift-controller-manager/controller-manager-9d6f98f68-xlh42" Feb 26 21:59:57 crc kubenswrapper[4910]: I0226 21:59:57.089597 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-client-ca\") pod \"controller-manager-9d6f98f68-xlh42\" (UID: \"143b25fb-f84c-4cee-9e14-fcaa1a235d0f\") " pod="openshift-controller-manager/controller-manager-9d6f98f68-xlh42" Feb 26 21:59:57 crc kubenswrapper[4910]: I0226 21:59:57.089661 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9wqd\" (UniqueName: \"kubernetes.io/projected/0f19e4dc-70a6-4ba2-bf17-6092288974a3-kube-api-access-t9wqd\") on node \"crc\" DevicePath \"\"" Feb 26 21:59:57 crc kubenswrapper[4910]: I0226 21:59:57.089673 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f19e4dc-70a6-4ba2-bf17-6092288974a3-config\") on node \"crc\" DevicePath \"\"" Feb 26 21:59:57 crc kubenswrapper[4910]: I0226 21:59:57.089683 4910 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f19e4dc-70a6-4ba2-bf17-6092288974a3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 21:59:57 crc kubenswrapper[4910]: I0226 21:59:57.089692 4910 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/0f19e4dc-70a6-4ba2-bf17-6092288974a3-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 21:59:57 crc kubenswrapper[4910]: I0226 21:59:57.090751 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-proxy-ca-bundles\") pod \"controller-manager-9d6f98f68-xlh42\" (UID: \"143b25fb-f84c-4cee-9e14-fcaa1a235d0f\") " pod="openshift-controller-manager/controller-manager-9d6f98f68-xlh42" Feb 26 21:59:57 crc kubenswrapper[4910]: I0226 21:59:57.090807 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-client-ca\") pod \"controller-manager-9d6f98f68-xlh42\" (UID: \"143b25fb-f84c-4cee-9e14-fcaa1a235d0f\") " pod="openshift-controller-manager/controller-manager-9d6f98f68-xlh42" Feb 26 21:59:57 crc kubenswrapper[4910]: I0226 21:59:57.106423 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-serving-cert\") pod \"controller-manager-9d6f98f68-xlh42\" (UID: \"143b25fb-f84c-4cee-9e14-fcaa1a235d0f\") " pod="openshift-controller-manager/controller-manager-9d6f98f68-xlh42" Feb 26 21:59:57 crc kubenswrapper[4910]: I0226 21:59:57.114813 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjmxg\" (UniqueName: \"kubernetes.io/projected/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-kube-api-access-wjmxg\") pod \"controller-manager-9d6f98f68-xlh42\" (UID: \"143b25fb-f84c-4cee-9e14-fcaa1a235d0f\") " pod="openshift-controller-manager/controller-manager-9d6f98f68-xlh42" Feb 26 21:59:57 crc kubenswrapper[4910]: I0226 21:59:57.166458 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-config\") pod 
\"controller-manager-9d6f98f68-xlh42\" (UID: \"143b25fb-f84c-4cee-9e14-fcaa1a235d0f\") " pod="openshift-controller-manager/controller-manager-9d6f98f68-xlh42" Feb 26 21:59:57 crc kubenswrapper[4910]: I0226 21:59:57.188723 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9d6f98f68-xlh42" Feb 26 21:59:57 crc kubenswrapper[4910]: I0226 21:59:57.243106 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fc6d686bb-gzxn6" event={"ID":"0f19e4dc-70a6-4ba2-bf17-6092288974a3","Type":"ContainerDied","Data":"35a3fcf35cf574c43852a3926e3aa9b5c8786ef17859e34d90d35ca6af3f10f0"} Feb 26 21:59:57 crc kubenswrapper[4910]: I0226 21:59:57.243211 4910 scope.go:117] "RemoveContainer" containerID="2997f26acd313d3eef99f7dfda2517c77416102cf0d6999768ef1fcbbb5181e9" Feb 26 21:59:57 crc kubenswrapper[4910]: I0226 21:59:57.243352 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-fc6d686bb-gzxn6" Feb 26 21:59:57 crc kubenswrapper[4910]: I0226 21:59:57.282862 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-fc6d686bb-gzxn6"] Feb 26 21:59:57 crc kubenswrapper[4910]: I0226 21:59:57.288981 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-fc6d686bb-gzxn6"] Feb 26 21:59:57 crc kubenswrapper[4910]: I0226 21:59:57.908779 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f19e4dc-70a6-4ba2-bf17-6092288974a3" path="/var/lib/kubelet/pods/0f19e4dc-70a6-4ba2-bf17-6092288974a3/volumes" Feb 26 21:59:57 crc kubenswrapper[4910]: I0226 21:59:57.941794 4910 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","poddbbce4f0-e239-41ed-98b6-b5b84a303b34"] err="unable to destroy cgroup paths for cgroup [kubepods burstable poddbbce4f0-e239-41ed-98b6-b5b84a303b34] : Timed out while waiting for systemd to remove kubepods-burstable-poddbbce4f0_e239_41ed_98b6_b5b84a303b34.slice" Feb 26 21:59:58 crc kubenswrapper[4910]: E0226 21:59:58.715584 4910 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 26 21:59:58 crc kubenswrapper[4910]: E0226 21:59:58.715733 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvqdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-ggtxj_openshift-marketplace(a8d202d1-b4f6-4bc1-b633-56ba90788979): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 21:59:58 crc kubenswrapper[4910]: E0226 21:59:58.716893 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-ggtxj" podUID="a8d202d1-b4f6-4bc1-b633-56ba90788979" Feb 26 22:00:00 crc 
kubenswrapper[4910]: I0226 22:00:00.139333 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535720-ghhr4"] Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.140761 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535720-ghhr4" Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.144132 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535720-2nnrc"] Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.144876 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.145014 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535720-2nnrc" Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.145243 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.146285 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.148957 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535720-ghhr4"] Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.151653 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535720-2nnrc"] Feb 26 22:00:00 crc kubenswrapper[4910]: E0226 22:00:00.155298 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-ggtxj" podUID="a8d202d1-b4f6-4bc1-b633-56ba90788979" Feb 26 22:00:00 crc kubenswrapper[4910]: E0226 22:00:00.231104 4910 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 26 22:00:00 crc kubenswrapper[4910]: E0226 22:00:00.232329 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rv27g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe
:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-hvjfk_openshift-marketplace(84c1eb30-f57d-4387-bc3f-deae490cdc42): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.233978 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f16dbf5-b263-4e40-b38b-d615de6d7b2c-config-volume\") pod \"collect-profiles-29535720-ghhr4\" (UID: \"8f16dbf5-b263-4e40-b38b-d615de6d7b2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535720-ghhr4" Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.234033 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkfg2\" (UniqueName: \"kubernetes.io/projected/8f16dbf5-b263-4e40-b38b-d615de6d7b2c-kube-api-access-zkfg2\") pod \"collect-profiles-29535720-ghhr4\" (UID: \"8f16dbf5-b263-4e40-b38b-d615de6d7b2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535720-ghhr4" Feb 26 22:00:00 crc kubenswrapper[4910]: E0226 22:00:00.234096 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-hvjfk" podUID="84c1eb30-f57d-4387-bc3f-deae490cdc42" Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.234322 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f16dbf5-b263-4e40-b38b-d615de6d7b2c-secret-volume\") pod \"collect-profiles-29535720-ghhr4\" (UID: \"8f16dbf5-b263-4e40-b38b-d615de6d7b2c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29535720-ghhr4" Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.234422 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qld6\" (UniqueName: \"kubernetes.io/projected/02081600-34a4-4d71-ab04-6214092a36f1-kube-api-access-7qld6\") pod \"auto-csr-approver-29535720-2nnrc\" (UID: \"02081600-34a4-4d71-ab04-6214092a36f1\") " pod="openshift-infra/auto-csr-approver-29535720-2nnrc" Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.335536 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qld6\" (UniqueName: \"kubernetes.io/projected/02081600-34a4-4d71-ab04-6214092a36f1-kube-api-access-7qld6\") pod \"auto-csr-approver-29535720-2nnrc\" (UID: \"02081600-34a4-4d71-ab04-6214092a36f1\") " pod="openshift-infra/auto-csr-approver-29535720-2nnrc" Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.335646 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f16dbf5-b263-4e40-b38b-d615de6d7b2c-config-volume\") pod \"collect-profiles-29535720-ghhr4\" (UID: \"8f16dbf5-b263-4e40-b38b-d615de6d7b2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535720-ghhr4" Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.335664 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkfg2\" (UniqueName: \"kubernetes.io/projected/8f16dbf5-b263-4e40-b38b-d615de6d7b2c-kube-api-access-zkfg2\") pod \"collect-profiles-29535720-ghhr4\" (UID: \"8f16dbf5-b263-4e40-b38b-d615de6d7b2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535720-ghhr4" Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.335715 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/8f16dbf5-b263-4e40-b38b-d615de6d7b2c-secret-volume\") pod \"collect-profiles-29535720-ghhr4\" (UID: \"8f16dbf5-b263-4e40-b38b-d615de6d7b2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535720-ghhr4" Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.336995 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f16dbf5-b263-4e40-b38b-d615de6d7b2c-config-volume\") pod \"collect-profiles-29535720-ghhr4\" (UID: \"8f16dbf5-b263-4e40-b38b-d615de6d7b2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535720-ghhr4" Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.341665 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f16dbf5-b263-4e40-b38b-d615de6d7b2c-secret-volume\") pod \"collect-profiles-29535720-ghhr4\" (UID: \"8f16dbf5-b263-4e40-b38b-d615de6d7b2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535720-ghhr4" Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.350271 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qld6\" (UniqueName: \"kubernetes.io/projected/02081600-34a4-4d71-ab04-6214092a36f1-kube-api-access-7qld6\") pod \"auto-csr-approver-29535720-2nnrc\" (UID: \"02081600-34a4-4d71-ab04-6214092a36f1\") " pod="openshift-infra/auto-csr-approver-29535720-2nnrc" Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.353335 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkfg2\" (UniqueName: \"kubernetes.io/projected/8f16dbf5-b263-4e40-b38b-d615de6d7b2c-kube-api-access-zkfg2\") pod \"collect-profiles-29535720-ghhr4\" (UID: \"8f16dbf5-b263-4e40-b38b-d615de6d7b2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535720-ghhr4" Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.462489 4910 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535720-ghhr4" Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.470885 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535720-2nnrc" Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.747626 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.754555 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.754653 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.851415 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eac9ac29-7799-49f9-93e2-9b684ce0eb41-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"eac9ac29-7799-49f9-93e2-9b684ce0eb41\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.851494 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eac9ac29-7799-49f9-93e2-9b684ce0eb41-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"eac9ac29-7799-49f9-93e2-9b684ce0eb41\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.952834 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eac9ac29-7799-49f9-93e2-9b684ce0eb41-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"eac9ac29-7799-49f9-93e2-9b684ce0eb41\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.953073 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eac9ac29-7799-49f9-93e2-9b684ce0eb41-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"eac9ac29-7799-49f9-93e2-9b684ce0eb41\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.953182 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eac9ac29-7799-49f9-93e2-9b684ce0eb41-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"eac9ac29-7799-49f9-93e2-9b684ce0eb41\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 22:00:00 crc kubenswrapper[4910]: I0226 22:00:00.969653 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eac9ac29-7799-49f9-93e2-9b684ce0eb41-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"eac9ac29-7799-49f9-93e2-9b684ce0eb41\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 22:00:01 crc kubenswrapper[4910]: I0226 22:00:01.082181 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 22:00:01 crc kubenswrapper[4910]: I0226 22:00:01.654249 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r4pg9" Feb 26 22:00:01 crc kubenswrapper[4910]: E0226 22:00:01.850513 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-hvjfk" podUID="84c1eb30-f57d-4387-bc3f-deae490cdc42" Feb 26 22:00:01 crc kubenswrapper[4910]: E0226 22:00:01.920302 4910 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 26 22:00:01 crc kubenswrapper[4910]: E0226 22:00:01.920604 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5xs9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-2575w_openshift-marketplace(f51dcf73-fbc7-4a90-849c-448ed9e540f9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 22:00:01 crc kubenswrapper[4910]: E0226 22:00:01.921783 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-2575w" podUID="f51dcf73-fbc7-4a90-849c-448ed9e540f9" Feb 26 22:00:01 crc 
kubenswrapper[4910]: E0226 22:00:01.949707 4910 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 26 22:00:01 crc kubenswrapper[4910]: E0226 22:00:01.949793 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tlvvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-6rxsg_openshift-marketplace(4333e88f-8502-46f4-9639-7af62ff1e63c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 22:00:01 crc kubenswrapper[4910]: E0226 22:00:01.950943 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-6rxsg" podUID="4333e88f-8502-46f4-9639-7af62ff1e63c" Feb 26 22:00:02 crc kubenswrapper[4910]: I0226 22:00:02.269802 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5k6t" event={"ID":"b8a3a08a-bc54-40b0-a3c1-3da45c9003d3","Type":"ContainerStarted","Data":"28443eb4ab9629ea95b8c1566d49d46a7d9559614324f36fb97d960099177f36"} Feb 26 22:00:02 crc kubenswrapper[4910]: I0226 22:00:02.272044 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dp8sv" event={"ID":"d0c0a0be-62f6-4642-aeab-2f08a5cffedb","Type":"ContainerStarted","Data":"9f132c529649b765a8d395a4cbdc787898d64afec46b1396115007d792405943"} Feb 26 22:00:02 crc kubenswrapper[4910]: E0226 22:00:02.275503 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2575w" podUID="f51dcf73-fbc7-4a90-849c-448ed9e540f9" Feb 26 22:00:02 crc kubenswrapper[4910]: E0226 22:00:02.288805 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/certified-operators-6rxsg" podUID="4333e88f-8502-46f4-9639-7af62ff1e63c" Feb 26 22:00:02 crc kubenswrapper[4910]: I0226 22:00:02.365958 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nwths"] Feb 26 22:00:02 crc kubenswrapper[4910]: I0226 22:00:02.388977 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 26 22:00:02 crc kubenswrapper[4910]: I0226 22:00:02.596180 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 26 22:00:02 crc kubenswrapper[4910]: I0226 22:00:02.599707 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rsxdq"] Feb 26 22:00:02 crc kubenswrapper[4910]: W0226 22:00:02.623297 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec415ba3_1e2f_4ca2_8137_0472c5ca1ea8.slice/crio-79cef861a01a640e6da5e1d936c6e8f72778b88d802efe736f5aa45460befbc1 WatchSource:0}: Error finding container 79cef861a01a640e6da5e1d936c6e8f72778b88d802efe736f5aa45460befbc1: Status 404 returned error can't find the container with id 79cef861a01a640e6da5e1d936c6e8f72778b88d802efe736f5aa45460befbc1 Feb 26 22:00:02 crc kubenswrapper[4910]: W0226 22:00:02.626077 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8e8141fe_ffee_4e29_9c55_8991c8226b10.slice/crio-e66560d246131a5c6b4841f46a413783ff6d28a152b7d470497864f2f2fc76e6 WatchSource:0}: Error finding container e66560d246131a5c6b4841f46a413783ff6d28a152b7d470497864f2f2fc76e6: Status 404 returned error can't find the container with id e66560d246131a5c6b4841f46a413783ff6d28a152b7d470497864f2f2fc76e6 Feb 26 22:00:02 crc kubenswrapper[4910]: I0226 22:00:02.631489 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-infra/auto-csr-approver-29535720-2nnrc"] Feb 26 22:00:02 crc kubenswrapper[4910]: I0226 22:00:02.638284 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535720-ghhr4"] Feb 26 22:00:02 crc kubenswrapper[4910]: W0226 22:00:02.641863 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02081600_34a4_4d71_ab04_6214092a36f1.slice/crio-58a1174950d61aa203b31c55688292043d35d28a13bd37f0d9688c67ac79261c WatchSource:0}: Error finding container 58a1174950d61aa203b31c55688292043d35d28a13bd37f0d9688c67ac79261c: Status 404 returned error can't find the container with id 58a1174950d61aa203b31c55688292043d35d28a13bd37f0d9688c67ac79261c Feb 26 22:00:02 crc kubenswrapper[4910]: I0226 22:00:02.642199 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9d6f98f68-xlh42"] Feb 26 22:00:02 crc kubenswrapper[4910]: I0226 22:00:02.656655 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 26 22:00:02 crc kubenswrapper[4910]: I0226 22:00:02.662552 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv"] Feb 26 22:00:02 crc kubenswrapper[4910]: W0226 22:00:02.671519 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f16dbf5_b263_4e40_b38b_d615de6d7b2c.slice/crio-f7648512395c88b64445859263265d8d818d1e4050342ad604ff11e1497dd410 WatchSource:0}: Error finding container f7648512395c88b64445859263265d8d818d1e4050342ad604ff11e1497dd410: Status 404 returned error can't find the container with id f7648512395c88b64445859263265d8d818d1e4050342ad604ff11e1497dd410 Feb 26 22:00:02 crc kubenswrapper[4910]: W0226 22:00:02.675436 4910 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod143b25fb_f84c_4cee_9e14_fcaa1a235d0f.slice/crio-24a858f66583545fd628b78d6bf41a318d812dcea3a421d82b792c1a11a3563e WatchSource:0}: Error finding container 24a858f66583545fd628b78d6bf41a318d812dcea3a421d82b792c1a11a3563e: Status 404 returned error can't find the container with id 24a858f66583545fd628b78d6bf41a318d812dcea3a421d82b792c1a11a3563e Feb 26 22:00:02 crc kubenswrapper[4910]: W0226 22:00:02.676966 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podeac9ac29_7799_49f9_93e2_9b684ce0eb41.slice/crio-31baaa597db6b84c35ff8b742b8c56b7c0fa36e8d40a483cb3da47de3b67d23c WatchSource:0}: Error finding container 31baaa597db6b84c35ff8b742b8c56b7c0fa36e8d40a483cb3da47de3b67d23c: Status 404 returned error can't find the container with id 31baaa597db6b84c35ff8b742b8c56b7c0fa36e8d40a483cb3da47de3b67d23c Feb 26 22:00:02 crc kubenswrapper[4910]: W0226 22:00:02.697327 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcaa28600_ac81_41ed_8375_9f479b9292a1.slice/crio-336017c37a47e2a336e168b8246395f6da76bede3a88316aba15c1bdbeee655b WatchSource:0}: Error finding container 336017c37a47e2a336e168b8246395f6da76bede3a88316aba15c1bdbeee655b: Status 404 returned error can't find the container with id 336017c37a47e2a336e168b8246395f6da76bede3a88316aba15c1bdbeee655b Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.278708 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9d6f98f68-xlh42" event={"ID":"143b25fb-f84c-4cee-9e14-fcaa1a235d0f","Type":"ContainerStarted","Data":"75b5e2da0338db9343e02888db15df6f65aa688cad046c8573731114f21d960d"} Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.278955 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-9d6f98f68-xlh42" event={"ID":"143b25fb-f84c-4cee-9e14-fcaa1a235d0f","Type":"ContainerStarted","Data":"24a858f66583545fd628b78d6bf41a318d812dcea3a421d82b792c1a11a3563e"} Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.280813 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-9d6f98f68-xlh42" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.285502 4910 generic.go:334] "Generic (PLEG): container finished" podID="b8a3a08a-bc54-40b0-a3c1-3da45c9003d3" containerID="28443eb4ab9629ea95b8c1566d49d46a7d9559614324f36fb97d960099177f36" exitCode=0 Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.285555 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5k6t" event={"ID":"b8a3a08a-bc54-40b0-a3c1-3da45c9003d3","Type":"ContainerDied","Data":"28443eb4ab9629ea95b8c1566d49d46a7d9559614324f36fb97d960099177f36"} Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.285741 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-9d6f98f68-xlh42" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.288095 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"73eb2ee7-3e19-4ab4-9479-484b4c85802a","Type":"ContainerStarted","Data":"8876d43125609b872da8a7d02bfe06ac58d1b84f5140773f4cdebd31d6ab6fdf"} Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.288131 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"73eb2ee7-3e19-4ab4-9479-484b4c85802a","Type":"ContainerStarted","Data":"fe679d8fd4cf5564f91b5391be97432c814d086232e019961715202dd7d335d5"} Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.296189 4910 generic.go:334] "Generic (PLEG): container finished" 
podID="d0c0a0be-62f6-4642-aeab-2f08a5cffedb" containerID="9f132c529649b765a8d395a4cbdc787898d64afec46b1396115007d792405943" exitCode=0 Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.296264 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dp8sv" event={"ID":"d0c0a0be-62f6-4642-aeab-2f08a5cffedb","Type":"ContainerDied","Data":"9f132c529649b765a8d395a4cbdc787898d64afec46b1396115007d792405943"} Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.308740 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"eac9ac29-7799-49f9-93e2-9b684ce0eb41","Type":"ContainerStarted","Data":"cf3c4e5862f95e0522162322088512c5b470309db36aab6cfcbd048387fbb98c"} Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.308802 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"eac9ac29-7799-49f9-93e2-9b684ce0eb41","Type":"ContainerStarted","Data":"31baaa597db6b84c35ff8b742b8c56b7c0fa36e8d40a483cb3da47de3b67d23c"} Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.311063 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8e8141fe-ffee-4e29-9c55-8991c8226b10","Type":"ContainerStarted","Data":"d8dbedeece53769161a6c55ff790849ddbf0c48126f0b7b7caae0b00c6c39998"} Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.311108 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8e8141fe-ffee-4e29-9c55-8991c8226b10","Type":"ContainerStarted","Data":"e66560d246131a5c6b4841f46a413783ff6d28a152b7d470497864f2f2fc76e6"} Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.313200 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535720-ghhr4" 
event={"ID":"8f16dbf5-b263-4e40-b38b-d615de6d7b2c","Type":"ContainerStarted","Data":"d7541c5a27dfa8fced56b1bff62df57958de7cd11dd4ec435a8bbb91dee05ea6"} Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.313232 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535720-ghhr4" event={"ID":"8f16dbf5-b263-4e40-b38b-d615de6d7b2c","Type":"ContainerStarted","Data":"f7648512395c88b64445859263265d8d818d1e4050342ad604ff11e1497dd410"} Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.314821 4910 generic.go:334] "Generic (PLEG): container finished" podID="91f7141d-853e-4d6f-9b04-ad16b61d0dc7" containerID="5e88ae7cedf3cdb7a06a33d12a7c167d1929e9b7f321ef778d8741e1285510e3" exitCode=0 Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.314855 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nwths" event={"ID":"91f7141d-853e-4d6f-9b04-ad16b61d0dc7","Type":"ContainerDied","Data":"5e88ae7cedf3cdb7a06a33d12a7c167d1929e9b7f321ef778d8741e1285510e3"} Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.314893 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nwths" event={"ID":"91f7141d-853e-4d6f-9b04-ad16b61d0dc7","Type":"ContainerStarted","Data":"a9ee81343f63d57c616b9d819386d61ac5da9e436e751c59954f09f9358eaee7"} Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.317094 4910 generic.go:334] "Generic (PLEG): container finished" podID="ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8" containerID="0d6b3c98186616c921106981d012428ed8bdc098fa5f38d011d4b261a366c0ba" exitCode=0 Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.317632 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsxdq" event={"ID":"ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8","Type":"ContainerDied","Data":"0d6b3c98186616c921106981d012428ed8bdc098fa5f38d011d4b261a366c0ba"} Feb 26 22:00:03 crc 
kubenswrapper[4910]: I0226 22:00:03.317656 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsxdq" event={"ID":"ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8","Type":"ContainerStarted","Data":"79cef861a01a640e6da5e1d936c6e8f72778b88d802efe736f5aa45460befbc1"} Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.319687 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535720-2nnrc" event={"ID":"02081600-34a4-4d71-ab04-6214092a36f1","Type":"ContainerStarted","Data":"58a1174950d61aa203b31c55688292043d35d28a13bd37f0d9688c67ac79261c"} Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.321926 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv" event={"ID":"caa28600-ac81-41ed-8375-9f479b9292a1","Type":"ContainerStarted","Data":"f3a8c04e76e7e9b1334ca05cd74ea785afbf40a48c8d07bdd06a00bc2b6ec41c"} Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.322046 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv" event={"ID":"caa28600-ac81-41ed-8375-9f479b9292a1","Type":"ContainerStarted","Data":"336017c37a47e2a336e168b8246395f6da76bede3a88316aba15c1bdbeee655b"} Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.322229 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv" podUID="caa28600-ac81-41ed-8375-9f479b9292a1" containerName="route-controller-manager" containerID="cri-o://f3a8c04e76e7e9b1334ca05cd74ea785afbf40a48c8d07bdd06a00bc2b6ec41c" gracePeriod=30 Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.322897 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 
22:00:03.328139 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-9d6f98f68-xlh42" podStartSLOduration=17.328123421 podStartE2EDuration="17.328123421s" podCreationTimestamp="2026-02-26 21:59:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:00:03.31441577 +0000 UTC m=+288.393906331" watchObservedRunningTime="2026-02-26 22:00:03.328123421 +0000 UTC m=+288.407613962" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.328258 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=33.328253335 podStartE2EDuration="33.328253335s" podCreationTimestamp="2026-02-26 21:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:00:03.327585047 +0000 UTC m=+288.407075598" watchObservedRunningTime="2026-02-26 22:00:03.328253335 +0000 UTC m=+288.407743876" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.345665 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29535720-ghhr4" podStartSLOduration=3.345649886 podStartE2EDuration="3.345649886s" podCreationTimestamp="2026-02-26 22:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:00:03.338011209 +0000 UTC m=+288.417501770" watchObservedRunningTime="2026-02-26 22:00:03.345649886 +0000 UTC m=+288.425140427" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.354623 4910 patch_prober.go:28] interesting pod/route-controller-manager-d5db645bc-f6sdv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure 
output="Get \"https://10.217.0.53:8443/healthz\": read tcp 10.217.0.2:51306->10.217.0.53:8443: read: connection reset by peer" start-of-body= Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.354673 4910 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv" podUID="caa28600-ac81-41ed-8375-9f479b9292a1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.53:8443/healthz\": read tcp 10.217.0.2:51306->10.217.0.53:8443: read: connection reset by peer" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.368105 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=3.368083033 podStartE2EDuration="3.368083033s" podCreationTimestamp="2026-02-26 22:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:00:03.367763763 +0000 UTC m=+288.447254314" watchObservedRunningTime="2026-02-26 22:00:03.368083033 +0000 UTC m=+288.447573574" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.383015 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=32.383000066 podStartE2EDuration="32.383000066s" podCreationTimestamp="2026-02-26 21:59:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:00:03.382285877 +0000 UTC m=+288.461776418" watchObservedRunningTime="2026-02-26 22:00:03.383000066 +0000 UTC m=+288.462490607" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.460974 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv" podStartSLOduration=36.460948405 
podStartE2EDuration="36.460948405s" podCreationTimestamp="2026-02-26 21:59:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:00:03.459917807 +0000 UTC m=+288.539408358" watchObservedRunningTime="2026-02-26 22:00:03.460948405 +0000 UTC m=+288.540438946" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.702037 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-d5db645bc-f6sdv_caa28600-ac81-41ed-8375-9f479b9292a1/route-controller-manager/0.log" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.702096 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.738419 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8"] Feb 26 22:00:03 crc kubenswrapper[4910]: E0226 22:00:03.738644 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caa28600-ac81-41ed-8375-9f479b9292a1" containerName="route-controller-manager" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.738656 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="caa28600-ac81-41ed-8375-9f479b9292a1" containerName="route-controller-manager" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.738757 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="caa28600-ac81-41ed-8375-9f479b9292a1" containerName="route-controller-manager" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.739133 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.761865 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8"] Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.809124 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfhwx\" (UniqueName: \"kubernetes.io/projected/caa28600-ac81-41ed-8375-9f479b9292a1-kube-api-access-lfhwx\") pod \"caa28600-ac81-41ed-8375-9f479b9292a1\" (UID: \"caa28600-ac81-41ed-8375-9f479b9292a1\") " Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.809475 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/caa28600-ac81-41ed-8375-9f479b9292a1-client-ca\") pod \"caa28600-ac81-41ed-8375-9f479b9292a1\" (UID: \"caa28600-ac81-41ed-8375-9f479b9292a1\") " Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.809506 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/caa28600-ac81-41ed-8375-9f479b9292a1-serving-cert\") pod \"caa28600-ac81-41ed-8375-9f479b9292a1\" (UID: \"caa28600-ac81-41ed-8375-9f479b9292a1\") " Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.809587 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caa28600-ac81-41ed-8375-9f479b9292a1-config\") pod \"caa28600-ac81-41ed-8375-9f479b9292a1\" (UID: \"caa28600-ac81-41ed-8375-9f479b9292a1\") " Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.809748 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fv7b\" (UniqueName: \"kubernetes.io/projected/33ffe8a0-473c-45e5-94c1-9ff468579817-kube-api-access-7fv7b\") 
pod \"route-controller-manager-54bd779d8f-kdph8\" (UID: \"33ffe8a0-473c-45e5-94c1-9ff468579817\") " pod="openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.809801 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33ffe8a0-473c-45e5-94c1-9ff468579817-config\") pod \"route-controller-manager-54bd779d8f-kdph8\" (UID: \"33ffe8a0-473c-45e5-94c1-9ff468579817\") " pod="openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.809828 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33ffe8a0-473c-45e5-94c1-9ff468579817-client-ca\") pod \"route-controller-manager-54bd779d8f-kdph8\" (UID: \"33ffe8a0-473c-45e5-94c1-9ff468579817\") " pod="openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.809879 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33ffe8a0-473c-45e5-94c1-9ff468579817-serving-cert\") pod \"route-controller-manager-54bd779d8f-kdph8\" (UID: \"33ffe8a0-473c-45e5-94c1-9ff468579817\") " pod="openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.810322 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caa28600-ac81-41ed-8375-9f479b9292a1-client-ca" (OuterVolumeSpecName: "client-ca") pod "caa28600-ac81-41ed-8375-9f479b9292a1" (UID: "caa28600-ac81-41ed-8375-9f479b9292a1"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.810563 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caa28600-ac81-41ed-8375-9f479b9292a1-config" (OuterVolumeSpecName: "config") pod "caa28600-ac81-41ed-8375-9f479b9292a1" (UID: "caa28600-ac81-41ed-8375-9f479b9292a1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.815381 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caa28600-ac81-41ed-8375-9f479b9292a1-kube-api-access-lfhwx" (OuterVolumeSpecName: "kube-api-access-lfhwx") pod "caa28600-ac81-41ed-8375-9f479b9292a1" (UID: "caa28600-ac81-41ed-8375-9f479b9292a1"). InnerVolumeSpecName "kube-api-access-lfhwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.815973 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caa28600-ac81-41ed-8375-9f479b9292a1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "caa28600-ac81-41ed-8375-9f479b9292a1" (UID: "caa28600-ac81-41ed-8375-9f479b9292a1"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.911461 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fv7b\" (UniqueName: \"kubernetes.io/projected/33ffe8a0-473c-45e5-94c1-9ff468579817-kube-api-access-7fv7b\") pod \"route-controller-manager-54bd779d8f-kdph8\" (UID: \"33ffe8a0-473c-45e5-94c1-9ff468579817\") " pod="openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.911594 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33ffe8a0-473c-45e5-94c1-9ff468579817-config\") pod \"route-controller-manager-54bd779d8f-kdph8\" (UID: \"33ffe8a0-473c-45e5-94c1-9ff468579817\") " pod="openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.911630 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33ffe8a0-473c-45e5-94c1-9ff468579817-client-ca\") pod \"route-controller-manager-54bd779d8f-kdph8\" (UID: \"33ffe8a0-473c-45e5-94c1-9ff468579817\") " pod="openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.911687 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33ffe8a0-473c-45e5-94c1-9ff468579817-serving-cert\") pod \"route-controller-manager-54bd779d8f-kdph8\" (UID: \"33ffe8a0-473c-45e5-94c1-9ff468579817\") " pod="openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.911729 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfhwx\" (UniqueName: 
\"kubernetes.io/projected/caa28600-ac81-41ed-8375-9f479b9292a1-kube-api-access-lfhwx\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.911744 4910 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/caa28600-ac81-41ed-8375-9f479b9292a1-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.911757 4910 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/caa28600-ac81-41ed-8375-9f479b9292a1-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.911770 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caa28600-ac81-41ed-8375-9f479b9292a1-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.914915 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33ffe8a0-473c-45e5-94c1-9ff468579817-serving-cert\") pod \"route-controller-manager-54bd779d8f-kdph8\" (UID: \"33ffe8a0-473c-45e5-94c1-9ff468579817\") " pod="openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.916036 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33ffe8a0-473c-45e5-94c1-9ff468579817-client-ca\") pod \"route-controller-manager-54bd779d8f-kdph8\" (UID: \"33ffe8a0-473c-45e5-94c1-9ff468579817\") " pod="openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.916214 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33ffe8a0-473c-45e5-94c1-9ff468579817-config\") pod 
\"route-controller-manager-54bd779d8f-kdph8\" (UID: \"33ffe8a0-473c-45e5-94c1-9ff468579817\") " pod="openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8" Feb 26 22:00:03 crc kubenswrapper[4910]: I0226 22:00:03.929409 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fv7b\" (UniqueName: \"kubernetes.io/projected/33ffe8a0-473c-45e5-94c1-9ff468579817-kube-api-access-7fv7b\") pod \"route-controller-manager-54bd779d8f-kdph8\" (UID: \"33ffe8a0-473c-45e5-94c1-9ff468579817\") " pod="openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8" Feb 26 22:00:04 crc kubenswrapper[4910]: I0226 22:00:04.072124 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8" Feb 26 22:00:04 crc kubenswrapper[4910]: I0226 22:00:04.302036 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8"] Feb 26 22:00:04 crc kubenswrapper[4910]: W0226 22:00:04.312534 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33ffe8a0_473c_45e5_94c1_9ff468579817.slice/crio-804d5f3153aaee74fe091ec7e06375eb72e3c4f4ec2e5417ad97d576873552ac WatchSource:0}: Error finding container 804d5f3153aaee74fe091ec7e06375eb72e3c4f4ec2e5417ad97d576873552ac: Status 404 returned error can't find the container with id 804d5f3153aaee74fe091ec7e06375eb72e3c4f4ec2e5417ad97d576873552ac Feb 26 22:00:04 crc kubenswrapper[4910]: I0226 22:00:04.329324 4910 generic.go:334] "Generic (PLEG): container finished" podID="73eb2ee7-3e19-4ab4-9479-484b4c85802a" containerID="8876d43125609b872da8a7d02bfe06ac58d1b84f5140773f4cdebd31d6ab6fdf" exitCode=0 Feb 26 22:00:04 crc kubenswrapper[4910]: I0226 22:00:04.329431 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"73eb2ee7-3e19-4ab4-9479-484b4c85802a","Type":"ContainerDied","Data":"8876d43125609b872da8a7d02bfe06ac58d1b84f5140773f4cdebd31d6ab6fdf"} Feb 26 22:00:04 crc kubenswrapper[4910]: I0226 22:00:04.331752 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dp8sv" event={"ID":"d0c0a0be-62f6-4642-aeab-2f08a5cffedb","Type":"ContainerStarted","Data":"c0753f0ea9fdf31eeaa228511bd7cb92b4226e6a3f4379ac9a95c8aeff642790"} Feb 26 22:00:04 crc kubenswrapper[4910]: I0226 22:00:04.333482 4910 generic.go:334] "Generic (PLEG): container finished" podID="eac9ac29-7799-49f9-93e2-9b684ce0eb41" containerID="cf3c4e5862f95e0522162322088512c5b470309db36aab6cfcbd048387fbb98c" exitCode=0 Feb 26 22:00:04 crc kubenswrapper[4910]: I0226 22:00:04.333512 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"eac9ac29-7799-49f9-93e2-9b684ce0eb41","Type":"ContainerDied","Data":"cf3c4e5862f95e0522162322088512c5b470309db36aab6cfcbd048387fbb98c"} Feb 26 22:00:04 crc kubenswrapper[4910]: I0226 22:00:04.335442 4910 generic.go:334] "Generic (PLEG): container finished" podID="8e8141fe-ffee-4e29-9c55-8991c8226b10" containerID="d8dbedeece53769161a6c55ff790849ddbf0c48126f0b7b7caae0b00c6c39998" exitCode=0 Feb 26 22:00:04 crc kubenswrapper[4910]: I0226 22:00:04.335502 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8e8141fe-ffee-4e29-9c55-8991c8226b10","Type":"ContainerDied","Data":"d8dbedeece53769161a6c55ff790849ddbf0c48126f0b7b7caae0b00c6c39998"} Feb 26 22:00:04 crc kubenswrapper[4910]: I0226 22:00:04.336796 4910 generic.go:334] "Generic (PLEG): container finished" podID="8f16dbf5-b263-4e40-b38b-d615de6d7b2c" containerID="d7541c5a27dfa8fced56b1bff62df57958de7cd11dd4ec435a8bbb91dee05ea6" exitCode=0 Feb 26 22:00:04 crc kubenswrapper[4910]: I0226 
22:00:04.336860 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535720-ghhr4" event={"ID":"8f16dbf5-b263-4e40-b38b-d615de6d7b2c","Type":"ContainerDied","Data":"d7541c5a27dfa8fced56b1bff62df57958de7cd11dd4ec435a8bbb91dee05ea6"} Feb 26 22:00:04 crc kubenswrapper[4910]: I0226 22:00:04.339067 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-d5db645bc-f6sdv_caa28600-ac81-41ed-8375-9f479b9292a1/route-controller-manager/0.log" Feb 26 22:00:04 crc kubenswrapper[4910]: I0226 22:00:04.339096 4910 generic.go:334] "Generic (PLEG): container finished" podID="caa28600-ac81-41ed-8375-9f479b9292a1" containerID="f3a8c04e76e7e9b1334ca05cd74ea785afbf40a48c8d07bdd06a00bc2b6ec41c" exitCode=255 Feb 26 22:00:04 crc kubenswrapper[4910]: I0226 22:00:04.339152 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv" event={"ID":"caa28600-ac81-41ed-8375-9f479b9292a1","Type":"ContainerDied","Data":"f3a8c04e76e7e9b1334ca05cd74ea785afbf40a48c8d07bdd06a00bc2b6ec41c"} Feb 26 22:00:04 crc kubenswrapper[4910]: I0226 22:00:04.339193 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv" event={"ID":"caa28600-ac81-41ed-8375-9f479b9292a1","Type":"ContainerDied","Data":"336017c37a47e2a336e168b8246395f6da76bede3a88316aba15c1bdbeee655b"} Feb 26 22:00:04 crc kubenswrapper[4910]: I0226 22:00:04.339210 4910 scope.go:117] "RemoveContainer" containerID="f3a8c04e76e7e9b1334ca05cd74ea785afbf40a48c8d07bdd06a00bc2b6ec41c" Feb 26 22:00:04 crc kubenswrapper[4910]: I0226 22:00:04.339342 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv" Feb 26 22:00:04 crc kubenswrapper[4910]: I0226 22:00:04.355201 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8" event={"ID":"33ffe8a0-473c-45e5-94c1-9ff468579817","Type":"ContainerStarted","Data":"804d5f3153aaee74fe091ec7e06375eb72e3c4f4ec2e5417ad97d576873552ac"} Feb 26 22:00:04 crc kubenswrapper[4910]: I0226 22:00:04.370425 4910 scope.go:117] "RemoveContainer" containerID="f3a8c04e76e7e9b1334ca05cd74ea785afbf40a48c8d07bdd06a00bc2b6ec41c" Feb 26 22:00:04 crc kubenswrapper[4910]: E0226 22:00:04.371405 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3a8c04e76e7e9b1334ca05cd74ea785afbf40a48c8d07bdd06a00bc2b6ec41c\": container with ID starting with f3a8c04e76e7e9b1334ca05cd74ea785afbf40a48c8d07bdd06a00bc2b6ec41c not found: ID does not exist" containerID="f3a8c04e76e7e9b1334ca05cd74ea785afbf40a48c8d07bdd06a00bc2b6ec41c" Feb 26 22:00:04 crc kubenswrapper[4910]: I0226 22:00:04.371457 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3a8c04e76e7e9b1334ca05cd74ea785afbf40a48c8d07bdd06a00bc2b6ec41c"} err="failed to get container status \"f3a8c04e76e7e9b1334ca05cd74ea785afbf40a48c8d07bdd06a00bc2b6ec41c\": rpc error: code = NotFound desc = could not find container \"f3a8c04e76e7e9b1334ca05cd74ea785afbf40a48c8d07bdd06a00bc2b6ec41c\": container with ID starting with f3a8c04e76e7e9b1334ca05cd74ea785afbf40a48c8d07bdd06a00bc2b6ec41c not found: ID does not exist" Feb 26 22:00:04 crc kubenswrapper[4910]: I0226 22:00:04.378635 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5k6t" 
event={"ID":"b8a3a08a-bc54-40b0-a3c1-3da45c9003d3","Type":"ContainerStarted","Data":"d920bfcbc4f1189e3e01cd2b812a7b0821590d60a9e8bfa402af519b0c639c7b"} Feb 26 22:00:04 crc kubenswrapper[4910]: I0226 22:00:04.398791 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv"] Feb 26 22:00:04 crc kubenswrapper[4910]: I0226 22:00:04.403944 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5db645bc-f6sdv"] Feb 26 22:00:04 crc kubenswrapper[4910]: I0226 22:00:04.459206 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dp8sv" podStartSLOduration=1.6999616720000001 podStartE2EDuration="35.459187877s" podCreationTimestamp="2026-02-26 21:59:29 +0000 UTC" firstStartedPulling="2026-02-26 21:59:30.037023943 +0000 UTC m=+255.116514484" lastFinishedPulling="2026-02-26 22:00:03.796250148 +0000 UTC m=+288.875740689" observedRunningTime="2026-02-26 22:00:04.455273591 +0000 UTC m=+289.534764132" watchObservedRunningTime="2026-02-26 22:00:04.459187877 +0000 UTC m=+289.538678418" Feb 26 22:00:04 crc kubenswrapper[4910]: I0226 22:00:04.474433 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s5k6t" podStartSLOduration=8.813014561 podStartE2EDuration="35.474418809s" podCreationTimestamp="2026-02-26 21:59:29 +0000 UTC" firstStartedPulling="2026-02-26 21:59:37.040579789 +0000 UTC m=+262.120070330" lastFinishedPulling="2026-02-26 22:00:03.701984037 +0000 UTC m=+288.781474578" observedRunningTime="2026-02-26 22:00:04.472380753 +0000 UTC m=+289.551871294" watchObservedRunningTime="2026-02-26 22:00:04.474418809 +0000 UTC m=+289.553909350" Feb 26 22:00:05 crc kubenswrapper[4910]: I0226 22:00:05.392320 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8" event={"ID":"33ffe8a0-473c-45e5-94c1-9ff468579817","Type":"ContainerStarted","Data":"f3babdfea60c3ea766917a51f9cf82757742340f4b7236bb158c2a1e6bb30eeb"} Feb 26 22:00:05 crc kubenswrapper[4910]: I0226 22:00:05.393169 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8" Feb 26 22:00:05 crc kubenswrapper[4910]: I0226 22:00:05.399089 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8" Feb 26 22:00:05 crc kubenswrapper[4910]: I0226 22:00:05.411295 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8" podStartSLOduration=19.411276349 podStartE2EDuration="19.411276349s" podCreationTimestamp="2026-02-26 21:59:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:00:05.410145418 +0000 UTC m=+290.489635959" watchObservedRunningTime="2026-02-26 22:00:05.411276349 +0000 UTC m=+290.490766890" Feb 26 22:00:05 crc kubenswrapper[4910]: I0226 22:00:05.768526 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 22:00:05 crc kubenswrapper[4910]: I0226 22:00:05.841787 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73eb2ee7-3e19-4ab4-9479-484b4c85802a-kube-api-access\") pod \"73eb2ee7-3e19-4ab4-9479-484b4c85802a\" (UID: \"73eb2ee7-3e19-4ab4-9479-484b4c85802a\") " Feb 26 22:00:05 crc kubenswrapper[4910]: I0226 22:00:05.841839 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73eb2ee7-3e19-4ab4-9479-484b4c85802a-kubelet-dir\") pod \"73eb2ee7-3e19-4ab4-9479-484b4c85802a\" (UID: \"73eb2ee7-3e19-4ab4-9479-484b4c85802a\") " Feb 26 22:00:05 crc kubenswrapper[4910]: I0226 22:00:05.842135 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73eb2ee7-3e19-4ab4-9479-484b4c85802a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "73eb2ee7-3e19-4ab4-9479-484b4c85802a" (UID: "73eb2ee7-3e19-4ab4-9479-484b4c85802a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 22:00:05 crc kubenswrapper[4910]: I0226 22:00:05.850759 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73eb2ee7-3e19-4ab4-9479-484b4c85802a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "73eb2ee7-3e19-4ab4-9479-484b4c85802a" (UID: "73eb2ee7-3e19-4ab4-9479-484b4c85802a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:00:05 crc kubenswrapper[4910]: I0226 22:00:05.859307 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 22:00:05 crc kubenswrapper[4910]: I0226 22:00:05.875781 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535720-ghhr4" Feb 26 22:00:05 crc kubenswrapper[4910]: I0226 22:00:05.884769 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 22:00:05 crc kubenswrapper[4910]: I0226 22:00:05.916312 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caa28600-ac81-41ed-8375-9f479b9292a1" path="/var/lib/kubelet/pods/caa28600-ac81-41ed-8375-9f479b9292a1/volumes" Feb 26 22:00:05 crc kubenswrapper[4910]: I0226 22:00:05.942685 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f16dbf5-b263-4e40-b38b-d615de6d7b2c-secret-volume\") pod \"8f16dbf5-b263-4e40-b38b-d615de6d7b2c\" (UID: \"8f16dbf5-b263-4e40-b38b-d615de6d7b2c\") " Feb 26 22:00:05 crc kubenswrapper[4910]: I0226 22:00:05.942727 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e8141fe-ffee-4e29-9c55-8991c8226b10-kubelet-dir\") pod \"8e8141fe-ffee-4e29-9c55-8991c8226b10\" (UID: \"8e8141fe-ffee-4e29-9c55-8991c8226b10\") " Feb 26 22:00:05 crc kubenswrapper[4910]: I0226 22:00:05.942747 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eac9ac29-7799-49f9-93e2-9b684ce0eb41-kube-api-access\") pod \"eac9ac29-7799-49f9-93e2-9b684ce0eb41\" (UID: \"eac9ac29-7799-49f9-93e2-9b684ce0eb41\") " Feb 26 22:00:05 crc kubenswrapper[4910]: I0226 22:00:05.942811 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkfg2\" (UniqueName: \"kubernetes.io/projected/8f16dbf5-b263-4e40-b38b-d615de6d7b2c-kube-api-access-zkfg2\") pod \"8f16dbf5-b263-4e40-b38b-d615de6d7b2c\" (UID: \"8f16dbf5-b263-4e40-b38b-d615de6d7b2c\") " Feb 26 22:00:05 crc 
kubenswrapper[4910]: I0226 22:00:05.942845 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eac9ac29-7799-49f9-93e2-9b684ce0eb41-kubelet-dir\") pod \"eac9ac29-7799-49f9-93e2-9b684ce0eb41\" (UID: \"eac9ac29-7799-49f9-93e2-9b684ce0eb41\") " Feb 26 22:00:05 crc kubenswrapper[4910]: I0226 22:00:05.942870 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e8141fe-ffee-4e29-9c55-8991c8226b10-kube-api-access\") pod \"8e8141fe-ffee-4e29-9c55-8991c8226b10\" (UID: \"8e8141fe-ffee-4e29-9c55-8991c8226b10\") " Feb 26 22:00:05 crc kubenswrapper[4910]: I0226 22:00:05.942889 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f16dbf5-b263-4e40-b38b-d615de6d7b2c-config-volume\") pod \"8f16dbf5-b263-4e40-b38b-d615de6d7b2c\" (UID: \"8f16dbf5-b263-4e40-b38b-d615de6d7b2c\") " Feb 26 22:00:05 crc kubenswrapper[4910]: I0226 22:00:05.943119 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73eb2ee7-3e19-4ab4-9479-484b4c85802a-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:05 crc kubenswrapper[4910]: I0226 22:00:05.943129 4910 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73eb2ee7-3e19-4ab4-9479-484b4c85802a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:05 crc kubenswrapper[4910]: I0226 22:00:05.943663 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f16dbf5-b263-4e40-b38b-d615de6d7b2c-config-volume" (OuterVolumeSpecName: "config-volume") pod "8f16dbf5-b263-4e40-b38b-d615de6d7b2c" (UID: "8f16dbf5-b263-4e40-b38b-d615de6d7b2c"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:00:05 crc kubenswrapper[4910]: I0226 22:00:05.943698 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e8141fe-ffee-4e29-9c55-8991c8226b10-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8e8141fe-ffee-4e29-9c55-8991c8226b10" (UID: "8e8141fe-ffee-4e29-9c55-8991c8226b10"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 22:00:05 crc kubenswrapper[4910]: I0226 22:00:05.944999 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eac9ac29-7799-49f9-93e2-9b684ce0eb41-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "eac9ac29-7799-49f9-93e2-9b684ce0eb41" (UID: "eac9ac29-7799-49f9-93e2-9b684ce0eb41"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 22:00:05 crc kubenswrapper[4910]: I0226 22:00:05.948286 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f16dbf5-b263-4e40-b38b-d615de6d7b2c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8f16dbf5-b263-4e40-b38b-d615de6d7b2c" (UID: "8f16dbf5-b263-4e40-b38b-d615de6d7b2c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:00:05 crc kubenswrapper[4910]: I0226 22:00:05.949444 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e8141fe-ffee-4e29-9c55-8991c8226b10-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8e8141fe-ffee-4e29-9c55-8991c8226b10" (UID: "8e8141fe-ffee-4e29-9c55-8991c8226b10"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:00:05 crc kubenswrapper[4910]: I0226 22:00:05.962802 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eac9ac29-7799-49f9-93e2-9b684ce0eb41-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "eac9ac29-7799-49f9-93e2-9b684ce0eb41" (UID: "eac9ac29-7799-49f9-93e2-9b684ce0eb41"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:00:05 crc kubenswrapper[4910]: I0226 22:00:05.962981 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f16dbf5-b263-4e40-b38b-d615de6d7b2c-kube-api-access-zkfg2" (OuterVolumeSpecName: "kube-api-access-zkfg2") pod "8f16dbf5-b263-4e40-b38b-d615de6d7b2c" (UID: "8f16dbf5-b263-4e40-b38b-d615de6d7b2c"). InnerVolumeSpecName "kube-api-access-zkfg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.043890 4910 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f16dbf5-b263-4e40-b38b-d615de6d7b2c-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.043926 4910 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e8141fe-ffee-4e29-9c55-8991c8226b10-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.043936 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eac9ac29-7799-49f9-93e2-9b684ce0eb41-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.043946 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkfg2\" (UniqueName: 
\"kubernetes.io/projected/8f16dbf5-b263-4e40-b38b-d615de6d7b2c-kube-api-access-zkfg2\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.043954 4910 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eac9ac29-7799-49f9-93e2-9b684ce0eb41-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.043962 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e8141fe-ffee-4e29-9c55-8991c8226b10-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.043970 4910 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f16dbf5-b263-4e40-b38b-d615de6d7b2c-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.117783 4910 csr.go:261] certificate signing request csr-jfzwv is approved, waiting to be issued Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.123036 4910 csr.go:257] certificate signing request csr-jfzwv is issued Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.334953 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-9d6f98f68-xlh42"] Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.399247 4910 generic.go:334] "Generic (PLEG): container finished" podID="02081600-34a4-4d71-ab04-6214092a36f1" containerID="1ef6b7e58b3ecf8b8a3fb213e7b305e0e3ac49b86f4b924ba9474d31a09ebe25" exitCode=0 Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.399318 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535720-2nnrc" event={"ID":"02081600-34a4-4d71-ab04-6214092a36f1","Type":"ContainerDied","Data":"1ef6b7e58b3ecf8b8a3fb213e7b305e0e3ac49b86f4b924ba9474d31a09ebe25"} Feb 26 22:00:06 crc 
kubenswrapper[4910]: I0226 22:00:06.402286 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8e8141fe-ffee-4e29-9c55-8991c8226b10","Type":"ContainerDied","Data":"e66560d246131a5c6b4841f46a413783ff6d28a152b7d470497864f2f2fc76e6"} Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.402315 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e66560d246131a5c6b4841f46a413783ff6d28a152b7d470497864f2f2fc76e6" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.402361 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.404596 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535720-ghhr4" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.404609 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535720-ghhr4" event={"ID":"8f16dbf5-b263-4e40-b38b-d615de6d7b2c","Type":"ContainerDied","Data":"f7648512395c88b64445859263265d8d818d1e4050342ad604ff11e1497dd410"} Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.404737 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7648512395c88b64445859263265d8d818d1e4050342ad604ff11e1497dd410" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.412891 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"73eb2ee7-3e19-4ab4-9479-484b4c85802a","Type":"ContainerDied","Data":"fe679d8fd4cf5564f91b5391be97432c814d086232e019961715202dd7d335d5"} Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.412922 4910 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="fe679d8fd4cf5564f91b5391be97432c814d086232e019961715202dd7d335d5" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.412903 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.415336 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"eac9ac29-7799-49f9-93e2-9b684ce0eb41","Type":"ContainerDied","Data":"31baaa597db6b84c35ff8b742b8c56b7c0fa36e8d40a483cb3da47de3b67d23c"} Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.415370 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31baaa597db6b84c35ff8b742b8c56b7c0fa36e8d40a483cb3da47de3b67d23c" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.415446 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.415606 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-9d6f98f68-xlh42" podUID="143b25fb-f84c-4cee-9e14-fcaa1a235d0f" containerName="controller-manager" containerID="cri-o://75b5e2da0338db9343e02888db15df6f65aa688cad046c8573731114f21d960d" gracePeriod=30 Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.435627 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8"] Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.819909 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-9d6f98f68-xlh42" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.947046 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 26 22:00:06 crc kubenswrapper[4910]: E0226 22:00:06.947274 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e8141fe-ffee-4e29-9c55-8991c8226b10" containerName="pruner" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.947286 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e8141fe-ffee-4e29-9c55-8991c8226b10" containerName="pruner" Feb 26 22:00:06 crc kubenswrapper[4910]: E0226 22:00:06.947299 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eac9ac29-7799-49f9-93e2-9b684ce0eb41" containerName="pruner" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.947304 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="eac9ac29-7799-49f9-93e2-9b684ce0eb41" containerName="pruner" Feb 26 22:00:06 crc kubenswrapper[4910]: E0226 22:00:06.947788 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="143b25fb-f84c-4cee-9e14-fcaa1a235d0f" containerName="controller-manager" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.947811 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="143b25fb-f84c-4cee-9e14-fcaa1a235d0f" containerName="controller-manager" Feb 26 22:00:06 crc kubenswrapper[4910]: E0226 22:00:06.947865 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f16dbf5-b263-4e40-b38b-d615de6d7b2c" containerName="collect-profiles" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.947872 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f16dbf5-b263-4e40-b38b-d615de6d7b2c" containerName="collect-profiles" Feb 26 22:00:06 crc kubenswrapper[4910]: E0226 22:00:06.947880 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73eb2ee7-3e19-4ab4-9479-484b4c85802a" 
containerName="pruner" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.947885 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="73eb2ee7-3e19-4ab4-9479-484b4c85802a" containerName="pruner" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.949041 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="eac9ac29-7799-49f9-93e2-9b684ce0eb41" containerName="pruner" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.949064 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="143b25fb-f84c-4cee-9e14-fcaa1a235d0f" containerName="controller-manager" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.949073 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f16dbf5-b263-4e40-b38b-d615de6d7b2c" containerName="collect-profiles" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.949082 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e8141fe-ffee-4e29-9c55-8991c8226b10" containerName="pruner" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.949091 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="73eb2ee7-3e19-4ab4-9479-484b4c85802a" containerName="pruner" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.949448 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.953350 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.954406 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.954570 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.955512 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-client-ca\") pod \"143b25fb-f84c-4cee-9e14-fcaa1a235d0f\" (UID: \"143b25fb-f84c-4cee-9e14-fcaa1a235d0f\") " Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.955566 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-config\") pod \"143b25fb-f84c-4cee-9e14-fcaa1a235d0f\" (UID: \"143b25fb-f84c-4cee-9e14-fcaa1a235d0f\") " Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.955634 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-serving-cert\") pod \"143b25fb-f84c-4cee-9e14-fcaa1a235d0f\" (UID: \"143b25fb-f84c-4cee-9e14-fcaa1a235d0f\") " Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.955687 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjmxg\" (UniqueName: \"kubernetes.io/projected/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-kube-api-access-wjmxg\") pod \"143b25fb-f84c-4cee-9e14-fcaa1a235d0f\" (UID: \"143b25fb-f84c-4cee-9e14-fcaa1a235d0f\") 
" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.955735 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-proxy-ca-bundles\") pod \"143b25fb-f84c-4cee-9e14-fcaa1a235d0f\" (UID: \"143b25fb-f84c-4cee-9e14-fcaa1a235d0f\") " Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.956598 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "143b25fb-f84c-4cee-9e14-fcaa1a235d0f" (UID: "143b25fb-f84c-4cee-9e14-fcaa1a235d0f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.956646 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-client-ca" (OuterVolumeSpecName: "client-ca") pod "143b25fb-f84c-4cee-9e14-fcaa1a235d0f" (UID: "143b25fb-f84c-4cee-9e14-fcaa1a235d0f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.957330 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-config" (OuterVolumeSpecName: "config") pod "143b25fb-f84c-4cee-9e14-fcaa1a235d0f" (UID: "143b25fb-f84c-4cee-9e14-fcaa1a235d0f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.963113 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "143b25fb-f84c-4cee-9e14-fcaa1a235d0f" (UID: "143b25fb-f84c-4cee-9e14-fcaa1a235d0f"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:00:06 crc kubenswrapper[4910]: I0226 22:00:06.963972 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-kube-api-access-wjmxg" (OuterVolumeSpecName: "kube-api-access-wjmxg") pod "143b25fb-f84c-4cee-9e14-fcaa1a235d0f" (UID: "143b25fb-f84c-4cee-9e14-fcaa1a235d0f"). InnerVolumeSpecName "kube-api-access-wjmxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.057486 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ddb0f143-e336-4b54-a769-47390935e034-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ddb0f143-e336-4b54-a769-47390935e034\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.057598 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ddb0f143-e336-4b54-a769-47390935e034-var-lock\") pod \"installer-9-crc\" (UID: \"ddb0f143-e336-4b54-a769-47390935e034\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.057646 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddb0f143-e336-4b54-a769-47390935e034-kube-api-access\") pod \"installer-9-crc\" (UID: \"ddb0f143-e336-4b54-a769-47390935e034\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.057678 4910 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 
22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.057690 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjmxg\" (UniqueName: \"kubernetes.io/projected/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-kube-api-access-wjmxg\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.057699 4910 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.057709 4910 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.057720 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/143b25fb-f84c-4cee-9e14-fcaa1a235d0f-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.124407 4910 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-01 02:16:23.459974084 +0000 UTC Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.124452 4910 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7396h16m16.335525099s for next certificate rotation Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.158415 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddb0f143-e336-4b54-a769-47390935e034-kube-api-access\") pod \"installer-9-crc\" (UID: \"ddb0f143-e336-4b54-a769-47390935e034\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.158500 4910 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ddb0f143-e336-4b54-a769-47390935e034-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ddb0f143-e336-4b54-a769-47390935e034\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.158592 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ddb0f143-e336-4b54-a769-47390935e034-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ddb0f143-e336-4b54-a769-47390935e034\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.158717 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ddb0f143-e336-4b54-a769-47390935e034-var-lock\") pod \"installer-9-crc\" (UID: \"ddb0f143-e336-4b54-a769-47390935e034\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.158788 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ddb0f143-e336-4b54-a769-47390935e034-var-lock\") pod \"installer-9-crc\" (UID: \"ddb0f143-e336-4b54-a769-47390935e034\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.174247 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddb0f143-e336-4b54-a769-47390935e034-kube-api-access\") pod \"installer-9-crc\" (UID: \"ddb0f143-e336-4b54-a769-47390935e034\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.294052 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.381365 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9"] Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.382073 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9" Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.420902 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9"] Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.438077 4910 generic.go:334] "Generic (PLEG): container finished" podID="143b25fb-f84c-4cee-9e14-fcaa1a235d0f" containerID="75b5e2da0338db9343e02888db15df6f65aa688cad046c8573731114f21d960d" exitCode=0 Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.438202 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-9d6f98f68-xlh42" Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.438485 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9d6f98f68-xlh42" event={"ID":"143b25fb-f84c-4cee-9e14-fcaa1a235d0f","Type":"ContainerDied","Data":"75b5e2da0338db9343e02888db15df6f65aa688cad046c8573731114f21d960d"} Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.438547 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9d6f98f68-xlh42" event={"ID":"143b25fb-f84c-4cee-9e14-fcaa1a235d0f","Type":"ContainerDied","Data":"24a858f66583545fd628b78d6bf41a318d812dcea3a421d82b792c1a11a3563e"} Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.438574 4910 scope.go:117] "RemoveContainer" containerID="75b5e2da0338db9343e02888db15df6f65aa688cad046c8573731114f21d960d" Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.462989 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/962a42b1-d418-4ffb-97a2-88ee898dba75-client-ca\") pod \"controller-manager-7f96dfc5f5-6glx9\" (UID: \"962a42b1-d418-4ffb-97a2-88ee898dba75\") " pod="openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9" Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.463044 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-446hw\" (UniqueName: \"kubernetes.io/projected/962a42b1-d418-4ffb-97a2-88ee898dba75-kube-api-access-446hw\") pod \"controller-manager-7f96dfc5f5-6glx9\" (UID: \"962a42b1-d418-4ffb-97a2-88ee898dba75\") " pod="openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9" Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.463071 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/962a42b1-d418-4ffb-97a2-88ee898dba75-serving-cert\") pod \"controller-manager-7f96dfc5f5-6glx9\" (UID: \"962a42b1-d418-4ffb-97a2-88ee898dba75\") " pod="openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9" Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.463122 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/962a42b1-d418-4ffb-97a2-88ee898dba75-proxy-ca-bundles\") pod \"controller-manager-7f96dfc5f5-6glx9\" (UID: \"962a42b1-d418-4ffb-97a2-88ee898dba75\") " pod="openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9" Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.463309 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/962a42b1-d418-4ffb-97a2-88ee898dba75-config\") pod \"controller-manager-7f96dfc5f5-6glx9\" (UID: \"962a42b1-d418-4ffb-97a2-88ee898dba75\") " pod="openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9" Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.468969 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-9d6f98f68-xlh42"] Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.471748 4910 scope.go:117] "RemoveContainer" containerID="75b5e2da0338db9343e02888db15df6f65aa688cad046c8573731114f21d960d" Feb 26 22:00:07 crc kubenswrapper[4910]: E0226 22:00:07.473045 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75b5e2da0338db9343e02888db15df6f65aa688cad046c8573731114f21d960d\": container with ID starting with 75b5e2da0338db9343e02888db15df6f65aa688cad046c8573731114f21d960d not found: ID does not exist" containerID="75b5e2da0338db9343e02888db15df6f65aa688cad046c8573731114f21d960d" Feb 26 22:00:07 crc 
kubenswrapper[4910]: I0226 22:00:07.473096 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75b5e2da0338db9343e02888db15df6f65aa688cad046c8573731114f21d960d"} err="failed to get container status \"75b5e2da0338db9343e02888db15df6f65aa688cad046c8573731114f21d960d\": rpc error: code = NotFound desc = could not find container \"75b5e2da0338db9343e02888db15df6f65aa688cad046c8573731114f21d960d\": container with ID starting with 75b5e2da0338db9343e02888db15df6f65aa688cad046c8573731114f21d960d not found: ID does not exist" Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.473882 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-9d6f98f68-xlh42"] Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.532782 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 26 22:00:07 crc kubenswrapper[4910]: W0226 22:00:07.545890 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podddb0f143_e336_4b54_a769_47390935e034.slice/crio-e8eec41f1f6c567c6f153ae34e39234c06165626a5c86403f657c56cae53345e WatchSource:0}: Error finding container e8eec41f1f6c567c6f153ae34e39234c06165626a5c86403f657c56cae53345e: Status 404 returned error can't find the container with id e8eec41f1f6c567c6f153ae34e39234c06165626a5c86403f657c56cae53345e Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.564730 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/962a42b1-d418-4ffb-97a2-88ee898dba75-client-ca\") pod \"controller-manager-7f96dfc5f5-6glx9\" (UID: \"962a42b1-d418-4ffb-97a2-88ee898dba75\") " pod="openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9" Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.564797 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-446hw\" (UniqueName: \"kubernetes.io/projected/962a42b1-d418-4ffb-97a2-88ee898dba75-kube-api-access-446hw\") pod \"controller-manager-7f96dfc5f5-6glx9\" (UID: \"962a42b1-d418-4ffb-97a2-88ee898dba75\") " pod="openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9"
Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.564839 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/962a42b1-d418-4ffb-97a2-88ee898dba75-proxy-ca-bundles\") pod \"controller-manager-7f96dfc5f5-6glx9\" (UID: \"962a42b1-d418-4ffb-97a2-88ee898dba75\") " pod="openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9"
Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.564863 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/962a42b1-d418-4ffb-97a2-88ee898dba75-serving-cert\") pod \"controller-manager-7f96dfc5f5-6glx9\" (UID: \"962a42b1-d418-4ffb-97a2-88ee898dba75\") " pod="openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9"
Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.564954 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/962a42b1-d418-4ffb-97a2-88ee898dba75-config\") pod \"controller-manager-7f96dfc5f5-6glx9\" (UID: \"962a42b1-d418-4ffb-97a2-88ee898dba75\") " pod="openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9"
Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.565969 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/962a42b1-d418-4ffb-97a2-88ee898dba75-client-ca\") pod \"controller-manager-7f96dfc5f5-6glx9\" (UID: \"962a42b1-d418-4ffb-97a2-88ee898dba75\") " pod="openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9"
Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.566965 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/962a42b1-d418-4ffb-97a2-88ee898dba75-proxy-ca-bundles\") pod \"controller-manager-7f96dfc5f5-6glx9\" (UID: \"962a42b1-d418-4ffb-97a2-88ee898dba75\") " pod="openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9"
Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.568002 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/962a42b1-d418-4ffb-97a2-88ee898dba75-config\") pod \"controller-manager-7f96dfc5f5-6glx9\" (UID: \"962a42b1-d418-4ffb-97a2-88ee898dba75\") " pod="openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9"
Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.572991 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/962a42b1-d418-4ffb-97a2-88ee898dba75-serving-cert\") pod \"controller-manager-7f96dfc5f5-6glx9\" (UID: \"962a42b1-d418-4ffb-97a2-88ee898dba75\") " pod="openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9"
Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.589256 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-446hw\" (UniqueName: \"kubernetes.io/projected/962a42b1-d418-4ffb-97a2-88ee898dba75-kube-api-access-446hw\") pod \"controller-manager-7f96dfc5f5-6glx9\" (UID: \"962a42b1-d418-4ffb-97a2-88ee898dba75\") " pod="openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9"
Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.643249 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535720-2nnrc"
Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.738570 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9"
Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.767782 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qld6\" (UniqueName: \"kubernetes.io/projected/02081600-34a4-4d71-ab04-6214092a36f1-kube-api-access-7qld6\") pod \"02081600-34a4-4d71-ab04-6214092a36f1\" (UID: \"02081600-34a4-4d71-ab04-6214092a36f1\") "
Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.771645 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02081600-34a4-4d71-ab04-6214092a36f1-kube-api-access-7qld6" (OuterVolumeSpecName: "kube-api-access-7qld6") pod "02081600-34a4-4d71-ab04-6214092a36f1" (UID: "02081600-34a4-4d71-ab04-6214092a36f1"). InnerVolumeSpecName "kube-api-access-7qld6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.869356 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qld6\" (UniqueName: \"kubernetes.io/projected/02081600-34a4-4d71-ab04-6214092a36f1-kube-api-access-7qld6\") on node \"crc\" DevicePath \"\""
Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.909369 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="143b25fb-f84c-4cee-9e14-fcaa1a235d0f" path="/var/lib/kubelet/pods/143b25fb-f84c-4cee-9e14-fcaa1a235d0f/volumes"
Feb 26 22:00:07 crc kubenswrapper[4910]: I0226 22:00:07.914833 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9"]
Feb 26 22:00:07 crc kubenswrapper[4910]: W0226 22:00:07.927881 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod962a42b1_d418_4ffb_97a2_88ee898dba75.slice/crio-45e4d537d28db432b90353d99caaef341a82b7413e849321419c848a3de5310f WatchSource:0}: Error finding container 45e4d537d28db432b90353d99caaef341a82b7413e849321419c848a3de5310f: Status 404 returned error can't find the container with id 45e4d537d28db432b90353d99caaef341a82b7413e849321419c848a3de5310f
Feb 26 22:00:08 crc kubenswrapper[4910]: I0226 22:00:08.125058 4910 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-07 23:02:03.481696655 +0000 UTC
Feb 26 22:00:08 crc kubenswrapper[4910]: I0226 22:00:08.125393 4910 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6097h1m55.356307526s for next certificate rotation
Feb 26 22:00:08 crc kubenswrapper[4910]: I0226 22:00:08.447645 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535720-2nnrc"
Feb 26 22:00:08 crc kubenswrapper[4910]: I0226 22:00:08.447649 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535720-2nnrc" event={"ID":"02081600-34a4-4d71-ab04-6214092a36f1","Type":"ContainerDied","Data":"58a1174950d61aa203b31c55688292043d35d28a13bd37f0d9688c67ac79261c"}
Feb 26 22:00:08 crc kubenswrapper[4910]: I0226 22:00:08.447789 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58a1174950d61aa203b31c55688292043d35d28a13bd37f0d9688c67ac79261c"
Feb 26 22:00:08 crc kubenswrapper[4910]: I0226 22:00:08.453084 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ddb0f143-e336-4b54-a769-47390935e034","Type":"ContainerStarted","Data":"74254b6e477360573b7393cb9995fc4fe752b558e13cd9d3198153f3f616d123"}
Feb 26 22:00:08 crc kubenswrapper[4910]: I0226 22:00:08.453147 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ddb0f143-e336-4b54-a769-47390935e034","Type":"ContainerStarted","Data":"e8eec41f1f6c567c6f153ae34e39234c06165626a5c86403f657c56cae53345e"}
Feb 26 22:00:08 crc kubenswrapper[4910]: I0226 22:00:08.455154 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9" event={"ID":"962a42b1-d418-4ffb-97a2-88ee898dba75","Type":"ContainerStarted","Data":"456038e26d7386a772f38b4795dba7fedbcdde6dd0895dd463be136f3714347f"}
Feb 26 22:00:08 crc kubenswrapper[4910]: I0226 22:00:08.455210 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9" event={"ID":"962a42b1-d418-4ffb-97a2-88ee898dba75","Type":"ContainerStarted","Data":"45e4d537d28db432b90353d99caaef341a82b7413e849321419c848a3de5310f"}
Feb 26 22:00:08 crc kubenswrapper[4910]: I0226 22:00:08.455282 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8" podUID="33ffe8a0-473c-45e5-94c1-9ff468579817" containerName="route-controller-manager" containerID="cri-o://f3babdfea60c3ea766917a51f9cf82757742340f4b7236bb158c2a1e6bb30eeb" gracePeriod=30
Feb 26 22:00:08 crc kubenswrapper[4910]: I0226 22:00:08.496588 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.496572083 podStartE2EDuration="2.496572083s" podCreationTimestamp="2026-02-26 22:00:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:00:08.472632086 +0000 UTC m=+293.552122627" watchObservedRunningTime="2026-02-26 22:00:08.496572083 +0000 UTC m=+293.576062614"
Feb 26 22:00:08 crc kubenswrapper[4910]: I0226 22:00:08.497696 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9" podStartSLOduration=2.497691214 podStartE2EDuration="2.497691214s" podCreationTimestamp="2026-02-26 22:00:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:00:08.494022385 +0000 UTC m=+293.573512916" watchObservedRunningTime="2026-02-26 22:00:08.497691214 +0000 UTC m=+293.577181745"
Feb 26 22:00:08 crc kubenswrapper[4910]: I0226 22:00:08.844000 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8"
Feb 26 22:00:08 crc kubenswrapper[4910]: I0226 22:00:08.986559 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33ffe8a0-473c-45e5-94c1-9ff468579817-config\") pod \"33ffe8a0-473c-45e5-94c1-9ff468579817\" (UID: \"33ffe8a0-473c-45e5-94c1-9ff468579817\") "
Feb 26 22:00:08 crc kubenswrapper[4910]: I0226 22:00:08.986633 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fv7b\" (UniqueName: \"kubernetes.io/projected/33ffe8a0-473c-45e5-94c1-9ff468579817-kube-api-access-7fv7b\") pod \"33ffe8a0-473c-45e5-94c1-9ff468579817\" (UID: \"33ffe8a0-473c-45e5-94c1-9ff468579817\") "
Feb 26 22:00:08 crc kubenswrapper[4910]: I0226 22:00:08.986681 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33ffe8a0-473c-45e5-94c1-9ff468579817-client-ca\") pod \"33ffe8a0-473c-45e5-94c1-9ff468579817\" (UID: \"33ffe8a0-473c-45e5-94c1-9ff468579817\") "
Feb 26 22:00:08 crc kubenswrapper[4910]: I0226 22:00:08.986727 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33ffe8a0-473c-45e5-94c1-9ff468579817-serving-cert\") pod \"33ffe8a0-473c-45e5-94c1-9ff468579817\" (UID: \"33ffe8a0-473c-45e5-94c1-9ff468579817\") "
Feb 26 22:00:08 crc kubenswrapper[4910]: I0226 22:00:08.987343 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33ffe8a0-473c-45e5-94c1-9ff468579817-client-ca" (OuterVolumeSpecName: "client-ca") pod "33ffe8a0-473c-45e5-94c1-9ff468579817" (UID: "33ffe8a0-473c-45e5-94c1-9ff468579817"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 22:00:08 crc kubenswrapper[4910]: I0226 22:00:08.987441 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33ffe8a0-473c-45e5-94c1-9ff468579817-config" (OuterVolumeSpecName: "config") pod "33ffe8a0-473c-45e5-94c1-9ff468579817" (UID: "33ffe8a0-473c-45e5-94c1-9ff468579817"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 22:00:08 crc kubenswrapper[4910]: I0226 22:00:08.992826 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ffe8a0-473c-45e5-94c1-9ff468579817-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "33ffe8a0-473c-45e5-94c1-9ff468579817" (UID: "33ffe8a0-473c-45e5-94c1-9ff468579817"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 22:00:08 crc kubenswrapper[4910]: I0226 22:00:08.994462 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33ffe8a0-473c-45e5-94c1-9ff468579817-kube-api-access-7fv7b" (OuterVolumeSpecName: "kube-api-access-7fv7b") pod "33ffe8a0-473c-45e5-94c1-9ff468579817" (UID: "33ffe8a0-473c-45e5-94c1-9ff468579817"). InnerVolumeSpecName "kube-api-access-7fv7b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 22:00:09 crc kubenswrapper[4910]: I0226 22:00:09.088666 4910 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33ffe8a0-473c-45e5-94c1-9ff468579817-client-ca\") on node \"crc\" DevicePath \"\""
Feb 26 22:00:09 crc kubenswrapper[4910]: I0226 22:00:09.088702 4910 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33ffe8a0-473c-45e5-94c1-9ff468579817-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 26 22:00:09 crc kubenswrapper[4910]: I0226 22:00:09.088711 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33ffe8a0-473c-45e5-94c1-9ff468579817-config\") on node \"crc\" DevicePath \"\""
Feb 26 22:00:09 crc kubenswrapper[4910]: I0226 22:00:09.088720 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fv7b\" (UniqueName: \"kubernetes.io/projected/33ffe8a0-473c-45e5-94c1-9ff468579817-kube-api-access-7fv7b\") on node \"crc\" DevicePath \"\""
Feb 26 22:00:09 crc kubenswrapper[4910]: I0226 22:00:09.342653 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dp8sv"
Feb 26 22:00:09 crc kubenswrapper[4910]: I0226 22:00:09.342714 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dp8sv"
Feb 26 22:00:09 crc kubenswrapper[4910]: I0226 22:00:09.460871 4910 generic.go:334] "Generic (PLEG): container finished" podID="33ffe8a0-473c-45e5-94c1-9ff468579817" containerID="f3babdfea60c3ea766917a51f9cf82757742340f4b7236bb158c2a1e6bb30eeb" exitCode=0
Feb 26 22:00:09 crc kubenswrapper[4910]: I0226 22:00:09.460932 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8"
Feb 26 22:00:09 crc kubenswrapper[4910]: I0226 22:00:09.460926 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8" event={"ID":"33ffe8a0-473c-45e5-94c1-9ff468579817","Type":"ContainerDied","Data":"f3babdfea60c3ea766917a51f9cf82757742340f4b7236bb158c2a1e6bb30eeb"}
Feb 26 22:00:09 crc kubenswrapper[4910]: I0226 22:00:09.460984 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8" event={"ID":"33ffe8a0-473c-45e5-94c1-9ff468579817","Type":"ContainerDied","Data":"804d5f3153aaee74fe091ec7e06375eb72e3c4f4ec2e5417ad97d576873552ac"}
Feb 26 22:00:09 crc kubenswrapper[4910]: I0226 22:00:09.461050 4910 scope.go:117] "RemoveContainer" containerID="f3babdfea60c3ea766917a51f9cf82757742340f4b7236bb158c2a1e6bb30eeb"
Feb 26 22:00:09 crc kubenswrapper[4910]: I0226 22:00:09.463145 4910 generic.go:334] "Generic (PLEG): container finished" podID="e8fe4d9f-ec8c-4d29-a7e6-1534270d5d05" containerID="6b7341c3b2fc08aec3a4cbd8c76970c4c96bf68bf35afcd947d4459e75bb407c" exitCode=0
Feb 26 22:00:09 crc kubenswrapper[4910]: I0226 22:00:09.463770 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535718-4rxms" event={"ID":"e8fe4d9f-ec8c-4d29-a7e6-1534270d5d05","Type":"ContainerDied","Data":"6b7341c3b2fc08aec3a4cbd8c76970c4c96bf68bf35afcd947d4459e75bb407c"}
Feb 26 22:00:09 crc kubenswrapper[4910]: I0226 22:00:09.464026 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9"
Feb 26 22:00:09 crc kubenswrapper[4910]: I0226 22:00:09.468353 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9"
Feb 26 22:00:09 crc kubenswrapper[4910]: I0226 22:00:09.481204 4910 scope.go:117] "RemoveContainer" containerID="f3babdfea60c3ea766917a51f9cf82757742340f4b7236bb158c2a1e6bb30eeb"
Feb 26 22:00:09 crc kubenswrapper[4910]: E0226 22:00:09.482093 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3babdfea60c3ea766917a51f9cf82757742340f4b7236bb158c2a1e6bb30eeb\": container with ID starting with f3babdfea60c3ea766917a51f9cf82757742340f4b7236bb158c2a1e6bb30eeb not found: ID does not exist" containerID="f3babdfea60c3ea766917a51f9cf82757742340f4b7236bb158c2a1e6bb30eeb"
Feb 26 22:00:09 crc kubenswrapper[4910]: I0226 22:00:09.482133 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3babdfea60c3ea766917a51f9cf82757742340f4b7236bb158c2a1e6bb30eeb"} err="failed to get container status \"f3babdfea60c3ea766917a51f9cf82757742340f4b7236bb158c2a1e6bb30eeb\": rpc error: code = NotFound desc = could not find container \"f3babdfea60c3ea766917a51f9cf82757742340f4b7236bb158c2a1e6bb30eeb\": container with ID starting with f3babdfea60c3ea766917a51f9cf82757742340f4b7236bb158c2a1e6bb30eeb not found: ID does not exist"
Feb 26 22:00:09 crc kubenswrapper[4910]: I0226 22:00:09.483532 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dp8sv"
Feb 26 22:00:09 crc kubenswrapper[4910]: I0226 22:00:09.525591 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8"]
Feb 26 22:00:09 crc kubenswrapper[4910]: I0226 22:00:09.528522 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dp8sv"
Feb 26 22:00:09 crc kubenswrapper[4910]: I0226 22:00:09.528809 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54bd779d8f-kdph8"]
Feb 26 22:00:09 crc kubenswrapper[4910]: I0226 22:00:09.751818 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s5k6t"
Feb 26 22:00:09 crc kubenswrapper[4910]: I0226 22:00:09.751865 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s5k6t"
Feb 26 22:00:09 crc kubenswrapper[4910]: I0226 22:00:09.791780 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s5k6t"
Feb 26 22:00:09 crc kubenswrapper[4910]: I0226 22:00:09.912147 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33ffe8a0-473c-45e5-94c1-9ff468579817" path="/var/lib/kubelet/pods/33ffe8a0-473c-45e5-94c1-9ff468579817/volumes"
Feb 26 22:00:10 crc kubenswrapper[4910]: I0226 22:00:10.385149 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq"]
Feb 26 22:00:10 crc kubenswrapper[4910]: E0226 22:00:10.385435 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33ffe8a0-473c-45e5-94c1-9ff468579817" containerName="route-controller-manager"
Feb 26 22:00:10 crc kubenswrapper[4910]: I0226 22:00:10.385453 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="33ffe8a0-473c-45e5-94c1-9ff468579817" containerName="route-controller-manager"
Feb 26 22:00:10 crc kubenswrapper[4910]: E0226 22:00:10.385466 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02081600-34a4-4d71-ab04-6214092a36f1" containerName="oc"
Feb 26 22:00:10 crc kubenswrapper[4910]: I0226 22:00:10.385474 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="02081600-34a4-4d71-ab04-6214092a36f1" containerName="oc"
Feb 26 22:00:10 crc kubenswrapper[4910]: I0226 22:00:10.385591 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="33ffe8a0-473c-45e5-94c1-9ff468579817" containerName="route-controller-manager"
Feb 26 22:00:10 crc kubenswrapper[4910]: I0226 22:00:10.385607 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="02081600-34a4-4d71-ab04-6214092a36f1" containerName="oc"
Feb 26 22:00:10 crc kubenswrapper[4910]: I0226 22:00:10.386796 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq"
Feb 26 22:00:10 crc kubenswrapper[4910]: I0226 22:00:10.391010 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 26 22:00:10 crc kubenswrapper[4910]: I0226 22:00:10.391316 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq"]
Feb 26 22:00:10 crc kubenswrapper[4910]: I0226 22:00:10.391594 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 26 22:00:10 crc kubenswrapper[4910]: I0226 22:00:10.392599 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 26 22:00:10 crc kubenswrapper[4910]: I0226 22:00:10.392708 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 26 22:00:10 crc kubenswrapper[4910]: I0226 22:00:10.392776 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 26 22:00:10 crc kubenswrapper[4910]: I0226 22:00:10.394150 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 26 22:00:10 crc kubenswrapper[4910]: I0226 22:00:10.509179 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d48a0895-18de-4009-8bd4-330940486c20-serving-cert\") pod \"route-controller-manager-56869bdf68-29scq\" (UID: \"d48a0895-18de-4009-8bd4-330940486c20\") " pod="openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq"
Feb 26 22:00:10 crc kubenswrapper[4910]: I0226 22:00:10.509221 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d48a0895-18de-4009-8bd4-330940486c20-client-ca\") pod \"route-controller-manager-56869bdf68-29scq\" (UID: \"d48a0895-18de-4009-8bd4-330940486c20\") " pod="openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq"
Feb 26 22:00:10 crc kubenswrapper[4910]: I0226 22:00:10.509272 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d48a0895-18de-4009-8bd4-330940486c20-config\") pod \"route-controller-manager-56869bdf68-29scq\" (UID: \"d48a0895-18de-4009-8bd4-330940486c20\") " pod="openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq"
Feb 26 22:00:10 crc kubenswrapper[4910]: I0226 22:00:10.509293 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4glzd\" (UniqueName: \"kubernetes.io/projected/d48a0895-18de-4009-8bd4-330940486c20-kube-api-access-4glzd\") pod \"route-controller-manager-56869bdf68-29scq\" (UID: \"d48a0895-18de-4009-8bd4-330940486c20\") " pod="openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq"
Feb 26 22:00:10 crc kubenswrapper[4910]: I0226 22:00:10.515327 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s5k6t"
Feb 26 22:00:10 crc kubenswrapper[4910]: I0226 22:00:10.613763 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d48a0895-18de-4009-8bd4-330940486c20-config\") pod \"route-controller-manager-56869bdf68-29scq\" (UID: \"d48a0895-18de-4009-8bd4-330940486c20\") " pod="openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq"
Feb 26 22:00:10 crc kubenswrapper[4910]: I0226 22:00:10.613804 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4glzd\" (UniqueName: \"kubernetes.io/projected/d48a0895-18de-4009-8bd4-330940486c20-kube-api-access-4glzd\") pod \"route-controller-manager-56869bdf68-29scq\" (UID: \"d48a0895-18de-4009-8bd4-330940486c20\") " pod="openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq"
Feb 26 22:00:10 crc kubenswrapper[4910]: I0226 22:00:10.613924 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d48a0895-18de-4009-8bd4-330940486c20-serving-cert\") pod \"route-controller-manager-56869bdf68-29scq\" (UID: \"d48a0895-18de-4009-8bd4-330940486c20\") " pod="openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq"
Feb 26 22:00:10 crc kubenswrapper[4910]: I0226 22:00:10.613947 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d48a0895-18de-4009-8bd4-330940486c20-client-ca\") pod \"route-controller-manager-56869bdf68-29scq\" (UID: \"d48a0895-18de-4009-8bd4-330940486c20\") " pod="openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq"
Feb 26 22:00:10 crc kubenswrapper[4910]: I0226 22:00:10.614834 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d48a0895-18de-4009-8bd4-330940486c20-config\") pod \"route-controller-manager-56869bdf68-29scq\" (UID: \"d48a0895-18de-4009-8bd4-330940486c20\") " pod="openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq"
Feb 26 22:00:10 crc kubenswrapper[4910]: I0226 22:00:10.615177 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d48a0895-18de-4009-8bd4-330940486c20-client-ca\") pod \"route-controller-manager-56869bdf68-29scq\" (UID: \"d48a0895-18de-4009-8bd4-330940486c20\") " pod="openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq"
Feb 26 22:00:10 crc kubenswrapper[4910]: I0226 22:00:10.619151 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d48a0895-18de-4009-8bd4-330940486c20-serving-cert\") pod \"route-controller-manager-56869bdf68-29scq\" (UID: \"d48a0895-18de-4009-8bd4-330940486c20\") " pod="openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq"
Feb 26 22:00:10 crc kubenswrapper[4910]: I0226 22:00:10.627906 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4glzd\" (UniqueName: \"kubernetes.io/projected/d48a0895-18de-4009-8bd4-330940486c20-kube-api-access-4glzd\") pod \"route-controller-manager-56869bdf68-29scq\" (UID: \"d48a0895-18de-4009-8bd4-330940486c20\") " pod="openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq"
Feb 26 22:00:10 crc kubenswrapper[4910]: I0226 22:00:10.703514 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq"
Feb 26 22:00:11 crc kubenswrapper[4910]: I0226 22:00:11.631908 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5k6t"]
Feb 26 22:00:12 crc kubenswrapper[4910]: I0226 22:00:12.479507 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s5k6t" podUID="b8a3a08a-bc54-40b0-a3c1-3da45c9003d3" containerName="registry-server" containerID="cri-o://d920bfcbc4f1189e3e01cd2b812a7b0821590d60a9e8bfa402af519b0c639c7b" gracePeriod=2
Feb 26 22:00:13 crc kubenswrapper[4910]: I0226 22:00:13.485866 4910 generic.go:334] "Generic (PLEG): container finished" podID="b8a3a08a-bc54-40b0-a3c1-3da45c9003d3" containerID="d920bfcbc4f1189e3e01cd2b812a7b0821590d60a9e8bfa402af519b0c639c7b" exitCode=0
Feb 26 22:00:13 crc kubenswrapper[4910]: I0226 22:00:13.485914 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5k6t" event={"ID":"b8a3a08a-bc54-40b0-a3c1-3da45c9003d3","Type":"ContainerDied","Data":"d920bfcbc4f1189e3e01cd2b812a7b0821590d60a9e8bfa402af519b0c639c7b"}
Feb 26 22:00:15 crc kubenswrapper[4910]: I0226 22:00:15.146830 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535718-4rxms"
Feb 26 22:00:15 crc kubenswrapper[4910]: I0226 22:00:15.296309 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft88g\" (UniqueName: \"kubernetes.io/projected/e8fe4d9f-ec8c-4d29-a7e6-1534270d5d05-kube-api-access-ft88g\") pod \"e8fe4d9f-ec8c-4d29-a7e6-1534270d5d05\" (UID: \"e8fe4d9f-ec8c-4d29-a7e6-1534270d5d05\") "
Feb 26 22:00:15 crc kubenswrapper[4910]: I0226 22:00:15.302678 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8fe4d9f-ec8c-4d29-a7e6-1534270d5d05-kube-api-access-ft88g" (OuterVolumeSpecName: "kube-api-access-ft88g") pod "e8fe4d9f-ec8c-4d29-a7e6-1534270d5d05" (UID: "e8fe4d9f-ec8c-4d29-a7e6-1534270d5d05"). InnerVolumeSpecName "kube-api-access-ft88g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 22:00:15 crc kubenswrapper[4910]: I0226 22:00:15.398555 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft88g\" (UniqueName: \"kubernetes.io/projected/e8fe4d9f-ec8c-4d29-a7e6-1534270d5d05-kube-api-access-ft88g\") on node \"crc\" DevicePath \"\""
Feb 26 22:00:15 crc kubenswrapper[4910]: I0226 22:00:15.435933 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5k6t"
Feb 26 22:00:15 crc kubenswrapper[4910]: I0226 22:00:15.498897 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535718-4rxms"
Feb 26 22:00:15 crc kubenswrapper[4910]: I0226 22:00:15.498886 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535718-4rxms" event={"ID":"e8fe4d9f-ec8c-4d29-a7e6-1534270d5d05","Type":"ContainerDied","Data":"4ba9d0fb28a061391a0d90ce26c9397b2c414e088d159b5529ce95aa48d1ad85"}
Feb 26 22:00:15 crc kubenswrapper[4910]: I0226 22:00:15.499013 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ba9d0fb28a061391a0d90ce26c9397b2c414e088d159b5529ce95aa48d1ad85"
Feb 26 22:00:15 crc kubenswrapper[4910]: I0226 22:00:15.506639 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5k6t" event={"ID":"b8a3a08a-bc54-40b0-a3c1-3da45c9003d3","Type":"ContainerDied","Data":"78bc151369ea55981e5d8ee09a78cf15c72ee364e98c6b92d8b9e908a052a642"}
Feb 26 22:00:15 crc kubenswrapper[4910]: I0226 22:00:15.506704 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5k6t"
Feb 26 22:00:15 crc kubenswrapper[4910]: I0226 22:00:15.506708 4910 scope.go:117] "RemoveContainer" containerID="d920bfcbc4f1189e3e01cd2b812a7b0821590d60a9e8bfa402af519b0c639c7b"
Feb 26 22:00:15 crc kubenswrapper[4910]: I0226 22:00:15.524585 4910 scope.go:117] "RemoveContainer" containerID="28443eb4ab9629ea95b8c1566d49d46a7d9559614324f36fb97d960099177f36"
Feb 26 22:00:15 crc kubenswrapper[4910]: I0226 22:00:15.557914 4910 scope.go:117] "RemoveContainer" containerID="572f007c7788df7de4866d537eb25326180c2fc166c249f328bec52de67f7d8c"
Feb 26 22:00:15 crc kubenswrapper[4910]: I0226 22:00:15.571292 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq"]
Feb 26 22:00:15 crc kubenswrapper[4910]: W0226 22:00:15.575942 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd48a0895_18de_4009_8bd4_330940486c20.slice/crio-04a480bc17231cfe8524bdf65f67072d5e494baf383c78f6e0d272112912bfe3 WatchSource:0}: Error finding container 04a480bc17231cfe8524bdf65f67072d5e494baf383c78f6e0d272112912bfe3: Status 404 returned error can't find the container with id 04a480bc17231cfe8524bdf65f67072d5e494baf383c78f6e0d272112912bfe3
Feb 26 22:00:15 crc kubenswrapper[4910]: I0226 22:00:15.600839 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8a3a08a-bc54-40b0-a3c1-3da45c9003d3-utilities\") pod \"b8a3a08a-bc54-40b0-a3c1-3da45c9003d3\" (UID: \"b8a3a08a-bc54-40b0-a3c1-3da45c9003d3\") "
Feb 26 22:00:15 crc kubenswrapper[4910]: I0226 22:00:15.600884 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8a3a08a-bc54-40b0-a3c1-3da45c9003d3-catalog-content\") pod \"b8a3a08a-bc54-40b0-a3c1-3da45c9003d3\" (UID: \"b8a3a08a-bc54-40b0-a3c1-3da45c9003d3\") "
Feb 26 22:00:15 crc kubenswrapper[4910]: I0226 22:00:15.600968 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwt7v\" (UniqueName: \"kubernetes.io/projected/b8a3a08a-bc54-40b0-a3c1-3da45c9003d3-kube-api-access-cwt7v\") pod \"b8a3a08a-bc54-40b0-a3c1-3da45c9003d3\" (UID: \"b8a3a08a-bc54-40b0-a3c1-3da45c9003d3\") "
Feb 26 22:00:15 crc kubenswrapper[4910]: I0226 22:00:15.604903 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8a3a08a-bc54-40b0-a3c1-3da45c9003d3-utilities" (OuterVolumeSpecName: "utilities") pod "b8a3a08a-bc54-40b0-a3c1-3da45c9003d3" (UID: "b8a3a08a-bc54-40b0-a3c1-3da45c9003d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 22:00:15 crc kubenswrapper[4910]: I0226 22:00:15.607069 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8a3a08a-bc54-40b0-a3c1-3da45c9003d3-kube-api-access-cwt7v" (OuterVolumeSpecName: "kube-api-access-cwt7v") pod "b8a3a08a-bc54-40b0-a3c1-3da45c9003d3" (UID: "b8a3a08a-bc54-40b0-a3c1-3da45c9003d3"). InnerVolumeSpecName "kube-api-access-cwt7v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 22:00:15 crc kubenswrapper[4910]: I0226 22:00:15.632632 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8a3a08a-bc54-40b0-a3c1-3da45c9003d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8a3a08a-bc54-40b0-a3c1-3da45c9003d3" (UID: "b8a3a08a-bc54-40b0-a3c1-3da45c9003d3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 22:00:15 crc kubenswrapper[4910]: I0226 22:00:15.702357 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8a3a08a-bc54-40b0-a3c1-3da45c9003d3-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 22:00:15 crc kubenswrapper[4910]: I0226 22:00:15.702676 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8a3a08a-bc54-40b0-a3c1-3da45c9003d3-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 22:00:15 crc kubenswrapper[4910]: I0226 22:00:15.702687 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwt7v\" (UniqueName: \"kubernetes.io/projected/b8a3a08a-bc54-40b0-a3c1-3da45c9003d3-kube-api-access-cwt7v\") on node \"crc\" DevicePath \"\""
Feb 26 22:00:15 crc kubenswrapper[4910]: I0226 22:00:15.843157 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5k6t"]
Feb 26 22:00:15 crc kubenswrapper[4910]: I0226 22:00:15.846222 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5k6t"]
Feb 26 22:00:15 crc kubenswrapper[4910]: I0226 22:00:15.908363 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8a3a08a-bc54-40b0-a3c1-3da45c9003d3" path="/var/lib/kubelet/pods/b8a3a08a-bc54-40b0-a3c1-3da45c9003d3/volumes"
Feb 26 22:00:16 crc kubenswrapper[4910]: I0226 22:00:16.513787 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nwths" event={"ID":"91f7141d-853e-4d6f-9b04-ad16b61d0dc7","Type":"ContainerStarted","Data":"9a32c0ab1b1efc9da44909eba12cfecebb982000269b345c64449b086b6f476d"}
Feb 26 22:00:16 crc kubenswrapper[4910]: I0226 22:00:16.515265 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq" event={"ID":"d48a0895-18de-4009-8bd4-330940486c20","Type":"ContainerStarted","Data":"29a8d9a5961646383f86ecb8a5303ce68bf18e190dc9e6263524ddd3f0f74e08"}
Feb 26 22:00:16 crc kubenswrapper[4910]: I0226 22:00:16.515309 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq" event={"ID":"d48a0895-18de-4009-8bd4-330940486c20","Type":"ContainerStarted","Data":"04a480bc17231cfe8524bdf65f67072d5e494baf383c78f6e0d272112912bfe3"}
Feb 26 22:00:16 crc kubenswrapper[4910]: I0226 22:00:16.515488 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq"
Feb 26 22:00:16 crc kubenswrapper[4910]: I0226 22:00:16.517336 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsxdq" event={"ID":"ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8","Type":"ContainerStarted","Data":"6c00d9e80f4805bd279716e6e27ed71829531c1f0087950dbdd4558bbd63a405"}
Feb 26 22:00:16 crc kubenswrapper[4910]: I0226 22:00:16.520987 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq"
Feb 26 22:00:16 crc kubenswrapper[4910]: I0226 22:00:16.571747 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq" podStartSLOduration=10.571731699 podStartE2EDuration="10.571731699s" podCreationTimestamp="2026-02-26 22:00:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:00:16.568514541 +0000 UTC m=+301.648005082" watchObservedRunningTime="2026-02-26 22:00:16.571731699 +0000 UTC m=+301.651222240"
Feb 26 22:00:17 crc kubenswrapper[4910]: I0226 22:00:17.525809 4910 generic.go:334] "Generic (PLEG): container finished"
podID="ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8" containerID="6c00d9e80f4805bd279716e6e27ed71829531c1f0087950dbdd4558bbd63a405" exitCode=0 Feb 26 22:00:17 crc kubenswrapper[4910]: I0226 22:00:17.525889 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsxdq" event={"ID":"ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8","Type":"ContainerDied","Data":"6c00d9e80f4805bd279716e6e27ed71829531c1f0087950dbdd4558bbd63a405"} Feb 26 22:00:17 crc kubenswrapper[4910]: I0226 22:00:17.532004 4910 generic.go:334] "Generic (PLEG): container finished" podID="f51dcf73-fbc7-4a90-849c-448ed9e540f9" containerID="413c147fd221105ed62966f3c87463820b9df31264ff5cba919112e3c08e1855" exitCode=0 Feb 26 22:00:17 crc kubenswrapper[4910]: I0226 22:00:17.532092 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2575w" event={"ID":"f51dcf73-fbc7-4a90-849c-448ed9e540f9","Type":"ContainerDied","Data":"413c147fd221105ed62966f3c87463820b9df31264ff5cba919112e3c08e1855"} Feb 26 22:00:17 crc kubenswrapper[4910]: I0226 22:00:17.536327 4910 generic.go:334] "Generic (PLEG): container finished" podID="a8d202d1-b4f6-4bc1-b633-56ba90788979" containerID="240e03c90f0cbfaf99430276241d763a070f8a2b137b07e386c651e03324d526" exitCode=0 Feb 26 22:00:17 crc kubenswrapper[4910]: I0226 22:00:17.536384 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ggtxj" event={"ID":"a8d202d1-b4f6-4bc1-b633-56ba90788979","Type":"ContainerDied","Data":"240e03c90f0cbfaf99430276241d763a070f8a2b137b07e386c651e03324d526"} Feb 26 22:00:17 crc kubenswrapper[4910]: I0226 22:00:17.547647 4910 generic.go:334] "Generic (PLEG): container finished" podID="4333e88f-8502-46f4-9639-7af62ff1e63c" containerID="0d0e671b5fe648df34d4d4bf38ff2862dc377831b6eca68342cc077979e289c7" exitCode=0 Feb 26 22:00:17 crc kubenswrapper[4910]: I0226 22:00:17.547719 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-6rxsg" event={"ID":"4333e88f-8502-46f4-9639-7af62ff1e63c","Type":"ContainerDied","Data":"0d0e671b5fe648df34d4d4bf38ff2862dc377831b6eca68342cc077979e289c7"} Feb 26 22:00:17 crc kubenswrapper[4910]: I0226 22:00:17.549965 4910 generic.go:334] "Generic (PLEG): container finished" podID="84c1eb30-f57d-4387-bc3f-deae490cdc42" containerID="d35971df923e267241ee3732f4621fe6228402a782b7a739f1d8ce51c946b674" exitCode=0 Feb 26 22:00:17 crc kubenswrapper[4910]: I0226 22:00:17.549999 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvjfk" event={"ID":"84c1eb30-f57d-4387-bc3f-deae490cdc42","Type":"ContainerDied","Data":"d35971df923e267241ee3732f4621fe6228402a782b7a739f1d8ce51c946b674"} Feb 26 22:00:17 crc kubenswrapper[4910]: I0226 22:00:17.552336 4910 generic.go:334] "Generic (PLEG): container finished" podID="91f7141d-853e-4d6f-9b04-ad16b61d0dc7" containerID="9a32c0ab1b1efc9da44909eba12cfecebb982000269b345c64449b086b6f476d" exitCode=0 Feb 26 22:00:17 crc kubenswrapper[4910]: I0226 22:00:17.552855 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nwths" event={"ID":"91f7141d-853e-4d6f-9b04-ad16b61d0dc7","Type":"ContainerDied","Data":"9a32c0ab1b1efc9da44909eba12cfecebb982000269b345c64449b086b6f476d"} Feb 26 22:00:20 crc kubenswrapper[4910]: I0226 22:00:20.572203 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsxdq" event={"ID":"ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8","Type":"ContainerStarted","Data":"6124711e30484bb557ed71d90db751ba66cd33d1877c877abe5cb962ded741a7"} Feb 26 22:00:21 crc kubenswrapper[4910]: I0226 22:00:21.579853 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nwths" event={"ID":"91f7141d-853e-4d6f-9b04-ad16b61d0dc7","Type":"ContainerStarted","Data":"600edb3a566f1b7c7073928f2eec5801475be41cdc76abe89c11bde4e8387bfe"} Feb 
26 22:00:21 crc kubenswrapper[4910]: I0226 22:00:21.583100 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2575w" event={"ID":"f51dcf73-fbc7-4a90-849c-448ed9e540f9","Type":"ContainerStarted","Data":"7e057eff98fd5ed973462924b24cc3e8e21b5a9a648dbf0dab4e0f9dae9fc7fb"} Feb 26 22:00:21 crc kubenswrapper[4910]: I0226 22:00:21.585252 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ggtxj" event={"ID":"a8d202d1-b4f6-4bc1-b633-56ba90788979","Type":"ContainerStarted","Data":"12b28eb37b53ae2e54c57cefcdf1f16d3f628d80f23022494faea48f00050572"} Feb 26 22:00:21 crc kubenswrapper[4910]: I0226 22:00:21.587471 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rxsg" event={"ID":"4333e88f-8502-46f4-9639-7af62ff1e63c","Type":"ContainerStarted","Data":"a903d517f9328235923b347d9ddc4e8266196e6d92274d4b352529dfd119fc6c"} Feb 26 22:00:21 crc kubenswrapper[4910]: I0226 22:00:21.589928 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvjfk" event={"ID":"84c1eb30-f57d-4387-bc3f-deae490cdc42","Type":"ContainerStarted","Data":"4e30195ec33c432c4d2440f053d6a86ede742a12a6852c231de09dfd62f77c11"} Feb 26 22:00:21 crc kubenswrapper[4910]: I0226 22:00:21.601337 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nwths" podStartSLOduration=33.95526719 podStartE2EDuration="51.601317583s" podCreationTimestamp="2026-02-26 21:59:30 +0000 UTC" firstStartedPulling="2026-02-26 22:00:03.316520547 +0000 UTC m=+288.396011098" lastFinishedPulling="2026-02-26 22:00:20.96257095 +0000 UTC m=+306.042061491" observedRunningTime="2026-02-26 22:00:21.600536502 +0000 UTC m=+306.680027043" watchObservedRunningTime="2026-02-26 22:00:21.601317583 +0000 UTC m=+306.680808124" Feb 26 22:00:21 crc kubenswrapper[4910]: I0226 22:00:21.620668 4910 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hvjfk" podStartSLOduration=2.444956483 podStartE2EDuration="54.620648197s" podCreationTimestamp="2026-02-26 21:59:27 +0000 UTC" firstStartedPulling="2026-02-26 21:59:28.955537799 +0000 UTC m=+254.035028340" lastFinishedPulling="2026-02-26 22:00:21.131229513 +0000 UTC m=+306.210720054" observedRunningTime="2026-02-26 22:00:21.616540895 +0000 UTC m=+306.696031446" watchObservedRunningTime="2026-02-26 22:00:21.620648197 +0000 UTC m=+306.700138738" Feb 26 22:00:21 crc kubenswrapper[4910]: I0226 22:00:21.642618 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6rxsg" podStartSLOduration=2.542991818 podStartE2EDuration="54.64260098s" podCreationTimestamp="2026-02-26 21:59:27 +0000 UTC" firstStartedPulling="2026-02-26 21:59:28.952346937 +0000 UTC m=+254.031837478" lastFinishedPulling="2026-02-26 22:00:21.051956089 +0000 UTC m=+306.131446640" observedRunningTime="2026-02-26 22:00:21.639534057 +0000 UTC m=+306.719024608" watchObservedRunningTime="2026-02-26 22:00:21.64260098 +0000 UTC m=+306.722091521" Feb 26 22:00:21 crc kubenswrapper[4910]: I0226 22:00:21.661705 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ggtxj" podStartSLOduration=2.709641779 podStartE2EDuration="54.661684017s" podCreationTimestamp="2026-02-26 21:59:27 +0000 UTC" firstStartedPulling="2026-02-26 21:59:28.989219676 +0000 UTC m=+254.068710227" lastFinishedPulling="2026-02-26 22:00:20.941261904 +0000 UTC m=+306.020752465" observedRunningTime="2026-02-26 22:00:21.658260515 +0000 UTC m=+306.737751046" watchObservedRunningTime="2026-02-26 22:00:21.661684017 +0000 UTC m=+306.741174568" Feb 26 22:00:21 crc kubenswrapper[4910]: I0226 22:00:21.690267 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-2575w" podStartSLOduration=2.519241484 podStartE2EDuration="54.690252219s" podCreationTimestamp="2026-02-26 21:59:27 +0000 UTC" firstStartedPulling="2026-02-26 21:59:28.981551325 +0000 UTC m=+254.061041866" lastFinishedPulling="2026-02-26 22:00:21.15256206 +0000 UTC m=+306.232052601" observedRunningTime="2026-02-26 22:00:21.689305984 +0000 UTC m=+306.768796535" watchObservedRunningTime="2026-02-26 22:00:21.690252219 +0000 UTC m=+306.769742760" Feb 26 22:00:21 crc kubenswrapper[4910]: I0226 22:00:21.716456 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rsxdq" podStartSLOduration=35.091884656 podStartE2EDuration="51.716436668s" podCreationTimestamp="2026-02-26 21:59:30 +0000 UTC" firstStartedPulling="2026-02-26 22:00:03.318303315 +0000 UTC m=+288.397793856" lastFinishedPulling="2026-02-26 22:00:19.942855327 +0000 UTC m=+305.022345868" observedRunningTime="2026-02-26 22:00:21.712184793 +0000 UTC m=+306.791675334" watchObservedRunningTime="2026-02-26 22:00:21.716436668 +0000 UTC m=+306.795927209" Feb 26 22:00:25 crc kubenswrapper[4910]: I0226 22:00:25.727061 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:00:25 crc kubenswrapper[4910]: I0226 22:00:25.727431 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:00:25 crc kubenswrapper[4910]: I0226 22:00:25.727506 4910 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" Feb 26 22:00:25 crc kubenswrapper[4910]: I0226 22:00:25.728129 4910 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"22d075543a397b11a63e25912605cb14bee4deda66939088572c64d019de782b"} pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 22:00:25 crc kubenswrapper[4910]: I0226 22:00:25.728207 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" containerID="cri-o://22d075543a397b11a63e25912605cb14bee4deda66939088572c64d019de782b" gracePeriod=600 Feb 26 22:00:26 crc kubenswrapper[4910]: I0226 22:00:26.343000 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9"] Feb 26 22:00:26 crc kubenswrapper[4910]: I0226 22:00:26.343254 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9" podUID="962a42b1-d418-4ffb-97a2-88ee898dba75" containerName="controller-manager" containerID="cri-o://456038e26d7386a772f38b4795dba7fedbcdde6dd0895dd463be136f3714347f" gracePeriod=30 Feb 26 22:00:26 crc kubenswrapper[4910]: I0226 22:00:26.356211 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq"] Feb 26 22:00:26 crc kubenswrapper[4910]: I0226 22:00:26.356713 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq" podUID="d48a0895-18de-4009-8bd4-330940486c20" 
containerName="route-controller-manager" containerID="cri-o://29a8d9a5961646383f86ecb8a5303ce68bf18e190dc9e6263524ddd3f0f74e08" gracePeriod=30 Feb 26 22:00:26 crc kubenswrapper[4910]: I0226 22:00:26.620208 4910 generic.go:334] "Generic (PLEG): container finished" podID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerID="22d075543a397b11a63e25912605cb14bee4deda66939088572c64d019de782b" exitCode=0 Feb 26 22:00:26 crc kubenswrapper[4910]: I0226 22:00:26.620255 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerDied","Data":"22d075543a397b11a63e25912605cb14bee4deda66939088572c64d019de782b"} Feb 26 22:00:27 crc kubenswrapper[4910]: I0226 22:00:27.409219 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6rxsg" Feb 26 22:00:27 crc kubenswrapper[4910]: I0226 22:00:27.409586 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6rxsg" Feb 26 22:00:27 crc kubenswrapper[4910]: I0226 22:00:27.458698 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6rxsg" Feb 26 22:00:27 crc kubenswrapper[4910]: I0226 22:00:27.573890 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ggtxj" Feb 26 22:00:27 crc kubenswrapper[4910]: I0226 22:00:27.574335 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ggtxj" Feb 26 22:00:27 crc kubenswrapper[4910]: I0226 22:00:27.619103 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ggtxj" Feb 26 22:00:27 crc kubenswrapper[4910]: I0226 22:00:27.642638 4910 generic.go:334] "Generic (PLEG): container finished" 
podID="962a42b1-d418-4ffb-97a2-88ee898dba75" containerID="456038e26d7386a772f38b4795dba7fedbcdde6dd0895dd463be136f3714347f" exitCode=0 Feb 26 22:00:27 crc kubenswrapper[4910]: I0226 22:00:27.642723 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9" event={"ID":"962a42b1-d418-4ffb-97a2-88ee898dba75","Type":"ContainerDied","Data":"456038e26d7386a772f38b4795dba7fedbcdde6dd0895dd463be136f3714347f"} Feb 26 22:00:27 crc kubenswrapper[4910]: I0226 22:00:27.646274 4910 generic.go:334] "Generic (PLEG): container finished" podID="d48a0895-18de-4009-8bd4-330940486c20" containerID="29a8d9a5961646383f86ecb8a5303ce68bf18e190dc9e6263524ddd3f0f74e08" exitCode=0 Feb 26 22:00:27 crc kubenswrapper[4910]: I0226 22:00:27.646743 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq" event={"ID":"d48a0895-18de-4009-8bd4-330940486c20","Type":"ContainerDied","Data":"29a8d9a5961646383f86ecb8a5303ce68bf18e190dc9e6263524ddd3f0f74e08"} Feb 26 22:00:27 crc kubenswrapper[4910]: I0226 22:00:27.699583 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ggtxj" Feb 26 22:00:27 crc kubenswrapper[4910]: I0226 22:00:27.700064 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6rxsg" Feb 26 22:00:27 crc kubenswrapper[4910]: I0226 22:00:27.739389 4910 patch_prober.go:28] interesting pod/controller-manager-7f96dfc5f5-6glx9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" start-of-body= Feb 26 22:00:27 crc kubenswrapper[4910]: I0226 22:00:27.739468 4910 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9" podUID="962a42b1-d418-4ffb-97a2-88ee898dba75" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" Feb 26 22:00:27 crc kubenswrapper[4910]: I0226 22:00:27.759670 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2575w" Feb 26 22:00:27 crc kubenswrapper[4910]: I0226 22:00:27.759738 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2575w" Feb 26 22:00:27 crc kubenswrapper[4910]: I0226 22:00:27.805582 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2575w" Feb 26 22:00:27 crc kubenswrapper[4910]: I0226 22:00:27.960914 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hvjfk" Feb 26 22:00:27 crc kubenswrapper[4910]: I0226 22:00:27.961759 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hvjfk" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.029116 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hvjfk" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.524587 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.531151 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.566895 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt"] Feb 26 22:00:28 crc kubenswrapper[4910]: E0226 22:00:28.567253 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d48a0895-18de-4009-8bd4-330940486c20" containerName="route-controller-manager" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.567276 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="d48a0895-18de-4009-8bd4-330940486c20" containerName="route-controller-manager" Feb 26 22:00:28 crc kubenswrapper[4910]: E0226 22:00:28.567300 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="962a42b1-d418-4ffb-97a2-88ee898dba75" containerName="controller-manager" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.567314 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="962a42b1-d418-4ffb-97a2-88ee898dba75" containerName="controller-manager" Feb 26 22:00:28 crc kubenswrapper[4910]: E0226 22:00:28.567329 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8a3a08a-bc54-40b0-a3c1-3da45c9003d3" containerName="registry-server" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.567340 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8a3a08a-bc54-40b0-a3c1-3da45c9003d3" containerName="registry-server" Feb 26 22:00:28 crc kubenswrapper[4910]: E0226 22:00:28.567364 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8a3a08a-bc54-40b0-a3c1-3da45c9003d3" containerName="extract-content" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.567377 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8a3a08a-bc54-40b0-a3c1-3da45c9003d3" containerName="extract-content" Feb 26 22:00:28 crc kubenswrapper[4910]: E0226 22:00:28.567397 4910 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="b8a3a08a-bc54-40b0-a3c1-3da45c9003d3" containerName="extract-utilities" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.567409 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8a3a08a-bc54-40b0-a3c1-3da45c9003d3" containerName="extract-utilities" Feb 26 22:00:28 crc kubenswrapper[4910]: E0226 22:00:28.567423 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8fe4d9f-ec8c-4d29-a7e6-1534270d5d05" containerName="oc" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.567435 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8fe4d9f-ec8c-4d29-a7e6-1534270d5d05" containerName="oc" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.567606 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8a3a08a-bc54-40b0-a3c1-3da45c9003d3" containerName="registry-server" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.567639 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="962a42b1-d418-4ffb-97a2-88ee898dba75" containerName="controller-manager" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.567666 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="d48a0895-18de-4009-8bd4-330940486c20" containerName="route-controller-manager" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.567687 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8fe4d9f-ec8c-4d29-a7e6-1534270d5d05" containerName="oc" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.568409 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.606182 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt"] Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.653869 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq" event={"ID":"d48a0895-18de-4009-8bd4-330940486c20","Type":"ContainerDied","Data":"04a480bc17231cfe8524bdf65f67072d5e494baf383c78f6e0d272112912bfe3"} Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.653924 4910 scope.go:117] "RemoveContainer" containerID="29a8d9a5961646383f86ecb8a5303ce68bf18e190dc9e6263524ddd3f0f74e08" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.654050 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.656745 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9" event={"ID":"962a42b1-d418-4ffb-97a2-88ee898dba75","Type":"ContainerDied","Data":"45e4d537d28db432b90353d99caaef341a82b7413e849321419c848a3de5310f"} Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.656835 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.660597 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerStarted","Data":"64124dfbd3fd0964011ae7c39d92177145f45ba34946932fdb21e2ba093e20f6"} Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.672194 4910 scope.go:117] "RemoveContainer" containerID="456038e26d7386a772f38b4795dba7fedbcdde6dd0895dd463be136f3714347f" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.696694 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/962a42b1-d418-4ffb-97a2-88ee898dba75-serving-cert\") pod \"962a42b1-d418-4ffb-97a2-88ee898dba75\" (UID: \"962a42b1-d418-4ffb-97a2-88ee898dba75\") " Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.696768 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d48a0895-18de-4009-8bd4-330940486c20-config\") pod \"d48a0895-18de-4009-8bd4-330940486c20\" (UID: \"d48a0895-18de-4009-8bd4-330940486c20\") " Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.696801 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/962a42b1-d418-4ffb-97a2-88ee898dba75-proxy-ca-bundles\") pod \"962a42b1-d418-4ffb-97a2-88ee898dba75\" (UID: \"962a42b1-d418-4ffb-97a2-88ee898dba75\") " Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.696826 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-446hw\" (UniqueName: \"kubernetes.io/projected/962a42b1-d418-4ffb-97a2-88ee898dba75-kube-api-access-446hw\") pod \"962a42b1-d418-4ffb-97a2-88ee898dba75\" (UID: 
\"962a42b1-d418-4ffb-97a2-88ee898dba75\") " Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.696914 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/962a42b1-d418-4ffb-97a2-88ee898dba75-client-ca\") pod \"962a42b1-d418-4ffb-97a2-88ee898dba75\" (UID: \"962a42b1-d418-4ffb-97a2-88ee898dba75\") " Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.696938 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/962a42b1-d418-4ffb-97a2-88ee898dba75-config\") pod \"962a42b1-d418-4ffb-97a2-88ee898dba75\" (UID: \"962a42b1-d418-4ffb-97a2-88ee898dba75\") " Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.696972 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d48a0895-18de-4009-8bd4-330940486c20-client-ca\") pod \"d48a0895-18de-4009-8bd4-330940486c20\" (UID: \"d48a0895-18de-4009-8bd4-330940486c20\") " Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.697001 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4glzd\" (UniqueName: \"kubernetes.io/projected/d48a0895-18de-4009-8bd4-330940486c20-kube-api-access-4glzd\") pod \"d48a0895-18de-4009-8bd4-330940486c20\" (UID: \"d48a0895-18de-4009-8bd4-330940486c20\") " Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.697023 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d48a0895-18de-4009-8bd4-330940486c20-serving-cert\") pod \"d48a0895-18de-4009-8bd4-330940486c20\" (UID: \"d48a0895-18de-4009-8bd4-330940486c20\") " Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.697236 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52mmw\" (UniqueName: 
\"kubernetes.io/projected/ff0bd3cc-237d-4953-9a02-8b479f59b01b-kube-api-access-52mmw\") pod \"route-controller-manager-6cdb564b9d-cvkvt\" (UID: \"ff0bd3cc-237d-4953-9a02-8b479f59b01b\") " pod="openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.697303 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff0bd3cc-237d-4953-9a02-8b479f59b01b-serving-cert\") pod \"route-controller-manager-6cdb564b9d-cvkvt\" (UID: \"ff0bd3cc-237d-4953-9a02-8b479f59b01b\") " pod="openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.697350 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff0bd3cc-237d-4953-9a02-8b479f59b01b-config\") pod \"route-controller-manager-6cdb564b9d-cvkvt\" (UID: \"ff0bd3cc-237d-4953-9a02-8b479f59b01b\") " pod="openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.697377 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff0bd3cc-237d-4953-9a02-8b479f59b01b-client-ca\") pod \"route-controller-manager-6cdb564b9d-cvkvt\" (UID: \"ff0bd3cc-237d-4953-9a02-8b479f59b01b\") " pod="openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.697383 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2575w" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.698574 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/d48a0895-18de-4009-8bd4-330940486c20-config" (OuterVolumeSpecName: "config") pod "d48a0895-18de-4009-8bd4-330940486c20" (UID: "d48a0895-18de-4009-8bd4-330940486c20"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.698636 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d48a0895-18de-4009-8bd4-330940486c20-client-ca" (OuterVolumeSpecName: "client-ca") pod "d48a0895-18de-4009-8bd4-330940486c20" (UID: "d48a0895-18de-4009-8bd4-330940486c20"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.698817 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/962a42b1-d418-4ffb-97a2-88ee898dba75-client-ca" (OuterVolumeSpecName: "client-ca") pod "962a42b1-d418-4ffb-97a2-88ee898dba75" (UID: "962a42b1-d418-4ffb-97a2-88ee898dba75"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.698872 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/962a42b1-d418-4ffb-97a2-88ee898dba75-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "962a42b1-d418-4ffb-97a2-88ee898dba75" (UID: "962a42b1-d418-4ffb-97a2-88ee898dba75"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.698929 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/962a42b1-d418-4ffb-97a2-88ee898dba75-config" (OuterVolumeSpecName: "config") pod "962a42b1-d418-4ffb-97a2-88ee898dba75" (UID: "962a42b1-d418-4ffb-97a2-88ee898dba75"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.702581 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d48a0895-18de-4009-8bd4-330940486c20-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d48a0895-18de-4009-8bd4-330940486c20" (UID: "d48a0895-18de-4009-8bd4-330940486c20"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.704097 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/962a42b1-d418-4ffb-97a2-88ee898dba75-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "962a42b1-d418-4ffb-97a2-88ee898dba75" (UID: "962a42b1-d418-4ffb-97a2-88ee898dba75"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.704643 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/962a42b1-d418-4ffb-97a2-88ee898dba75-kube-api-access-446hw" (OuterVolumeSpecName: "kube-api-access-446hw") pod "962a42b1-d418-4ffb-97a2-88ee898dba75" (UID: "962a42b1-d418-4ffb-97a2-88ee898dba75"). InnerVolumeSpecName "kube-api-access-446hw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.707652 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d48a0895-18de-4009-8bd4-330940486c20-kube-api-access-4glzd" (OuterVolumeSpecName: "kube-api-access-4glzd") pod "d48a0895-18de-4009-8bd4-330940486c20" (UID: "d48a0895-18de-4009-8bd4-330940486c20"). InnerVolumeSpecName "kube-api-access-4glzd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.711703 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hvjfk" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.798766 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52mmw\" (UniqueName: \"kubernetes.io/projected/ff0bd3cc-237d-4953-9a02-8b479f59b01b-kube-api-access-52mmw\") pod \"route-controller-manager-6cdb564b9d-cvkvt\" (UID: \"ff0bd3cc-237d-4953-9a02-8b479f59b01b\") " pod="openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.798899 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff0bd3cc-237d-4953-9a02-8b479f59b01b-serving-cert\") pod \"route-controller-manager-6cdb564b9d-cvkvt\" (UID: \"ff0bd3cc-237d-4953-9a02-8b479f59b01b\") " pod="openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.798966 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff0bd3cc-237d-4953-9a02-8b479f59b01b-config\") pod \"route-controller-manager-6cdb564b9d-cvkvt\" (UID: \"ff0bd3cc-237d-4953-9a02-8b479f59b01b\") " pod="openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.799001 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff0bd3cc-237d-4953-9a02-8b479f59b01b-client-ca\") pod \"route-controller-manager-6cdb564b9d-cvkvt\" (UID: \"ff0bd3cc-237d-4953-9a02-8b479f59b01b\") " pod="openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt" Feb 26 
22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.799094 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d48a0895-18de-4009-8bd4-330940486c20-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.799109 4910 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/962a42b1-d418-4ffb-97a2-88ee898dba75-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.799123 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-446hw\" (UniqueName: \"kubernetes.io/projected/962a42b1-d418-4ffb-97a2-88ee898dba75-kube-api-access-446hw\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.799134 4910 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/962a42b1-d418-4ffb-97a2-88ee898dba75-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.799144 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/962a42b1-d418-4ffb-97a2-88ee898dba75-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.799173 4910 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d48a0895-18de-4009-8bd4-330940486c20-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.799184 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4glzd\" (UniqueName: \"kubernetes.io/projected/d48a0895-18de-4009-8bd4-330940486c20-kube-api-access-4glzd\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.799195 4910 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/d48a0895-18de-4009-8bd4-330940486c20-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.799205 4910 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/962a42b1-d418-4ffb-97a2-88ee898dba75-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.801122 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff0bd3cc-237d-4953-9a02-8b479f59b01b-client-ca\") pod \"route-controller-manager-6cdb564b9d-cvkvt\" (UID: \"ff0bd3cc-237d-4953-9a02-8b479f59b01b\") " pod="openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.802306 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff0bd3cc-237d-4953-9a02-8b479f59b01b-serving-cert\") pod \"route-controller-manager-6cdb564b9d-cvkvt\" (UID: \"ff0bd3cc-237d-4953-9a02-8b479f59b01b\") " pod="openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.802892 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff0bd3cc-237d-4953-9a02-8b479f59b01b-config\") pod \"route-controller-manager-6cdb564b9d-cvkvt\" (UID: \"ff0bd3cc-237d-4953-9a02-8b479f59b01b\") " pod="openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.815957 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52mmw\" (UniqueName: \"kubernetes.io/projected/ff0bd3cc-237d-4953-9a02-8b479f59b01b-kube-api-access-52mmw\") pod \"route-controller-manager-6cdb564b9d-cvkvt\" (UID: 
\"ff0bd3cc-237d-4953-9a02-8b479f59b01b\") " pod="openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.919276 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt" Feb 26 22:00:28 crc kubenswrapper[4910]: I0226 22:00:28.994879 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq"] Feb 26 22:00:29 crc kubenswrapper[4910]: I0226 22:00:29.005339 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56869bdf68-29scq"] Feb 26 22:00:29 crc kubenswrapper[4910]: I0226 22:00:29.019290 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9"] Feb 26 22:00:29 crc kubenswrapper[4910]: I0226 22:00:29.020894 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7f96dfc5f5-6glx9"] Feb 26 22:00:29 crc kubenswrapper[4910]: I0226 22:00:29.139017 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt"] Feb 26 22:00:29 crc kubenswrapper[4910]: I0226 22:00:29.667672 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt" event={"ID":"ff0bd3cc-237d-4953-9a02-8b479f59b01b","Type":"ContainerStarted","Data":"d01bdb6bdae5a6c110d3944472573e7970ec5fed5faa45170ad68bef7bcf39e1"} Feb 26 22:00:29 crc kubenswrapper[4910]: I0226 22:00:29.906745 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="962a42b1-d418-4ffb-97a2-88ee898dba75" path="/var/lib/kubelet/pods/962a42b1-d418-4ffb-97a2-88ee898dba75/volumes" Feb 26 22:00:29 crc kubenswrapper[4910]: I0226 
22:00:29.907782 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d48a0895-18de-4009-8bd4-330940486c20" path="/var/lib/kubelet/pods/d48a0895-18de-4009-8bd4-330940486c20/volumes" Feb 26 22:00:30 crc kubenswrapper[4910]: I0226 22:00:30.026313 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hvjfk"] Feb 26 22:00:30 crc kubenswrapper[4910]: I0226 22:00:30.229746 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2575w"] Feb 26 22:00:30 crc kubenswrapper[4910]: I0226 22:00:30.680073 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt" event={"ID":"ff0bd3cc-237d-4953-9a02-8b479f59b01b","Type":"ContainerStarted","Data":"fd2068e9a557867105ecc5cd6db11fb33db89afcf62c3536da302599119293d0"} Feb 26 22:00:30 crc kubenswrapper[4910]: I0226 22:00:30.680580 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2575w" podUID="f51dcf73-fbc7-4a90-849c-448ed9e540f9" containerName="registry-server" containerID="cri-o://7e057eff98fd5ed973462924b24cc3e8e21b5a9a648dbf0dab4e0f9dae9fc7fb" gracePeriod=2 Feb 26 22:00:30 crc kubenswrapper[4910]: I0226 22:00:30.740418 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt" podStartSLOduration=4.740404027 podStartE2EDuration="4.740404027s" podCreationTimestamp="2026-02-26 22:00:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:00:30.734269121 +0000 UTC m=+315.813759672" watchObservedRunningTime="2026-02-26 22:00:30.740404027 +0000 UTC m=+315.819894568" Feb 26 22:00:30 crc kubenswrapper[4910]: I0226 22:00:30.741931 4910 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nwths" Feb 26 22:00:30 crc kubenswrapper[4910]: I0226 22:00:30.741979 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nwths" Feb 26 22:00:30 crc kubenswrapper[4910]: I0226 22:00:30.819795 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nwths" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.089418 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2575w" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.153210 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rsxdq" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.153284 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rsxdq" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.205315 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rsxdq" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.233038 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f51dcf73-fbc7-4a90-849c-448ed9e540f9-utilities\") pod \"f51dcf73-fbc7-4a90-849c-448ed9e540f9\" (UID: \"f51dcf73-fbc7-4a90-849c-448ed9e540f9\") " Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.233814 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f51dcf73-fbc7-4a90-849c-448ed9e540f9-utilities" (OuterVolumeSpecName: "utilities") pod "f51dcf73-fbc7-4a90-849c-448ed9e540f9" (UID: "f51dcf73-fbc7-4a90-849c-448ed9e540f9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.234084 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xs9j\" (UniqueName: \"kubernetes.io/projected/f51dcf73-fbc7-4a90-849c-448ed9e540f9-kube-api-access-5xs9j\") pod \"f51dcf73-fbc7-4a90-849c-448ed9e540f9\" (UID: \"f51dcf73-fbc7-4a90-849c-448ed9e540f9\") " Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.234212 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f51dcf73-fbc7-4a90-849c-448ed9e540f9-catalog-content\") pod \"f51dcf73-fbc7-4a90-849c-448ed9e540f9\" (UID: \"f51dcf73-fbc7-4a90-849c-448ed9e540f9\") " Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.234500 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f51dcf73-fbc7-4a90-849c-448ed9e540f9-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.239316 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f51dcf73-fbc7-4a90-849c-448ed9e540f9-kube-api-access-5xs9j" (OuterVolumeSpecName: "kube-api-access-5xs9j") pod "f51dcf73-fbc7-4a90-849c-448ed9e540f9" (UID: "f51dcf73-fbc7-4a90-849c-448ed9e540f9"). InnerVolumeSpecName "kube-api-access-5xs9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.319696 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f51dcf73-fbc7-4a90-849c-448ed9e540f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f51dcf73-fbc7-4a90-849c-448ed9e540f9" (UID: "f51dcf73-fbc7-4a90-849c-448ed9e540f9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.336040 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xs9j\" (UniqueName: \"kubernetes.io/projected/f51dcf73-fbc7-4a90-849c-448ed9e540f9-kube-api-access-5xs9j\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.336103 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f51dcf73-fbc7-4a90-849c-448ed9e540f9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.402694 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c45c9bfb9-x48jj"] Feb 26 22:00:31 crc kubenswrapper[4910]: E0226 22:00:31.403063 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51dcf73-fbc7-4a90-849c-448ed9e540f9" containerName="extract-utilities" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.403093 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51dcf73-fbc7-4a90-849c-448ed9e540f9" containerName="extract-utilities" Feb 26 22:00:31 crc kubenswrapper[4910]: E0226 22:00:31.403114 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51dcf73-fbc7-4a90-849c-448ed9e540f9" containerName="extract-content" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.403127 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51dcf73-fbc7-4a90-849c-448ed9e540f9" containerName="extract-content" Feb 26 22:00:31 crc kubenswrapper[4910]: E0226 22:00:31.403141 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51dcf73-fbc7-4a90-849c-448ed9e540f9" containerName="registry-server" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.403188 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51dcf73-fbc7-4a90-849c-448ed9e540f9" containerName="registry-server" Feb 26 
22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.403427 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="f51dcf73-fbc7-4a90-849c-448ed9e540f9" containerName="registry-server" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.404045 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c45c9bfb9-x48jj" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.411847 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.412333 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.412474 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.412588 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.412975 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.413697 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.420673 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.420670 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c45c9bfb9-x48jj"] Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.538104 4910 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f794d15-85f3-4ba3-b722-5d84e523e33a-config\") pod \"controller-manager-c45c9bfb9-x48jj\" (UID: \"1f794d15-85f3-4ba3-b722-5d84e523e33a\") " pod="openshift-controller-manager/controller-manager-c45c9bfb9-x48jj" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.538279 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp9dr\" (UniqueName: \"kubernetes.io/projected/1f794d15-85f3-4ba3-b722-5d84e523e33a-kube-api-access-lp9dr\") pod \"controller-manager-c45c9bfb9-x48jj\" (UID: \"1f794d15-85f3-4ba3-b722-5d84e523e33a\") " pod="openshift-controller-manager/controller-manager-c45c9bfb9-x48jj" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.538422 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f794d15-85f3-4ba3-b722-5d84e523e33a-serving-cert\") pod \"controller-manager-c45c9bfb9-x48jj\" (UID: \"1f794d15-85f3-4ba3-b722-5d84e523e33a\") " pod="openshift-controller-manager/controller-manager-c45c9bfb9-x48jj" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.538471 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f794d15-85f3-4ba3-b722-5d84e523e33a-client-ca\") pod \"controller-manager-c45c9bfb9-x48jj\" (UID: \"1f794d15-85f3-4ba3-b722-5d84e523e33a\") " pod="openshift-controller-manager/controller-manager-c45c9bfb9-x48jj" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.538564 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f794d15-85f3-4ba3-b722-5d84e523e33a-proxy-ca-bundles\") pod \"controller-manager-c45c9bfb9-x48jj\" (UID: 
\"1f794d15-85f3-4ba3-b722-5d84e523e33a\") " pod="openshift-controller-manager/controller-manager-c45c9bfb9-x48jj" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.639895 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp9dr\" (UniqueName: \"kubernetes.io/projected/1f794d15-85f3-4ba3-b722-5d84e523e33a-kube-api-access-lp9dr\") pod \"controller-manager-c45c9bfb9-x48jj\" (UID: \"1f794d15-85f3-4ba3-b722-5d84e523e33a\") " pod="openshift-controller-manager/controller-manager-c45c9bfb9-x48jj" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.639961 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f794d15-85f3-4ba3-b722-5d84e523e33a-serving-cert\") pod \"controller-manager-c45c9bfb9-x48jj\" (UID: \"1f794d15-85f3-4ba3-b722-5d84e523e33a\") " pod="openshift-controller-manager/controller-manager-c45c9bfb9-x48jj" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.639982 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f794d15-85f3-4ba3-b722-5d84e523e33a-client-ca\") pod \"controller-manager-c45c9bfb9-x48jj\" (UID: \"1f794d15-85f3-4ba3-b722-5d84e523e33a\") " pod="openshift-controller-manager/controller-manager-c45c9bfb9-x48jj" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.640001 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f794d15-85f3-4ba3-b722-5d84e523e33a-proxy-ca-bundles\") pod \"controller-manager-c45c9bfb9-x48jj\" (UID: \"1f794d15-85f3-4ba3-b722-5d84e523e33a\") " pod="openshift-controller-manager/controller-manager-c45c9bfb9-x48jj" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.640098 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1f794d15-85f3-4ba3-b722-5d84e523e33a-config\") pod \"controller-manager-c45c9bfb9-x48jj\" (UID: \"1f794d15-85f3-4ba3-b722-5d84e523e33a\") " pod="openshift-controller-manager/controller-manager-c45c9bfb9-x48jj" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.642241 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f794d15-85f3-4ba3-b722-5d84e523e33a-client-ca\") pod \"controller-manager-c45c9bfb9-x48jj\" (UID: \"1f794d15-85f3-4ba3-b722-5d84e523e33a\") " pod="openshift-controller-manager/controller-manager-c45c9bfb9-x48jj" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.643626 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f794d15-85f3-4ba3-b722-5d84e523e33a-proxy-ca-bundles\") pod \"controller-manager-c45c9bfb9-x48jj\" (UID: \"1f794d15-85f3-4ba3-b722-5d84e523e33a\") " pod="openshift-controller-manager/controller-manager-c45c9bfb9-x48jj" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.643754 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f794d15-85f3-4ba3-b722-5d84e523e33a-config\") pod \"controller-manager-c45c9bfb9-x48jj\" (UID: \"1f794d15-85f3-4ba3-b722-5d84e523e33a\") " pod="openshift-controller-manager/controller-manager-c45c9bfb9-x48jj" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.646578 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f794d15-85f3-4ba3-b722-5d84e523e33a-serving-cert\") pod \"controller-manager-c45c9bfb9-x48jj\" (UID: \"1f794d15-85f3-4ba3-b722-5d84e523e33a\") " pod="openshift-controller-manager/controller-manager-c45c9bfb9-x48jj" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.659678 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lp9dr\" (UniqueName: \"kubernetes.io/projected/1f794d15-85f3-4ba3-b722-5d84e523e33a-kube-api-access-lp9dr\") pod \"controller-manager-c45c9bfb9-x48jj\" (UID: \"1f794d15-85f3-4ba3-b722-5d84e523e33a\") " pod="openshift-controller-manager/controller-manager-c45c9bfb9-x48jj" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.693320 4910 generic.go:334] "Generic (PLEG): container finished" podID="f51dcf73-fbc7-4a90-849c-448ed9e540f9" containerID="7e057eff98fd5ed973462924b24cc3e8e21b5a9a648dbf0dab4e0f9dae9fc7fb" exitCode=0 Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.693502 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2575w" event={"ID":"f51dcf73-fbc7-4a90-849c-448ed9e540f9","Type":"ContainerDied","Data":"7e057eff98fd5ed973462924b24cc3e8e21b5a9a648dbf0dab4e0f9dae9fc7fb"} Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.693610 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2575w" event={"ID":"f51dcf73-fbc7-4a90-849c-448ed9e540f9","Type":"ContainerDied","Data":"16fdf5e6405c773cd198c50c65615aacf557b7570fde2d8c6630cc6334a4ee03"} Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.693648 4910 scope.go:117] "RemoveContainer" containerID="7e057eff98fd5ed973462924b24cc3e8e21b5a9a648dbf0dab4e0f9dae9fc7fb" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.693703 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hvjfk" podUID="84c1eb30-f57d-4387-bc3f-deae490cdc42" containerName="registry-server" containerID="cri-o://4e30195ec33c432c4d2440f053d6a86ede742a12a6852c231de09dfd62f77c11" gracePeriod=2 Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.695767 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 
22:00:31.696038 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2575w" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.703025 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.719853 4910 scope.go:117] "RemoveContainer" containerID="413c147fd221105ed62966f3c87463820b9df31264ff5cba919112e3c08e1855" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.745388 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c45c9bfb9-x48jj" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.751745 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2575w"] Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.757108 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2575w"] Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.770459 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rsxdq" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.778363 4910 scope.go:117] "RemoveContainer" containerID="1d4e86242deaf04a5b239963c9738b992b2d622834721f9c9ec5eaf0dff3376d" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.788293 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nwths" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.804045 4910 scope.go:117] "RemoveContainer" containerID="7e057eff98fd5ed973462924b24cc3e8e21b5a9a648dbf0dab4e0f9dae9fc7fb" Feb 26 22:00:31 crc kubenswrapper[4910]: E0226 22:00:31.804484 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"7e057eff98fd5ed973462924b24cc3e8e21b5a9a648dbf0dab4e0f9dae9fc7fb\": container with ID starting with 7e057eff98fd5ed973462924b24cc3e8e21b5a9a648dbf0dab4e0f9dae9fc7fb not found: ID does not exist" containerID="7e057eff98fd5ed973462924b24cc3e8e21b5a9a648dbf0dab4e0f9dae9fc7fb" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.804522 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e057eff98fd5ed973462924b24cc3e8e21b5a9a648dbf0dab4e0f9dae9fc7fb"} err="failed to get container status \"7e057eff98fd5ed973462924b24cc3e8e21b5a9a648dbf0dab4e0f9dae9fc7fb\": rpc error: code = NotFound desc = could not find container \"7e057eff98fd5ed973462924b24cc3e8e21b5a9a648dbf0dab4e0f9dae9fc7fb\": container with ID starting with 7e057eff98fd5ed973462924b24cc3e8e21b5a9a648dbf0dab4e0f9dae9fc7fb not found: ID does not exist" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.804547 4910 scope.go:117] "RemoveContainer" containerID="413c147fd221105ed62966f3c87463820b9df31264ff5cba919112e3c08e1855" Feb 26 22:00:31 crc kubenswrapper[4910]: E0226 22:00:31.805189 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"413c147fd221105ed62966f3c87463820b9df31264ff5cba919112e3c08e1855\": container with ID starting with 413c147fd221105ed62966f3c87463820b9df31264ff5cba919112e3c08e1855 not found: ID does not exist" containerID="413c147fd221105ed62966f3c87463820b9df31264ff5cba919112e3c08e1855" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.805231 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"413c147fd221105ed62966f3c87463820b9df31264ff5cba919112e3c08e1855"} err="failed to get container status \"413c147fd221105ed62966f3c87463820b9df31264ff5cba919112e3c08e1855\": rpc error: code = NotFound desc = could not find container 
\"413c147fd221105ed62966f3c87463820b9df31264ff5cba919112e3c08e1855\": container with ID starting with 413c147fd221105ed62966f3c87463820b9df31264ff5cba919112e3c08e1855 not found: ID does not exist" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.805261 4910 scope.go:117] "RemoveContainer" containerID="1d4e86242deaf04a5b239963c9738b992b2d622834721f9c9ec5eaf0dff3376d" Feb 26 22:00:31 crc kubenswrapper[4910]: E0226 22:00:31.805667 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d4e86242deaf04a5b239963c9738b992b2d622834721f9c9ec5eaf0dff3376d\": container with ID starting with 1d4e86242deaf04a5b239963c9738b992b2d622834721f9c9ec5eaf0dff3376d not found: ID does not exist" containerID="1d4e86242deaf04a5b239963c9738b992b2d622834721f9c9ec5eaf0dff3376d" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.805701 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d4e86242deaf04a5b239963c9738b992b2d622834721f9c9ec5eaf0dff3376d"} err="failed to get container status \"1d4e86242deaf04a5b239963c9738b992b2d622834721f9c9ec5eaf0dff3376d\": rpc error: code = NotFound desc = could not find container \"1d4e86242deaf04a5b239963c9738b992b2d622834721f9c9ec5eaf0dff3376d\": container with ID starting with 1d4e86242deaf04a5b239963c9738b992b2d622834721f9c9ec5eaf0dff3376d not found: ID does not exist" Feb 26 22:00:31 crc kubenswrapper[4910]: I0226 22:00:31.908931 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f51dcf73-fbc7-4a90-849c-448ed9e540f9" path="/var/lib/kubelet/pods/f51dcf73-fbc7-4a90-849c-448ed9e540f9/volumes" Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.130011 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hvjfk" Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.184452 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c45c9bfb9-x48jj"] Feb 26 22:00:32 crc kubenswrapper[4910]: W0226 22:00:32.195229 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f794d15_85f3_4ba3_b722_5d84e523e33a.slice/crio-0e06c74ea37ae970a1ec34e3c89165a1d4603d7407f38f156638af3adb552e48 WatchSource:0}: Error finding container 0e06c74ea37ae970a1ec34e3c89165a1d4603d7407f38f156638af3adb552e48: Status 404 returned error can't find the container with id 0e06c74ea37ae970a1ec34e3c89165a1d4603d7407f38f156638af3adb552e48 Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.251647 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv27g\" (UniqueName: \"kubernetes.io/projected/84c1eb30-f57d-4387-bc3f-deae490cdc42-kube-api-access-rv27g\") pod \"84c1eb30-f57d-4387-bc3f-deae490cdc42\" (UID: \"84c1eb30-f57d-4387-bc3f-deae490cdc42\") " Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.251748 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84c1eb30-f57d-4387-bc3f-deae490cdc42-utilities\") pod \"84c1eb30-f57d-4387-bc3f-deae490cdc42\" (UID: \"84c1eb30-f57d-4387-bc3f-deae490cdc42\") " Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.251788 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84c1eb30-f57d-4387-bc3f-deae490cdc42-catalog-content\") pod \"84c1eb30-f57d-4387-bc3f-deae490cdc42\" (UID: \"84c1eb30-f57d-4387-bc3f-deae490cdc42\") " Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.252764 4910 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/84c1eb30-f57d-4387-bc3f-deae490cdc42-utilities" (OuterVolumeSpecName: "utilities") pod "84c1eb30-f57d-4387-bc3f-deae490cdc42" (UID: "84c1eb30-f57d-4387-bc3f-deae490cdc42"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.257041 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84c1eb30-f57d-4387-bc3f-deae490cdc42-kube-api-access-rv27g" (OuterVolumeSpecName: "kube-api-access-rv27g") pod "84c1eb30-f57d-4387-bc3f-deae490cdc42" (UID: "84c1eb30-f57d-4387-bc3f-deae490cdc42"). InnerVolumeSpecName "kube-api-access-rv27g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.302271 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84c1eb30-f57d-4387-bc3f-deae490cdc42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84c1eb30-f57d-4387-bc3f-deae490cdc42" (UID: "84c1eb30-f57d-4387-bc3f-deae490cdc42"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.353227 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84c1eb30-f57d-4387-bc3f-deae490cdc42-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.353287 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84c1eb30-f57d-4387-bc3f-deae490cdc42-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.353308 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv27g\" (UniqueName: \"kubernetes.io/projected/84c1eb30-f57d-4387-bc3f-deae490cdc42-kube-api-access-rv27g\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.699689 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c45c9bfb9-x48jj" event={"ID":"1f794d15-85f3-4ba3-b722-5d84e523e33a","Type":"ContainerStarted","Data":"60bfd233ca0e327853a836f90e3ad27cde708862f407927b398601e2ef1ed5d6"} Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.699731 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c45c9bfb9-x48jj" event={"ID":"1f794d15-85f3-4ba3-b722-5d84e523e33a","Type":"ContainerStarted","Data":"0e06c74ea37ae970a1ec34e3c89165a1d4603d7407f38f156638af3adb552e48"} Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.699941 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-c45c9bfb9-x48jj" Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.703202 4910 generic.go:334] "Generic (PLEG): container finished" podID="84c1eb30-f57d-4387-bc3f-deae490cdc42" containerID="4e30195ec33c432c4d2440f053d6a86ede742a12a6852c231de09dfd62f77c11" 
exitCode=0 Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.703252 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvjfk" event={"ID":"84c1eb30-f57d-4387-bc3f-deae490cdc42","Type":"ContainerDied","Data":"4e30195ec33c432c4d2440f053d6a86ede742a12a6852c231de09dfd62f77c11"} Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.703296 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvjfk" event={"ID":"84c1eb30-f57d-4387-bc3f-deae490cdc42","Type":"ContainerDied","Data":"c23e35e9beec3bf1cb1c7131e3ac24cf78ba9e46208494904a344d9dcccb71f8"} Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.703318 4910 scope.go:117] "RemoveContainer" containerID="4e30195ec33c432c4d2440f053d6a86ede742a12a6852c231de09dfd62f77c11" Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.703262 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hvjfk" Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.706182 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-c45c9bfb9-x48jj" Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.717611 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-c45c9bfb9-x48jj" podStartSLOduration=6.717590967 podStartE2EDuration="6.717590967s" podCreationTimestamp="2026-02-26 22:00:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:00:32.713022464 +0000 UTC m=+317.792513025" watchObservedRunningTime="2026-02-26 22:00:32.717590967 +0000 UTC m=+317.797081498" Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.724305 4910 scope.go:117] "RemoveContainer" 
containerID="d35971df923e267241ee3732f4621fe6228402a782b7a739f1d8ce51c946b674" Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.763806 4910 scope.go:117] "RemoveContainer" containerID="c6a83ab5b5dfa996ba303baec18a016d1c0905239e812d8c95e5e934d344b037" Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.777629 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hvjfk"] Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.780857 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hvjfk"] Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.791037 4910 scope.go:117] "RemoveContainer" containerID="4e30195ec33c432c4d2440f053d6a86ede742a12a6852c231de09dfd62f77c11" Feb 26 22:00:32 crc kubenswrapper[4910]: E0226 22:00:32.791547 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e30195ec33c432c4d2440f053d6a86ede742a12a6852c231de09dfd62f77c11\": container with ID starting with 4e30195ec33c432c4d2440f053d6a86ede742a12a6852c231de09dfd62f77c11 not found: ID does not exist" containerID="4e30195ec33c432c4d2440f053d6a86ede742a12a6852c231de09dfd62f77c11" Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.791590 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e30195ec33c432c4d2440f053d6a86ede742a12a6852c231de09dfd62f77c11"} err="failed to get container status \"4e30195ec33c432c4d2440f053d6a86ede742a12a6852c231de09dfd62f77c11\": rpc error: code = NotFound desc = could not find container \"4e30195ec33c432c4d2440f053d6a86ede742a12a6852c231de09dfd62f77c11\": container with ID starting with 4e30195ec33c432c4d2440f053d6a86ede742a12a6852c231de09dfd62f77c11 not found: ID does not exist" Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.791617 4910 scope.go:117] "RemoveContainer" 
containerID="d35971df923e267241ee3732f4621fe6228402a782b7a739f1d8ce51c946b674" Feb 26 22:00:32 crc kubenswrapper[4910]: E0226 22:00:32.797505 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d35971df923e267241ee3732f4621fe6228402a782b7a739f1d8ce51c946b674\": container with ID starting with d35971df923e267241ee3732f4621fe6228402a782b7a739f1d8ce51c946b674 not found: ID does not exist" containerID="d35971df923e267241ee3732f4621fe6228402a782b7a739f1d8ce51c946b674" Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.797723 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d35971df923e267241ee3732f4621fe6228402a782b7a739f1d8ce51c946b674"} err="failed to get container status \"d35971df923e267241ee3732f4621fe6228402a782b7a739f1d8ce51c946b674\": rpc error: code = NotFound desc = could not find container \"d35971df923e267241ee3732f4621fe6228402a782b7a739f1d8ce51c946b674\": container with ID starting with d35971df923e267241ee3732f4621fe6228402a782b7a739f1d8ce51c946b674 not found: ID does not exist" Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.797741 4910 scope.go:117] "RemoveContainer" containerID="c6a83ab5b5dfa996ba303baec18a016d1c0905239e812d8c95e5e934d344b037" Feb 26 22:00:32 crc kubenswrapper[4910]: E0226 22:00:32.803746 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6a83ab5b5dfa996ba303baec18a016d1c0905239e812d8c95e5e934d344b037\": container with ID starting with c6a83ab5b5dfa996ba303baec18a016d1c0905239e812d8c95e5e934d344b037 not found: ID does not exist" containerID="c6a83ab5b5dfa996ba303baec18a016d1c0905239e812d8c95e5e934d344b037" Feb 26 22:00:32 crc kubenswrapper[4910]: I0226 22:00:32.803770 4910 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c6a83ab5b5dfa996ba303baec18a016d1c0905239e812d8c95e5e934d344b037"} err="failed to get container status \"c6a83ab5b5dfa996ba303baec18a016d1c0905239e812d8c95e5e934d344b037\": rpc error: code = NotFound desc = could not find container \"c6a83ab5b5dfa996ba303baec18a016d1c0905239e812d8c95e5e934d344b037\": container with ID starting with c6a83ab5b5dfa996ba303baec18a016d1c0905239e812d8c95e5e934d344b037 not found: ID does not exist" Feb 26 22:00:33 crc kubenswrapper[4910]: I0226 22:00:33.911873 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84c1eb30-f57d-4387-bc3f-deae490cdc42" path="/var/lib/kubelet/pods/84c1eb30-f57d-4387-bc3f-deae490cdc42/volumes" Feb 26 22:00:34 crc kubenswrapper[4910]: I0226 22:00:34.631667 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rsxdq"] Feb 26 22:00:34 crc kubenswrapper[4910]: I0226 22:00:34.632060 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rsxdq" podUID="ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8" containerName="registry-server" containerID="cri-o://6124711e30484bb557ed71d90db751ba66cd33d1877c877abe5cb962ded741a7" gracePeriod=2 Feb 26 22:00:35 crc kubenswrapper[4910]: I0226 22:00:35.141940 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rsxdq" Feb 26 22:00:35 crc kubenswrapper[4910]: I0226 22:00:35.198975 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scs7d\" (UniqueName: \"kubernetes.io/projected/ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8-kube-api-access-scs7d\") pod \"ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8\" (UID: \"ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8\") " Feb 26 22:00:35 crc kubenswrapper[4910]: I0226 22:00:35.199028 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8-catalog-content\") pod \"ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8\" (UID: \"ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8\") " Feb 26 22:00:35 crc kubenswrapper[4910]: I0226 22:00:35.199054 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8-utilities\") pod \"ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8\" (UID: \"ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8\") " Feb 26 22:00:35 crc kubenswrapper[4910]: I0226 22:00:35.200770 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8-utilities" (OuterVolumeSpecName: "utilities") pod "ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8" (UID: "ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:00:35 crc kubenswrapper[4910]: I0226 22:00:35.208900 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8-kube-api-access-scs7d" (OuterVolumeSpecName: "kube-api-access-scs7d") pod "ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8" (UID: "ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8"). InnerVolumeSpecName "kube-api-access-scs7d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:00:35 crc kubenswrapper[4910]: I0226 22:00:35.300873 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scs7d\" (UniqueName: \"kubernetes.io/projected/ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8-kube-api-access-scs7d\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:35 crc kubenswrapper[4910]: I0226 22:00:35.300938 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:35 crc kubenswrapper[4910]: I0226 22:00:35.437957 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8" (UID: "ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:00:35 crc kubenswrapper[4910]: I0226 22:00:35.504318 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:35 crc kubenswrapper[4910]: I0226 22:00:35.732657 4910 generic.go:334] "Generic (PLEG): container finished" podID="ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8" containerID="6124711e30484bb557ed71d90db751ba66cd33d1877c877abe5cb962ded741a7" exitCode=0 Feb 26 22:00:35 crc kubenswrapper[4910]: I0226 22:00:35.732714 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsxdq" event={"ID":"ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8","Type":"ContainerDied","Data":"6124711e30484bb557ed71d90db751ba66cd33d1877c877abe5cb962ded741a7"} Feb 26 22:00:35 crc kubenswrapper[4910]: I0226 22:00:35.732781 4910 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-rsxdq" event={"ID":"ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8","Type":"ContainerDied","Data":"79cef861a01a640e6da5e1d936c6e8f72778b88d802efe736f5aa45460befbc1"} Feb 26 22:00:35 crc kubenswrapper[4910]: I0226 22:00:35.732815 4910 scope.go:117] "RemoveContainer" containerID="6124711e30484bb557ed71d90db751ba66cd33d1877c877abe5cb962ded741a7" Feb 26 22:00:35 crc kubenswrapper[4910]: I0226 22:00:35.732821 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rsxdq" Feb 26 22:00:35 crc kubenswrapper[4910]: I0226 22:00:35.763759 4910 scope.go:117] "RemoveContainer" containerID="6c00d9e80f4805bd279716e6e27ed71829531c1f0087950dbdd4558bbd63a405" Feb 26 22:00:35 crc kubenswrapper[4910]: I0226 22:00:35.793523 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rsxdq"] Feb 26 22:00:35 crc kubenswrapper[4910]: I0226 22:00:35.819982 4910 scope.go:117] "RemoveContainer" containerID="0d6b3c98186616c921106981d012428ed8bdc098fa5f38d011d4b261a366c0ba" Feb 26 22:00:35 crc kubenswrapper[4910]: I0226 22:00:35.820354 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rsxdq"] Feb 26 22:00:35 crc kubenswrapper[4910]: I0226 22:00:35.840089 4910 scope.go:117] "RemoveContainer" containerID="6124711e30484bb557ed71d90db751ba66cd33d1877c877abe5cb962ded741a7" Feb 26 22:00:35 crc kubenswrapper[4910]: E0226 22:00:35.842455 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6124711e30484bb557ed71d90db751ba66cd33d1877c877abe5cb962ded741a7\": container with ID starting with 6124711e30484bb557ed71d90db751ba66cd33d1877c877abe5cb962ded741a7 not found: ID does not exist" containerID="6124711e30484bb557ed71d90db751ba66cd33d1877c877abe5cb962ded741a7" Feb 26 22:00:35 crc kubenswrapper[4910]: I0226 22:00:35.842513 4910 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6124711e30484bb557ed71d90db751ba66cd33d1877c877abe5cb962ded741a7"} err="failed to get container status \"6124711e30484bb557ed71d90db751ba66cd33d1877c877abe5cb962ded741a7\": rpc error: code = NotFound desc = could not find container \"6124711e30484bb557ed71d90db751ba66cd33d1877c877abe5cb962ded741a7\": container with ID starting with 6124711e30484bb557ed71d90db751ba66cd33d1877c877abe5cb962ded741a7 not found: ID does not exist" Feb 26 22:00:35 crc kubenswrapper[4910]: I0226 22:00:35.842555 4910 scope.go:117] "RemoveContainer" containerID="6c00d9e80f4805bd279716e6e27ed71829531c1f0087950dbdd4558bbd63a405" Feb 26 22:00:35 crc kubenswrapper[4910]: E0226 22:00:35.843193 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c00d9e80f4805bd279716e6e27ed71829531c1f0087950dbdd4558bbd63a405\": container with ID starting with 6c00d9e80f4805bd279716e6e27ed71829531c1f0087950dbdd4558bbd63a405 not found: ID does not exist" containerID="6c00d9e80f4805bd279716e6e27ed71829531c1f0087950dbdd4558bbd63a405" Feb 26 22:00:35 crc kubenswrapper[4910]: I0226 22:00:35.843233 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c00d9e80f4805bd279716e6e27ed71829531c1f0087950dbdd4558bbd63a405"} err="failed to get container status \"6c00d9e80f4805bd279716e6e27ed71829531c1f0087950dbdd4558bbd63a405\": rpc error: code = NotFound desc = could not find container \"6c00d9e80f4805bd279716e6e27ed71829531c1f0087950dbdd4558bbd63a405\": container with ID starting with 6c00d9e80f4805bd279716e6e27ed71829531c1f0087950dbdd4558bbd63a405 not found: ID does not exist" Feb 26 22:00:35 crc kubenswrapper[4910]: I0226 22:00:35.843257 4910 scope.go:117] "RemoveContainer" containerID="0d6b3c98186616c921106981d012428ed8bdc098fa5f38d011d4b261a366c0ba" Feb 26 22:00:35 crc kubenswrapper[4910]: E0226 
22:00:35.843585 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d6b3c98186616c921106981d012428ed8bdc098fa5f38d011d4b261a366c0ba\": container with ID starting with 0d6b3c98186616c921106981d012428ed8bdc098fa5f38d011d4b261a366c0ba not found: ID does not exist" containerID="0d6b3c98186616c921106981d012428ed8bdc098fa5f38d011d4b261a366c0ba" Feb 26 22:00:35 crc kubenswrapper[4910]: I0226 22:00:35.843625 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d6b3c98186616c921106981d012428ed8bdc098fa5f38d011d4b261a366c0ba"} err="failed to get container status \"0d6b3c98186616c921106981d012428ed8bdc098fa5f38d011d4b261a366c0ba\": rpc error: code = NotFound desc = could not find container \"0d6b3c98186616c921106981d012428ed8bdc098fa5f38d011d4b261a366c0ba\": container with ID starting with 0d6b3c98186616c921106981d012428ed8bdc098fa5f38d011d4b261a366c0ba not found: ID does not exist" Feb 26 22:00:35 crc kubenswrapper[4910]: I0226 22:00:35.912939 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8" path="/var/lib/kubelet/pods/ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8/volumes" Feb 26 22:00:40 crc kubenswrapper[4910]: I0226 22:00:40.251437 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dgm55"] Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.512266 4910 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 26 22:00:45 crc kubenswrapper[4910]: E0226 22:00:45.513100 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84c1eb30-f57d-4387-bc3f-deae490cdc42" containerName="registry-server" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.513121 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="84c1eb30-f57d-4387-bc3f-deae490cdc42" 
containerName="registry-server" Feb 26 22:00:45 crc kubenswrapper[4910]: E0226 22:00:45.513148 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8" containerName="extract-utilities" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.513182 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8" containerName="extract-utilities" Feb 26 22:00:45 crc kubenswrapper[4910]: E0226 22:00:45.513201 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8" containerName="registry-server" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.513212 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8" containerName="registry-server" Feb 26 22:00:45 crc kubenswrapper[4910]: E0226 22:00:45.513231 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8" containerName="extract-content" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.513241 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8" containerName="extract-content" Feb 26 22:00:45 crc kubenswrapper[4910]: E0226 22:00:45.513265 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84c1eb30-f57d-4387-bc3f-deae490cdc42" containerName="extract-content" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.513276 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="84c1eb30-f57d-4387-bc3f-deae490cdc42" containerName="extract-content" Feb 26 22:00:45 crc kubenswrapper[4910]: E0226 22:00:45.513291 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84c1eb30-f57d-4387-bc3f-deae490cdc42" containerName="extract-utilities" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.513301 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="84c1eb30-f57d-4387-bc3f-deae490cdc42" 
containerName="extract-utilities" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.513446 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="84c1eb30-f57d-4387-bc3f-deae490cdc42" containerName="registry-server" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.513471 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec415ba3-1e2f-4ca2-8137-0472c5ca1ea8" containerName="registry-server" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.513934 4910 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.513969 4910 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.514180 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 22:00:45 crc kubenswrapper[4910]: E0226 22:00:45.514294 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.514318 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 26 22:00:45 crc kubenswrapper[4910]: E0226 22:00:45.514336 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.514347 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 22:00:45 crc kubenswrapper[4910]: E0226 22:00:45.514361 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.514372 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 22:00:45 crc kubenswrapper[4910]: E0226 22:00:45.514387 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.514399 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 26 22:00:45 crc kubenswrapper[4910]: E0226 22:00:45.514414 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.514423 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 26 22:00:45 crc kubenswrapper[4910]: E0226 22:00:45.514436 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.514447 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 26 22:00:45 crc kubenswrapper[4910]: E0226 22:00:45.514466 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.514476 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 22:00:45 crc kubenswrapper[4910]: E0226 22:00:45.514492 4910 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.514502 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.514985 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.515009 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.515017 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.515025 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.515039 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.515046 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.515052 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.515059 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 
22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.515067 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 22:00:45 crc kubenswrapper[4910]: E0226 22:00:45.515149 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.515172 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 22:00:45 crc kubenswrapper[4910]: E0226 22:00:45.515182 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.515188 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.515419 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66" gracePeriod=15 Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.515471 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4" gracePeriod=15 Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.515522 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f" gracePeriod=15 Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.515575 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2" gracePeriod=15 Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.515581 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec" gracePeriod=15 Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.519100 4910 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.549190 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.564544 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.666025 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.666087 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.666113 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.666175 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.666236 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.666272 4910 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.666292 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.666320 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.666429 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.767169 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.767483 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.767504 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.767274 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.767550 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.767575 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.767592 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.767593 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.767608 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.767627 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.767607 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.767662 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.767681 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.767705 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.846520 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.847383 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.847988 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.848916 4910 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f" exitCode=0 Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.848936 4910 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4" exitCode=0 Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.848945 4910 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec" exitCode=0 Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.848953 4910 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2" exitCode=2 Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.849011 4910 scope.go:117] "RemoveContainer" containerID="549802644a66a1b96f4f5634c1161eeefb977ddcb9122d817c1ccf148f6f078a" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.852599 4910 generic.go:334] "Generic (PLEG): container finished" podID="ddb0f143-e336-4b54-a769-47390935e034" containerID="74254b6e477360573b7393cb9995fc4fe752b558e13cd9d3198153f3f616d123" exitCode=0 Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.852627 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"ddb0f143-e336-4b54-a769-47390935e034","Type":"ContainerDied","Data":"74254b6e477360573b7393cb9995fc4fe752b558e13cd9d3198153f3f616d123"} Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.853256 4910 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.853701 4910 status_manager.go:851] "Failed to get status for pod" podUID="ddb0f143-e336-4b54-a769-47390935e034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:00:45 crc kubenswrapper[4910]: W0226 22:00:45.882372 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-b4ee58c3bc0765e5ae6ea137afb42e5f82b3f9f9b7afeeb755f5574bf87adc84 WatchSource:0}: Error finding container b4ee58c3bc0765e5ae6ea137afb42e5f82b3f9f9b7afeeb755f5574bf87adc84: Status 404 returned error can't find the container with id b4ee58c3bc0765e5ae6ea137afb42e5f82b3f9f9b7afeeb755f5574bf87adc84 Feb 26 22:00:45 crc kubenswrapper[4910]: E0226 22:00:45.888102 4910 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.223:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1897eae4963d2056 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 22:00:45.887610966 +0000 UTC m=+330.967101507,LastTimestamp:2026-02-26 22:00:45.887610966 +0000 UTC m=+330.967101507,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.905279 4910 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:00:45 crc kubenswrapper[4910]: I0226 22:00:45.905690 4910 status_manager.go:851] "Failed to get status for pod" podUID="ddb0f143-e336-4b54-a769-47390935e034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:00:46 crc kubenswrapper[4910]: E0226 22:00:46.142882 4910 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.223:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1897eae4963d2056 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 22:00:45.887610966 +0000 UTC m=+330.967101507,LastTimestamp:2026-02-26 22:00:45.887610966 +0000 UTC m=+330.967101507,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 22:00:46 crc kubenswrapper[4910]: I0226 22:00:46.861406 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b6bc19b116902a7f9f3a2149637da68a961e4f49fe980796c3aea82835548a48"} Feb 26 22:00:46 crc kubenswrapper[4910]: I0226 22:00:46.861471 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b4ee58c3bc0765e5ae6ea137afb42e5f82b3f9f9b7afeeb755f5574bf87adc84"} Feb 26 22:00:46 crc kubenswrapper[4910]: I0226 22:00:46.862302 4910 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:00:46 crc kubenswrapper[4910]: I0226 22:00:46.862676 4910 status_manager.go:851] "Failed to get status for pod" 
podUID="ddb0f143-e336-4b54-a769-47390935e034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:00:46 crc kubenswrapper[4910]: I0226 22:00:46.865549 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 26 22:00:47 crc kubenswrapper[4910]: I0226 22:00:47.311076 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 26 22:00:47 crc kubenswrapper[4910]: I0226 22:00:47.311999 4910 status_manager.go:851] "Failed to get status for pod" podUID="ddb0f143-e336-4b54-a769-47390935e034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:00:47 crc kubenswrapper[4910]: I0226 22:00:47.312505 4910 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:00:47 crc kubenswrapper[4910]: I0226 22:00:47.492001 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddb0f143-e336-4b54-a769-47390935e034-kube-api-access\") pod \"ddb0f143-e336-4b54-a769-47390935e034\" (UID: \"ddb0f143-e336-4b54-a769-47390935e034\") " Feb 26 22:00:47 crc kubenswrapper[4910]: I0226 22:00:47.492095 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ddb0f143-e336-4b54-a769-47390935e034-kubelet-dir\") pod \"ddb0f143-e336-4b54-a769-47390935e034\" (UID: \"ddb0f143-e336-4b54-a769-47390935e034\") " Feb 26 22:00:47 crc kubenswrapper[4910]: I0226 22:00:47.492243 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ddb0f143-e336-4b54-a769-47390935e034-var-lock\") pod \"ddb0f143-e336-4b54-a769-47390935e034\" (UID: \"ddb0f143-e336-4b54-a769-47390935e034\") " Feb 26 22:00:47 crc kubenswrapper[4910]: I0226 22:00:47.492308 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddb0f143-e336-4b54-a769-47390935e034-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ddb0f143-e336-4b54-a769-47390935e034" (UID: "ddb0f143-e336-4b54-a769-47390935e034"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 22:00:47 crc kubenswrapper[4910]: I0226 22:00:47.492305 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddb0f143-e336-4b54-a769-47390935e034-var-lock" (OuterVolumeSpecName: "var-lock") pod "ddb0f143-e336-4b54-a769-47390935e034" (UID: "ddb0f143-e336-4b54-a769-47390935e034"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 22:00:47 crc kubenswrapper[4910]: I0226 22:00:47.492730 4910 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ddb0f143-e336-4b54-a769-47390935e034-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:47 crc kubenswrapper[4910]: I0226 22:00:47.492777 4910 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ddb0f143-e336-4b54-a769-47390935e034-var-lock\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:47 crc kubenswrapper[4910]: I0226 22:00:47.500899 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddb0f143-e336-4b54-a769-47390935e034-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ddb0f143-e336-4b54-a769-47390935e034" (UID: "ddb0f143-e336-4b54-a769-47390935e034"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:00:47 crc kubenswrapper[4910]: I0226 22:00:47.594050 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddb0f143-e336-4b54-a769-47390935e034-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:47 crc kubenswrapper[4910]: I0226 22:00:47.876105 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 26 22:00:47 crc kubenswrapper[4910]: I0226 22:00:47.876489 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ddb0f143-e336-4b54-a769-47390935e034","Type":"ContainerDied","Data":"e8eec41f1f6c567c6f153ae34e39234c06165626a5c86403f657c56cae53345e"} Feb 26 22:00:47 crc kubenswrapper[4910]: I0226 22:00:47.876570 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8eec41f1f6c567c6f153ae34e39234c06165626a5c86403f657c56cae53345e" Feb 26 22:00:47 crc kubenswrapper[4910]: I0226 22:00:47.982587 4910 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:00:47 crc kubenswrapper[4910]: I0226 22:00:47.983128 4910 status_manager.go:851] "Failed to get status for pod" podUID="ddb0f143-e336-4b54-a769-47390935e034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:00:47 crc kubenswrapper[4910]: I0226 22:00:47.989437 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 26 22:00:47 crc kubenswrapper[4910]: I0226 22:00:47.990806 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 22:00:47 crc kubenswrapper[4910]: I0226 22:00:47.991479 4910 status_manager.go:851] "Failed to get status for pod" podUID="ddb0f143-e336-4b54-a769-47390935e034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:00:47 crc kubenswrapper[4910]: I0226 22:00:47.992010 4910 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:00:47 crc kubenswrapper[4910]: I0226 22:00:47.992637 4910 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:00:48 crc kubenswrapper[4910]: I0226 22:00:48.113563 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 26 22:00:48 crc kubenswrapper[4910]: I0226 22:00:48.113636 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 22:00:48 crc kubenswrapper[4910]: I0226 22:00:48.113924 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 26 22:00:48 crc kubenswrapper[4910]: I0226 22:00:48.114037 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 26 22:00:48 crc kubenswrapper[4910]: I0226 22:00:48.114181 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 22:00:48 crc kubenswrapper[4910]: I0226 22:00:48.114358 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 22:00:48 crc kubenswrapper[4910]: I0226 22:00:48.114627 4910 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:48 crc kubenswrapper[4910]: I0226 22:00:48.114862 4910 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:48 crc kubenswrapper[4910]: I0226 22:00:48.114952 4910 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 26 22:00:48 crc kubenswrapper[4910]: E0226 22:00:48.478380 4910 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:00:48 crc kubenswrapper[4910]: E0226 22:00:48.479020 4910 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:00:48 crc kubenswrapper[4910]: E0226 22:00:48.479672 4910 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:00:48 crc kubenswrapper[4910]: E0226 22:00:48.480207 4910 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection 
refused" Feb 26 22:00:48 crc kubenswrapper[4910]: E0226 22:00:48.480721 4910 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:00:48 crc kubenswrapper[4910]: I0226 22:00:48.480800 4910 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 26 22:00:48 crc kubenswrapper[4910]: E0226 22:00:48.481279 4910 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="200ms" Feb 26 22:00:48 crc kubenswrapper[4910]: E0226 22:00:48.682080 4910 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="400ms" Feb 26 22:00:48 crc kubenswrapper[4910]: I0226 22:00:48.888118 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 26 22:00:48 crc kubenswrapper[4910]: I0226 22:00:48.890305 4910 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66" exitCode=0 Feb 26 22:00:48 crc kubenswrapper[4910]: I0226 22:00:48.890413 4910 scope.go:117] "RemoveContainer" containerID="086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f" Feb 26 22:00:48 crc kubenswrapper[4910]: I0226 22:00:48.890676 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 22:00:48 crc kubenswrapper[4910]: I0226 22:00:48.916567 4910 scope.go:117] "RemoveContainer" containerID="d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4" Feb 26 22:00:48 crc kubenswrapper[4910]: I0226 22:00:48.924393 4910 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:00:48 crc kubenswrapper[4910]: I0226 22:00:48.925693 4910 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:00:48 crc kubenswrapper[4910]: I0226 22:00:48.926106 4910 status_manager.go:851] "Failed to get status for pod" podUID="ddb0f143-e336-4b54-a769-47390935e034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:00:48 crc kubenswrapper[4910]: I0226 22:00:48.933520 4910 scope.go:117] "RemoveContainer" containerID="336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec" Feb 26 22:00:48 crc kubenswrapper[4910]: I0226 22:00:48.950522 4910 scope.go:117] "RemoveContainer" containerID="914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2" Feb 26 22:00:48 crc kubenswrapper[4910]: I0226 22:00:48.972473 4910 scope.go:117] "RemoveContainer" containerID="19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66" Feb 26 22:00:48 crc 
kubenswrapper[4910]: I0226 22:00:48.992563 4910 scope.go:117] "RemoveContainer" containerID="14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097" Feb 26 22:00:49 crc kubenswrapper[4910]: I0226 22:00:49.025396 4910 scope.go:117] "RemoveContainer" containerID="086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f" Feb 26 22:00:49 crc kubenswrapper[4910]: E0226 22:00:49.027811 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\": container with ID starting with 086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f not found: ID does not exist" containerID="086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f" Feb 26 22:00:49 crc kubenswrapper[4910]: I0226 22:00:49.027880 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f"} err="failed to get container status \"086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\": rpc error: code = NotFound desc = could not find container \"086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f\": container with ID starting with 086d3f65040196d508bd56f26b70507361b8004610cd1b7a2371de012293163f not found: ID does not exist" Feb 26 22:00:49 crc kubenswrapper[4910]: I0226 22:00:49.027919 4910 scope.go:117] "RemoveContainer" containerID="d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4" Feb 26 22:00:49 crc kubenswrapper[4910]: E0226 22:00:49.028275 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\": container with ID starting with d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4 not found: ID does not exist" 
containerID="d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4" Feb 26 22:00:49 crc kubenswrapper[4910]: I0226 22:00:49.028302 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4"} err="failed to get container status \"d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\": rpc error: code = NotFound desc = could not find container \"d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4\": container with ID starting with d5d8dee19bbe7569ca9c2c494b4a5f8e2dc21b88658ba59af8155bc6e5f444f4 not found: ID does not exist" Feb 26 22:00:49 crc kubenswrapper[4910]: I0226 22:00:49.028318 4910 scope.go:117] "RemoveContainer" containerID="336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec" Feb 26 22:00:49 crc kubenswrapper[4910]: E0226 22:00:49.028705 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\": container with ID starting with 336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec not found: ID does not exist" containerID="336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec" Feb 26 22:00:49 crc kubenswrapper[4910]: I0226 22:00:49.028730 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec"} err="failed to get container status \"336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\": rpc error: code = NotFound desc = could not find container \"336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec\": container with ID starting with 336403d713fd6ceeaccd284277554476e2ba085a574dceef15fa51e9d9a35fec not found: ID does not exist" Feb 26 22:00:49 crc kubenswrapper[4910]: I0226 22:00:49.028748 4910 scope.go:117] 
"RemoveContainer" containerID="914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2" Feb 26 22:00:49 crc kubenswrapper[4910]: E0226 22:00:49.028974 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\": container with ID starting with 914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2 not found: ID does not exist" containerID="914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2" Feb 26 22:00:49 crc kubenswrapper[4910]: I0226 22:00:49.028997 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2"} err="failed to get container status \"914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\": rpc error: code = NotFound desc = could not find container \"914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2\": container with ID starting with 914338d2cab2224ef1d4ad069314aa863435e2543718e3feb2b2f0db22d258e2 not found: ID does not exist" Feb 26 22:00:49 crc kubenswrapper[4910]: I0226 22:00:49.029011 4910 scope.go:117] "RemoveContainer" containerID="19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66" Feb 26 22:00:49 crc kubenswrapper[4910]: E0226 22:00:49.029313 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\": container with ID starting with 19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66 not found: ID does not exist" containerID="19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66" Feb 26 22:00:49 crc kubenswrapper[4910]: I0226 22:00:49.029337 4910 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66"} err="failed to get container status \"19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\": rpc error: code = NotFound desc = could not find container \"19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66\": container with ID starting with 19ec325997507309ab9dae91f2740c1aa28e923add7aa7b4e45d70940eb5ca66 not found: ID does not exist" Feb 26 22:00:49 crc kubenswrapper[4910]: I0226 22:00:49.029353 4910 scope.go:117] "RemoveContainer" containerID="14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097" Feb 26 22:00:49 crc kubenswrapper[4910]: E0226 22:00:49.030011 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\": container with ID starting with 14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097 not found: ID does not exist" containerID="14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097" Feb 26 22:00:49 crc kubenswrapper[4910]: I0226 22:00:49.030043 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097"} err="failed to get container status \"14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\": rpc error: code = NotFound desc = could not find container \"14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097\": container with ID starting with 14e2142894da465f536b36b8c8a996f7a28a00b8669a26270157e38c6b3a8097 not found: ID does not exist" Feb 26 22:00:49 crc kubenswrapper[4910]: E0226 22:00:49.083085 4910 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: 
connection refused" interval="800ms" Feb 26 22:00:49 crc kubenswrapper[4910]: E0226 22:00:49.884661 4910 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="1.6s" Feb 26 22:00:49 crc kubenswrapper[4910]: I0226 22:00:49.921295 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 26 22:00:51 crc kubenswrapper[4910]: E0226 22:00:51.486064 4910 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="3.2s" Feb 26 22:00:54 crc kubenswrapper[4910]: E0226 22:00:54.687585 4910 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="6.4s" Feb 26 22:00:54 crc kubenswrapper[4910]: E0226 22:00:54.911977 4910 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.223:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-298fw" volumeName="registry-storage" Feb 26 22:00:55 crc kubenswrapper[4910]: I0226 22:00:55.906937 4910 status_manager.go:851] "Failed to get status for pod" podUID="ddb0f143-e336-4b54-a769-47390935e034" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:00:55 crc kubenswrapper[4910]: I0226 22:00:55.907997 4910 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:00:56 crc kubenswrapper[4910]: E0226 22:00:56.143868 4910 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.223:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1897eae4963d2056 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 22:00:45.887610966 +0000 UTC m=+330.967101507,LastTimestamp:2026-02-26 22:00:45.887610966 +0000 UTC m=+330.967101507,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 22:00:57 crc kubenswrapper[4910]: I0226 22:00:57.986689 4910 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 26 22:00:57 crc kubenswrapper[4910]: I0226 22:00:57.988226 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 26 22:00:57 crc kubenswrapper[4910]: I0226 22:00:57.988265 4910 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="d72b52b910bf5e9a00497ed002d962476646d86358d4316303a2442593e14b17" exitCode=1 Feb 26 22:00:57 crc kubenswrapper[4910]: I0226 22:00:57.988297 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"d72b52b910bf5e9a00497ed002d962476646d86358d4316303a2442593e14b17"} Feb 26 22:00:57 crc kubenswrapper[4910]: I0226 22:00:57.988788 4910 scope.go:117] "RemoveContainer" containerID="d72b52b910bf5e9a00497ed002d962476646d86358d4316303a2442593e14b17" Feb 26 22:00:57 crc kubenswrapper[4910]: I0226 22:00:57.989168 4910 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:00:57 crc kubenswrapper[4910]: I0226 22:00:57.989971 4910 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.223:6443: connect: connection 
refused" Feb 26 22:00:57 crc kubenswrapper[4910]: I0226 22:00:57.990468 4910 status_manager.go:851] "Failed to get status for pod" podUID="ddb0f143-e336-4b54-a769-47390935e034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:00:58 crc kubenswrapper[4910]: I0226 22:00:58.439512 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 22:00:58 crc kubenswrapper[4910]: I0226 22:00:58.900503 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 22:00:58 crc kubenswrapper[4910]: I0226 22:00:58.901691 4910 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:00:58 crc kubenswrapper[4910]: I0226 22:00:58.902229 4910 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:00:58 crc kubenswrapper[4910]: I0226 22:00:58.902761 4910 status_manager.go:851] "Failed to get status for pod" podUID="ddb0f143-e336-4b54-a769-47390935e034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.223:6443: connect: connection 
refused" Feb 26 22:00:58 crc kubenswrapper[4910]: I0226 22:00:58.922035 4910 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ed1156e-3afd-4214-8184-33b187a1b2a8" Feb 26 22:00:58 crc kubenswrapper[4910]: I0226 22:00:58.922075 4910 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ed1156e-3afd-4214-8184-33b187a1b2a8" Feb 26 22:00:58 crc kubenswrapper[4910]: E0226 22:00:58.922387 4910 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 22:00:58 crc kubenswrapper[4910]: I0226 22:00:58.922903 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 22:00:58 crc kubenswrapper[4910]: W0226 22:00:58.951204 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-77e23ee3d3a6e91c32f08ed432f12d4cc883fe65ad36315904d83ddcdfa8dbfe WatchSource:0}: Error finding container 77e23ee3d3a6e91c32f08ed432f12d4cc883fe65ad36315904d83ddcdfa8dbfe: Status 404 returned error can't find the container with id 77e23ee3d3a6e91c32f08ed432f12d4cc883fe65ad36315904d83ddcdfa8dbfe Feb 26 22:00:58 crc kubenswrapper[4910]: I0226 22:00:58.998900 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 26 22:00:59 crc kubenswrapper[4910]: I0226 22:00:59.001127 4910 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 26 22:00:59 crc kubenswrapper[4910]: I0226 22:00:59.001418 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"907a8cb6464543b12a1917512e47e46ceb3accbfd529e6a2250c89789c1ae08a"} Feb 26 22:00:59 crc kubenswrapper[4910]: I0226 22:00:59.003983 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"77e23ee3d3a6e91c32f08ed432f12d4cc883fe65ad36315904d83ddcdfa8dbfe"} Feb 26 22:00:59 crc kubenswrapper[4910]: I0226 22:00:59.008856 4910 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:00:59 crc kubenswrapper[4910]: I0226 22:00:59.009647 4910 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:00:59 crc kubenswrapper[4910]: I0226 22:00:59.010124 4910 status_manager.go:851] "Failed to get status for pod" podUID="ddb0f143-e336-4b54-a769-47390935e034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.223:6443: connect: 
connection refused" Feb 26 22:01:00 crc kubenswrapper[4910]: I0226 22:01:00.009460 4910 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="4d0b20c18a0c5315bb2ab36f221b68ac9e9215b0fcaa6b8b2118ceb448e8a720" exitCode=0 Feb 26 22:01:00 crc kubenswrapper[4910]: I0226 22:01:00.009521 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"4d0b20c18a0c5315bb2ab36f221b68ac9e9215b0fcaa6b8b2118ceb448e8a720"} Feb 26 22:01:00 crc kubenswrapper[4910]: I0226 22:01:00.010041 4910 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ed1156e-3afd-4214-8184-33b187a1b2a8" Feb 26 22:01:00 crc kubenswrapper[4910]: I0226 22:01:00.010126 4910 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ed1156e-3afd-4214-8184-33b187a1b2a8" Feb 26 22:01:00 crc kubenswrapper[4910]: I0226 22:01:00.010606 4910 status_manager.go:851] "Failed to get status for pod" podUID="ddb0f143-e336-4b54-a769-47390935e034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:01:00 crc kubenswrapper[4910]: E0226 22:01:00.010772 4910 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 22:01:00 crc kubenswrapper[4910]: I0226 22:01:00.012113 4910 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:01:00 crc kubenswrapper[4910]: I0226 22:01:00.012652 4910 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Feb 26 22:01:00 crc kubenswrapper[4910]: I0226 22:01:00.903964 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 22:01:00 crc kubenswrapper[4910]: I0226 22:01:00.904291 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 22:01:01 crc kubenswrapper[4910]: I0226 22:01:01.005608 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 22:01:01 crc kubenswrapper[4910]: I0226 22:01:01.005664 4910 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 22:01:01 crc kubenswrapper[4910]: I0226 22:01:01.018257 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ea4707d3723663af0dfca0766d31cfd4ec197bc883145b71e4e969e943e10c5a"} Feb 26 22:01:01 crc kubenswrapper[4910]: I0226 22:01:01.018311 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"087b4a3cca00ad2a3a9321d41cbccf26e1c92e3fc882462f0d1a9853d1f208b9"} Feb 26 22:01:01 crc kubenswrapper[4910]: I0226 22:01:01.018323 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"415e6b2e93ceff28e28220423dde3b484d58eed44a2749605608715b779a013d"} Feb 26 22:01:01 crc kubenswrapper[4910]: E0226 22:01:01.904368 4910 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: failed to sync secret cache: timed out waiting for the condition Feb 26 22:01:01 crc kubenswrapper[4910]: E0226 22:01:01.904463 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 22:03:03.904438832 +0000 UTC m=+468.983929363 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync secret cache: timed out waiting for the condition Feb 26 22:01:01 crc kubenswrapper[4910]: E0226 22:01:01.904471 4910 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: failed to sync configmap cache: timed out waiting for the condition Feb 26 22:01:01 crc kubenswrapper[4910]: E0226 22:01:01.904554 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 22:03:03.904530194 +0000 UTC m=+468.984020725 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync configmap cache: timed out waiting for the condition Feb 26 22:01:02 crc kubenswrapper[4910]: E0226 22:01:02.006700 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 26 22:01:02 crc kubenswrapper[4910]: E0226 22:01:02.007028 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 26 22:01:02 crc kubenswrapper[4910]: I0226 22:01:02.037216 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"135da4cbd98f32f7f8b34324ff37c77520c01e41a21af55c4be84ad6d6143045"} Feb 26 22:01:02 crc kubenswrapper[4910]: I0226 22:01:02.037265 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f66b48a903b69518ed851158c9f930686f2edbe5256713b4333cc11153ca8035"} Feb 26 22:01:02 crc kubenswrapper[4910]: I0226 22:01:02.037544 4910 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ed1156e-3afd-4214-8184-33b187a1b2a8" Feb 26 22:01:02 crc kubenswrapper[4910]: I0226 22:01:02.037561 4910 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ed1156e-3afd-4214-8184-33b187a1b2a8" Feb 26 22:01:02 crc kubenswrapper[4910]: I0226 22:01:02.037817 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 22:01:03 crc kubenswrapper[4910]: E0226 22:01:03.007316 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 26 22:01:03 crc kubenswrapper[4910]: E0226 22:01:03.007808 4910 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: failed to sync configmap cache: timed out waiting for the condition Feb 26 22:01:03 crc kubenswrapper[4910]: E0226 22:01:03.007906 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 22:03:05.007874376 +0000 UTC m=+470.087364957 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : failed to sync configmap cache: timed out waiting for the condition Feb 26 22:01:03 crc kubenswrapper[4910]: E0226 22:01:03.007329 4910 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 26 22:01:03 crc kubenswrapper[4910]: E0226 22:01:03.007972 4910 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: failed to sync configmap cache: timed out waiting for the condition Feb 26 22:01:03 crc kubenswrapper[4910]: E0226 22:01:03.008080 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 22:03:05.008049001 +0000 UTC m=+470.087539582 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : failed to sync configmap cache: timed out waiting for the condition Feb 26 22:01:03 crc kubenswrapper[4910]: I0226 22:01:03.923517 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 22:01:03 crc kubenswrapper[4910]: I0226 22:01:03.923579 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 22:01:03 crc kubenswrapper[4910]: I0226 22:01:03.932090 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.024295 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.033091 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.054286 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.278935 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" podUID="0b3633c0-54b9-486c-a14b-99b6e5c04765" containerName="oauth-openshift" containerID="cri-o://3de63fc90d30cc78ff3406e01601af8651f8988b74f6ba8855db87212166698a" gracePeriod=15 Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.780612 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.968965 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0b3633c0-54b9-486c-a14b-99b6e5c04765-audit-policies\") pod \"0b3633c0-54b9-486c-a14b-99b6e5c04765\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.969040 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-user-template-error\") pod \"0b3633c0-54b9-486c-a14b-99b6e5c04765\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.969135 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-session\") pod \"0b3633c0-54b9-486c-a14b-99b6e5c04765\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.969205 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-user-idp-0-file-data\") pod \"0b3633c0-54b9-486c-a14b-99b6e5c04765\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.969249 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-service-ca\") pod \"0b3633c0-54b9-486c-a14b-99b6e5c04765\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " Feb 26 22:01:05 crc 
kubenswrapper[4910]: I0226 22:01:05.969298 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-serving-cert\") pod \"0b3633c0-54b9-486c-a14b-99b6e5c04765\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.969327 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b3633c0-54b9-486c-a14b-99b6e5c04765-audit-dir\") pod \"0b3633c0-54b9-486c-a14b-99b6e5c04765\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.969389 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-user-template-provider-selection\") pod \"0b3633c0-54b9-486c-a14b-99b6e5c04765\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.969429 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-ocp-branding-template\") pod \"0b3633c0-54b9-486c-a14b-99b6e5c04765\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.969465 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-cliconfig\") pod \"0b3633c0-54b9-486c-a14b-99b6e5c04765\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.969527 4910 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-user-template-login\") pod \"0b3633c0-54b9-486c-a14b-99b6e5c04765\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.969592 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt4wj\" (UniqueName: \"kubernetes.io/projected/0b3633c0-54b9-486c-a14b-99b6e5c04765-kube-api-access-gt4wj\") pod \"0b3633c0-54b9-486c-a14b-99b6e5c04765\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.969623 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-router-certs\") pod \"0b3633c0-54b9-486c-a14b-99b6e5c04765\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.969656 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-trusted-ca-bundle\") pod \"0b3633c0-54b9-486c-a14b-99b6e5c04765\" (UID: \"0b3633c0-54b9-486c-a14b-99b6e5c04765\") " Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.970200 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b3633c0-54b9-486c-a14b-99b6e5c04765-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "0b3633c0-54b9-486c-a14b-99b6e5c04765" (UID: "0b3633c0-54b9-486c-a14b-99b6e5c04765"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.970261 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b3633c0-54b9-486c-a14b-99b6e5c04765-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "0b3633c0-54b9-486c-a14b-99b6e5c04765" (UID: "0b3633c0-54b9-486c-a14b-99b6e5c04765"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.970247 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "0b3633c0-54b9-486c-a14b-99b6e5c04765" (UID: "0b3633c0-54b9-486c-a14b-99b6e5c04765"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.971445 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "0b3633c0-54b9-486c-a14b-99b6e5c04765" (UID: "0b3633c0-54b9-486c-a14b-99b6e5c04765"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.971518 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "0b3633c0-54b9-486c-a14b-99b6e5c04765" (UID: "0b3633c0-54b9-486c-a14b-99b6e5c04765"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.977997 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "0b3633c0-54b9-486c-a14b-99b6e5c04765" (UID: "0b3633c0-54b9-486c-a14b-99b6e5c04765"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.978068 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b3633c0-54b9-486c-a14b-99b6e5c04765-kube-api-access-gt4wj" (OuterVolumeSpecName: "kube-api-access-gt4wj") pod "0b3633c0-54b9-486c-a14b-99b6e5c04765" (UID: "0b3633c0-54b9-486c-a14b-99b6e5c04765"). InnerVolumeSpecName "kube-api-access-gt4wj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.978531 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "0b3633c0-54b9-486c-a14b-99b6e5c04765" (UID: "0b3633c0-54b9-486c-a14b-99b6e5c04765"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.978975 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "0b3633c0-54b9-486c-a14b-99b6e5c04765" (UID: "0b3633c0-54b9-486c-a14b-99b6e5c04765"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.979927 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "0b3633c0-54b9-486c-a14b-99b6e5c04765" (UID: "0b3633c0-54b9-486c-a14b-99b6e5c04765"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.981092 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "0b3633c0-54b9-486c-a14b-99b6e5c04765" (UID: "0b3633c0-54b9-486c-a14b-99b6e5c04765"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.982128 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "0b3633c0-54b9-486c-a14b-99b6e5c04765" (UID: "0b3633c0-54b9-486c-a14b-99b6e5c04765"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.985379 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "0b3633c0-54b9-486c-a14b-99b6e5c04765" (UID: "0b3633c0-54b9-486c-a14b-99b6e5c04765"). 
InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:01:05 crc kubenswrapper[4910]: I0226 22:01:05.988987 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "0b3633c0-54b9-486c-a14b-99b6e5c04765" (UID: "0b3633c0-54b9-486c-a14b-99b6e5c04765"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:01:06 crc kubenswrapper[4910]: I0226 22:01:06.063045 4910 generic.go:334] "Generic (PLEG): container finished" podID="0b3633c0-54b9-486c-a14b-99b6e5c04765" containerID="3de63fc90d30cc78ff3406e01601af8651f8988b74f6ba8855db87212166698a" exitCode=0 Feb 26 22:01:06 crc kubenswrapper[4910]: I0226 22:01:06.063194 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" Feb 26 22:01:06 crc kubenswrapper[4910]: I0226 22:01:06.063160 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" event={"ID":"0b3633c0-54b9-486c-a14b-99b6e5c04765","Type":"ContainerDied","Data":"3de63fc90d30cc78ff3406e01601af8651f8988b74f6ba8855db87212166698a"} Feb 26 22:01:06 crc kubenswrapper[4910]: I0226 22:01:06.063286 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dgm55" event={"ID":"0b3633c0-54b9-486c-a14b-99b6e5c04765","Type":"ContainerDied","Data":"00aef2f644930da530bf887eee56cf21dd9b85724bd6b95272478579d841e225"} Feb 26 22:01:06 crc kubenswrapper[4910]: I0226 22:01:06.063332 4910 scope.go:117] "RemoveContainer" containerID="3de63fc90d30cc78ff3406e01601af8651f8988b74f6ba8855db87212166698a" Feb 26 22:01:06 crc kubenswrapper[4910]: I0226 22:01:06.071504 4910 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 26 22:01:06 crc kubenswrapper[4910]: I0226 22:01:06.071571 4910 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 26 22:01:06 crc kubenswrapper[4910]: I0226 22:01:06.071602 4910 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 26 22:01:06 crc kubenswrapper[4910]: I0226 22:01:06.071629 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gt4wj\" (UniqueName: \"kubernetes.io/projected/0b3633c0-54b9-486c-a14b-99b6e5c04765-kube-api-access-gt4wj\") on node \"crc\" DevicePath \"\"" Feb 26 22:01:06 crc kubenswrapper[4910]: I0226 22:01:06.071656 4910 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 26 22:01:06 crc kubenswrapper[4910]: I0226 22:01:06.071685 4910 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:01:06 crc kubenswrapper[4910]: I0226 22:01:06.071710 4910 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0b3633c0-54b9-486c-a14b-99b6e5c04765-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 26 22:01:06 crc 
kubenswrapper[4910]: I0226 22:01:06.071740 4910 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 26 22:01:06 crc kubenswrapper[4910]: I0226 22:01:06.071764 4910 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 26 22:01:06 crc kubenswrapper[4910]: I0226 22:01:06.071790 4910 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:01:06 crc kubenswrapper[4910]: I0226 22:01:06.071814 4910 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 22:01:06 crc kubenswrapper[4910]: I0226 22:01:06.071844 4910 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 22:01:06 crc kubenswrapper[4910]: I0226 22:01:06.071870 4910 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b3633c0-54b9-486c-a14b-99b6e5c04765-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 26 22:01:06 crc kubenswrapper[4910]: I0226 22:01:06.071895 4910 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/0b3633c0-54b9-486c-a14b-99b6e5c04765-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 26 22:01:06 crc kubenswrapper[4910]: I0226 22:01:06.098818 4910 scope.go:117] "RemoveContainer" containerID="3de63fc90d30cc78ff3406e01601af8651f8988b74f6ba8855db87212166698a" Feb 26 22:01:06 crc kubenswrapper[4910]: E0226 22:01:06.099282 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3de63fc90d30cc78ff3406e01601af8651f8988b74f6ba8855db87212166698a\": container with ID starting with 3de63fc90d30cc78ff3406e01601af8651f8988b74f6ba8855db87212166698a not found: ID does not exist" containerID="3de63fc90d30cc78ff3406e01601af8651f8988b74f6ba8855db87212166698a" Feb 26 22:01:06 crc kubenswrapper[4910]: I0226 22:01:06.099700 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3de63fc90d30cc78ff3406e01601af8651f8988b74f6ba8855db87212166698a"} err="failed to get container status \"3de63fc90d30cc78ff3406e01601af8651f8988b74f6ba8855db87212166698a\": rpc error: code = NotFound desc = could not find container \"3de63fc90d30cc78ff3406e01601af8651f8988b74f6ba8855db87212166698a\": container with ID starting with 3de63fc90d30cc78ff3406e01601af8651f8988b74f6ba8855db87212166698a not found: ID does not exist" Feb 26 22:01:06 crc kubenswrapper[4910]: I0226 22:01:06.687113 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 26 22:01:06 crc kubenswrapper[4910]: I0226 22:01:06.692619 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 26 22:01:06 crc kubenswrapper[4910]: I0226 22:01:06.692652 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 26 22:01:07 crc kubenswrapper[4910]: I0226 
22:01:07.010402 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 26 22:01:07 crc kubenswrapper[4910]: I0226 22:01:07.051137 4910 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 22:01:07 crc kubenswrapper[4910]: I0226 22:01:07.070954 4910 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ed1156e-3afd-4214-8184-33b187a1b2a8" Feb 26 22:01:07 crc kubenswrapper[4910]: I0226 22:01:07.070985 4910 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ed1156e-3afd-4214-8184-33b187a1b2a8" Feb 26 22:01:07 crc kubenswrapper[4910]: I0226 22:01:07.076536 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 22:01:07 crc kubenswrapper[4910]: I0226 22:01:07.156628 4910 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="9d2f1be7-763a-423a-a129-b91f9cfad700" Feb 26 22:01:07 crc kubenswrapper[4910]: E0226 22:01:07.563658 4910 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError" Feb 26 22:01:07 crc kubenswrapper[4910]: E0226 22:01:07.752155 4910 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-session\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError" Feb 26 22:01:08 crc kubenswrapper[4910]: I0226 22:01:08.081678 4910 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ed1156e-3afd-4214-8184-33b187a1b2a8" Feb 26 22:01:08 crc kubenswrapper[4910]: I0226 22:01:08.081723 4910 
mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ed1156e-3afd-4214-8184-33b187a1b2a8" Feb 26 22:01:08 crc kubenswrapper[4910]: I0226 22:01:08.086407 4910 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="9d2f1be7-763a-423a-a129-b91f9cfad700" Feb 26 22:01:08 crc kubenswrapper[4910]: I0226 22:01:08.446089 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 22:01:15 crc kubenswrapper[4910]: E0226 22:01:15.930691 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[networking-console-plugin-cert nginx-conf], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 22:01:15 crc kubenswrapper[4910]: E0226 22:01:15.948227 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-cqllr], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 22:01:15 crc kubenswrapper[4910]: E0226 22:01:15.959736 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-s2dwl], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 22:01:16 crc kubenswrapper[4910]: I0226 22:01:16.829089 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 26 22:01:17 crc 
kubenswrapper[4910]: I0226 22:01:17.277875 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 26 22:01:17 crc kubenswrapper[4910]: I0226 22:01:17.464387 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 26 22:01:17 crc kubenswrapper[4910]: I0226 22:01:17.649563 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 26 22:01:18 crc kubenswrapper[4910]: I0226 22:01:18.101769 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 22:01:18 crc kubenswrapper[4910]: I0226 22:01:18.230561 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 26 22:01:18 crc kubenswrapper[4910]: I0226 22:01:18.282471 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 26 22:01:18 crc kubenswrapper[4910]: I0226 22:01:18.332565 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 26 22:01:18 crc kubenswrapper[4910]: I0226 22:01:18.490283 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 26 22:01:18 crc kubenswrapper[4910]: I0226 22:01:18.642590 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 26 22:01:18 crc kubenswrapper[4910]: I0226 22:01:18.692975 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 26 22:01:18 crc kubenswrapper[4910]: I0226 
22:01:18.708434 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 26 22:01:18 crc kubenswrapper[4910]: I0226 22:01:18.728919 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 26 22:01:18 crc kubenswrapper[4910]: I0226 22:01:18.771381 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 26 22:01:18 crc kubenswrapper[4910]: I0226 22:01:18.780796 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 26 22:01:18 crc kubenswrapper[4910]: I0226 22:01:18.989218 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 26 22:01:19 crc kubenswrapper[4910]: I0226 22:01:19.021479 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 26 22:01:19 crc kubenswrapper[4910]: I0226 22:01:19.120387 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 26 22:01:19 crc kubenswrapper[4910]: I0226 22:01:19.239150 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 26 22:01:19 crc kubenswrapper[4910]: I0226 22:01:19.347828 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 26 22:01:19 crc kubenswrapper[4910]: I0226 22:01:19.372470 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 26 22:01:19 crc kubenswrapper[4910]: I0226 22:01:19.412550 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 26 22:01:19 crc kubenswrapper[4910]: I0226 22:01:19.554338 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 26 22:01:19 crc kubenswrapper[4910]: I0226 22:01:19.569020 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 26 22:01:19 crc kubenswrapper[4910]: I0226 22:01:19.626111 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 26 22:01:19 crc kubenswrapper[4910]: I0226 22:01:19.905948 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 26 22:01:20 crc kubenswrapper[4910]: I0226 22:01:20.015367 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 26 22:01:20 crc kubenswrapper[4910]: I0226 22:01:20.128878 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 26 22:01:20 crc kubenswrapper[4910]: I0226 22:01:20.130513 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 26 22:01:20 crc kubenswrapper[4910]: I0226 22:01:20.146666 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 26 22:01:20 crc kubenswrapper[4910]: I0226 22:01:20.220730 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 26 22:01:20 crc kubenswrapper[4910]: I0226 22:01:20.323704 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 26 22:01:20 crc kubenswrapper[4910]: I0226 22:01:20.349195 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 26 22:01:20 crc kubenswrapper[4910]: I0226 22:01:20.422222 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 26 22:01:20 crc kubenswrapper[4910]: I0226 22:01:20.443493 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 26 22:01:20 crc kubenswrapper[4910]: I0226 22:01:20.647325 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 26 22:01:20 crc kubenswrapper[4910]: I0226 22:01:20.648479 4910 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 26 22:01:20 crc kubenswrapper[4910]: I0226 22:01:20.651568 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=35.65154552 podStartE2EDuration="35.65154552s" podCreationTimestamp="2026-02-26 22:00:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:01:07.086287887 +0000 UTC m=+352.165778438" watchObservedRunningTime="2026-02-26 22:01:20.65154552 +0000 UTC m=+365.731036101"
Feb 26 22:01:20 crc kubenswrapper[4910]: I0226 22:01:20.657597 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dgm55","openshift-kube-apiserver/kube-apiserver-crc"]
Feb 26 22:01:20 crc kubenswrapper[4910]: I0226 22:01:20.657684 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 26 22:01:20 crc kubenswrapper[4910]: I0226 22:01:20.668703 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 22:01:20 crc kubenswrapper[4910]: I0226 22:01:20.669341 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 26 22:01:20 crc kubenswrapper[4910]: I0226 22:01:20.687077 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=13.687054307 podStartE2EDuration="13.687054307s" podCreationTimestamp="2026-02-26 22:01:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:01:20.680734203 +0000 UTC m=+365.760224774" watchObservedRunningTime="2026-02-26 22:01:20.687054307 +0000 UTC m=+365.766544868"
Feb 26 22:01:20 crc kubenswrapper[4910]: I0226 22:01:20.689016 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 26 22:01:20 crc kubenswrapper[4910]: I0226 22:01:20.781881 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 26 22:01:20 crc kubenswrapper[4910]: I0226 22:01:20.788827 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 26 22:01:20 crc kubenswrapper[4910]: I0226 22:01:20.819677 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 26 22:01:20 crc kubenswrapper[4910]: I0226 22:01:20.822207 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 26 22:01:21 crc kubenswrapper[4910]: I0226 22:01:21.003329 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 26 22:01:21 crc kubenswrapper[4910]: I0226 22:01:21.141702 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 26 22:01:21 crc kubenswrapper[4910]: I0226 22:01:21.224287 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 26 22:01:21 crc kubenswrapper[4910]: I0226 22:01:21.268218 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 26 22:01:21 crc kubenswrapper[4910]: I0226 22:01:21.269773 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 26 22:01:21 crc kubenswrapper[4910]: I0226 22:01:21.534142 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 26 22:01:21 crc kubenswrapper[4910]: I0226 22:01:21.568580 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 26 22:01:21 crc kubenswrapper[4910]: I0226 22:01:21.599226 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 26 22:01:21 crc kubenswrapper[4910]: I0226 22:01:21.633017 4910 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 26 22:01:21 crc kubenswrapper[4910]: I0226 22:01:21.642768 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 26 22:01:21 crc kubenswrapper[4910]: I0226 22:01:21.651703 4910 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 26 22:01:21 crc kubenswrapper[4910]: I0226 22:01:21.664025 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 26 22:01:21 crc kubenswrapper[4910]: I0226 22:01:21.667653 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 26 22:01:21 crc kubenswrapper[4910]: I0226 22:01:21.684379 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 26 22:01:21 crc kubenswrapper[4910]: I0226 22:01:21.688286 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 26 22:01:21 crc kubenswrapper[4910]: I0226 22:01:21.911636 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 26 22:01:21 crc kubenswrapper[4910]: I0226 22:01:21.912672 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b3633c0-54b9-486c-a14b-99b6e5c04765" path="/var/lib/kubelet/pods/0b3633c0-54b9-486c-a14b-99b6e5c04765/volumes"
Feb 26 22:01:21 crc kubenswrapper[4910]: I0226 22:01:21.955191 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 26 22:01:21 crc kubenswrapper[4910]: I0226 22:01:21.997524 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 26 22:01:22 crc kubenswrapper[4910]: I0226 22:01:22.052654 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 26 22:01:22 crc kubenswrapper[4910]: I0226 22:01:22.075648 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 26 22:01:22 crc kubenswrapper[4910]: I0226 22:01:22.112190 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 26 22:01:22 crc kubenswrapper[4910]: I0226 22:01:22.239279 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 26 22:01:22 crc kubenswrapper[4910]: I0226 22:01:22.267325 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 26 22:01:22 crc kubenswrapper[4910]: I0226 22:01:22.306352 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 26 22:01:22 crc kubenswrapper[4910]: I0226 22:01:22.333584 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 26 22:01:22 crc kubenswrapper[4910]: I0226 22:01:22.336679 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 26 22:01:22 crc kubenswrapper[4910]: I0226 22:01:22.386433 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 26 22:01:22 crc kubenswrapper[4910]: I0226 22:01:22.477587 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 26 22:01:22 crc kubenswrapper[4910]: I0226 22:01:22.491682 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 26 22:01:22 crc kubenswrapper[4910]: I0226 22:01:22.509777 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 26 22:01:22 crc kubenswrapper[4910]: I0226 22:01:22.510690 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 26 22:01:22 crc kubenswrapper[4910]: I0226 22:01:22.519927 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 26 22:01:22 crc kubenswrapper[4910]: I0226 22:01:22.582241 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 26 22:01:22 crc kubenswrapper[4910]: I0226 22:01:22.625020 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 26 22:01:22 crc kubenswrapper[4910]: I0226 22:01:22.643027 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 26 22:01:22 crc kubenswrapper[4910]: I0226 22:01:22.766843 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 26 22:01:22 crc kubenswrapper[4910]: I0226 22:01:22.844704 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 26 22:01:22 crc kubenswrapper[4910]: I0226 22:01:22.872251 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 26 22:01:22 crc kubenswrapper[4910]: I0226 22:01:22.877821 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 26 22:01:22 crc kubenswrapper[4910]: I0226 22:01:22.926938 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 26 22:01:22 crc kubenswrapper[4910]: I0226 22:01:22.955647 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 26 22:01:22 crc kubenswrapper[4910]: I0226 22:01:22.958212 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 26 22:01:23 crc kubenswrapper[4910]: I0226 22:01:23.008965 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 26 22:01:23 crc kubenswrapper[4910]: I0226 22:01:23.020878 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 26 22:01:23 crc kubenswrapper[4910]: I0226 22:01:23.087433 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 26 22:01:23 crc kubenswrapper[4910]: I0226 22:01:23.099775 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 26 22:01:23 crc kubenswrapper[4910]: I0226 22:01:23.174153 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 26 22:01:23 crc kubenswrapper[4910]: I0226 22:01:23.182817 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 26 22:01:23 crc kubenswrapper[4910]: I0226 22:01:23.274398 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 26 22:01:23 crc kubenswrapper[4910]: I0226 22:01:23.297047 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 26 22:01:23 crc kubenswrapper[4910]: I0226 22:01:23.352452 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 26 22:01:23 crc kubenswrapper[4910]: I0226 22:01:23.416912 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 26 22:01:23 crc kubenswrapper[4910]: I0226 22:01:23.532569 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 26 22:01:23 crc kubenswrapper[4910]: I0226 22:01:23.543721 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 26 22:01:23 crc kubenswrapper[4910]: I0226 22:01:23.554219 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 26 22:01:23 crc kubenswrapper[4910]: I0226 22:01:23.735723 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 26 22:01:23 crc kubenswrapper[4910]: I0226 22:01:23.778558 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 26 22:01:23 crc kubenswrapper[4910]: I0226 22:01:23.882579 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 26 22:01:23 crc kubenswrapper[4910]: I0226 22:01:23.882899 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 26 22:01:23 crc kubenswrapper[4910]: I0226 22:01:23.935545 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 26 22:01:23 crc kubenswrapper[4910]: I0226 22:01:23.952196 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.138252 4910 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.183973 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.207281 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.248269 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-74df89945f-lzjkm"]
Feb 26 22:01:24 crc kubenswrapper[4910]: E0226 22:01:24.248605 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b3633c0-54b9-486c-a14b-99b6e5c04765" containerName="oauth-openshift"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.248635 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3633c0-54b9-486c-a14b-99b6e5c04765" containerName="oauth-openshift"
Feb 26 22:01:24 crc kubenswrapper[4910]: E0226 22:01:24.248660 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddb0f143-e336-4b54-a769-47390935e034" containerName="installer"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.248673 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddb0f143-e336-4b54-a769-47390935e034" containerName="installer"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.248848 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b3633c0-54b9-486c-a14b-99b6e5c04765" containerName="oauth-openshift"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.248889 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddb0f143-e336-4b54-a769-47390935e034" containerName="installer"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.250323 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.257608 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.258050 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.258399 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.259464 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.259669 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.259736 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.259979 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.260513 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.261217 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.261562 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 26 crc kubenswrapper[4910]: I0226 22:01:24.261724 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.261772 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.263067 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.272482 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.278774 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-74df89945f-lzjkm"]
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.279287 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.287259 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.338269 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.376087 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-system-service-ca\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.376181 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.376227 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-system-router-certs\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.376262 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3e885eeb-c983-47d5-8e67-53be03b78dcb-audit-policies\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.376334 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.376371 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.376399 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3e885eeb-c983-47d5-8e67-53be03b78dcb-audit-dir\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.376428 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-system-session\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.376458 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt8c8\" (UniqueName: \"kubernetes.io/projected/3e885eeb-c983-47d5-8e67-53be03b78dcb-kube-api-access-kt8c8\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.376510 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-user-template-error\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.376544 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.376584 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-user-template-login\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.376615 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.376679 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.478396 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.478475 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3e885eeb-c983-47d5-8e67-53be03b78dcb-audit-dir\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.478508 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-system-session\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.478543 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt8c8\" (UniqueName: \"kubernetes.io/projected/3e885eeb-c983-47d5-8e67-53be03b78dcb-kube-api-access-kt8c8\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.478608 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-user-template-error\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.478640 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.478685 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-user-template-login\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.478719 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.478797 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.478842 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-system-service-ca\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.478882 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.478920 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-system-router-certs\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.478955 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3e885eeb-c983-47d5-8e67-53be03b78dcb-audit-policies\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.479015 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.480282 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.480285 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3e885eeb-c983-47d5-8e67-53be03b78dcb-audit-dir\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.481706 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.482892 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3e885eeb-c983-47d5-8e67-53be03b78dcb-audit-policies\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm"
Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.483009 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\"
(UniqueName: \"kubernetes.io/configmap/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-system-service-ca\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm" Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.492571 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-user-template-error\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm" Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.492598 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-system-router-certs\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm" Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.492598 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-system-session\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm" Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.492843 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " 
pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm" Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.492878 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm" Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.493346 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm" Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.503260 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm" Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.505494 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3e885eeb-c983-47d5-8e67-53be03b78dcb-v4-0-config-user-template-login\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm" Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.509028 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt8c8\" (UniqueName: 
\"kubernetes.io/projected/3e885eeb-c983-47d5-8e67-53be03b78dcb-kube-api-access-kt8c8\") pod \"oauth-openshift-74df89945f-lzjkm\" (UID: \"3e885eeb-c983-47d5-8e67-53be03b78dcb\") " pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm" Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.534497 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.547304 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.576289 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.585493 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.591940 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm" Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.614498 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.700538 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.773475 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.823588 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-74df89945f-lzjkm"] Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.856792 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.889078 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.917748 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 26 22:01:24 crc kubenswrapper[4910]: I0226 22:01:24.926362 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 26 22:01:25 crc kubenswrapper[4910]: I0226 22:01:25.030517 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 26 22:01:25 crc kubenswrapper[4910]: I0226 22:01:25.145799 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 26 22:01:25 crc 
kubenswrapper[4910]: I0226 22:01:25.191916 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 26 22:01:25 crc kubenswrapper[4910]: I0226 22:01:25.198619 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm" event={"ID":"3e885eeb-c983-47d5-8e67-53be03b78dcb","Type":"ContainerStarted","Data":"c9fb1d68de6c7d9abfc0be0116bfe8ffb6ba81d126d9f6b189cf567d718bd6eb"} Feb 26 22:01:25 crc kubenswrapper[4910]: I0226 22:01:25.198695 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm" event={"ID":"3e885eeb-c983-47d5-8e67-53be03b78dcb","Type":"ContainerStarted","Data":"d849ecb7e8b0035292b11f350263aa246d7dc6371609ae52192742b42c6073dc"} Feb 26 22:01:25 crc kubenswrapper[4910]: I0226 22:01:25.199082 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm" Feb 26 22:01:25 crc kubenswrapper[4910]: I0226 22:01:25.233893 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm" podStartSLOduration=45.233865769 podStartE2EDuration="45.233865769s" podCreationTimestamp="2026-02-26 22:00:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:01:25.229992632 +0000 UTC m=+370.309483253" watchObservedRunningTime="2026-02-26 22:01:25.233865769 +0000 UTC m=+370.313356350" Feb 26 22:01:25 crc kubenswrapper[4910]: I0226 22:01:25.445801 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 22:01:25 crc kubenswrapper[4910]: I0226 22:01:25.478909 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" 
Feb 26 22:01:25 crc kubenswrapper[4910]: I0226 22:01:25.551300 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 26 22:01:25 crc kubenswrapper[4910]: I0226 22:01:25.559612 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 26 22:01:25 crc kubenswrapper[4910]: I0226 22:01:25.697967 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 26 22:01:25 crc kubenswrapper[4910]: I0226 22:01:25.738200 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 26 22:01:25 crc kubenswrapper[4910]: I0226 22:01:25.745132 4910 patch_prober.go:28] interesting pod/oauth-openshift-74df89945f-lzjkm container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": read tcp 10.217.0.2:42112->10.217.0.68:6443: read: connection reset by peer" start-of-body= Feb 26 22:01:25 crc kubenswrapper[4910]: I0226 22:01:25.745183 4910 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm" podUID="3e885eeb-c983-47d5-8e67-53be03b78dcb" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": read tcp 10.217.0.2:42112->10.217.0.68:6443: read: connection reset by peer" Feb 26 22:01:25 crc kubenswrapper[4910]: I0226 22:01:25.763871 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 26 22:01:25 crc kubenswrapper[4910]: I0226 22:01:25.858434 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 26 22:01:25 crc kubenswrapper[4910]: I0226 
22:01:25.899254 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 26 22:01:25 crc kubenswrapper[4910]: I0226 22:01:25.983121 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 26 22:01:26 crc kubenswrapper[4910]: I0226 22:01:26.001977 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 26 22:01:26 crc kubenswrapper[4910]: I0226 22:01:26.078980 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 26 22:01:26 crc kubenswrapper[4910]: I0226 22:01:26.201859 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 26 22:01:26 crc kubenswrapper[4910]: I0226 22:01:26.205695 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-74df89945f-lzjkm_3e885eeb-c983-47d5-8e67-53be03b78dcb/oauth-openshift/0.log" Feb 26 22:01:26 crc kubenswrapper[4910]: I0226 22:01:26.205775 4910 generic.go:334] "Generic (PLEG): container finished" podID="3e885eeb-c983-47d5-8e67-53be03b78dcb" containerID="c9fb1d68de6c7d9abfc0be0116bfe8ffb6ba81d126d9f6b189cf567d718bd6eb" exitCode=255 Feb 26 22:01:26 crc kubenswrapper[4910]: I0226 22:01:26.205816 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm" event={"ID":"3e885eeb-c983-47d5-8e67-53be03b78dcb","Type":"ContainerDied","Data":"c9fb1d68de6c7d9abfc0be0116bfe8ffb6ba81d126d9f6b189cf567d718bd6eb"} Feb 26 22:01:26 crc kubenswrapper[4910]: I0226 22:01:26.206345 4910 scope.go:117] "RemoveContainer" containerID="c9fb1d68de6c7d9abfc0be0116bfe8ffb6ba81d126d9f6b189cf567d718bd6eb" Feb 26 22:01:26 crc 
kubenswrapper[4910]: I0226 22:01:26.310764 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 26 22:01:26 crc kubenswrapper[4910]: I0226 22:01:26.331185 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 26 22:01:26 crc kubenswrapper[4910]: I0226 22:01:26.355850 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 26 22:01:26 crc kubenswrapper[4910]: I0226 22:01:26.376903 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 26 22:01:26 crc kubenswrapper[4910]: I0226 22:01:26.475639 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 26 22:01:26 crc kubenswrapper[4910]: I0226 22:01:26.555701 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 26 22:01:26 crc kubenswrapper[4910]: I0226 22:01:26.615495 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 26 22:01:26 crc kubenswrapper[4910]: I0226 22:01:26.618687 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 26 22:01:26 crc kubenswrapper[4910]: I0226 22:01:26.640995 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 26 22:01:26 crc kubenswrapper[4910]: I0226 22:01:26.645787 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 26 22:01:26 crc kubenswrapper[4910]: I0226 22:01:26.658597 4910 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 26 22:01:26 crc kubenswrapper[4910]: I0226 22:01:26.692846 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 26 22:01:26 crc kubenswrapper[4910]: I0226 22:01:26.720654 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 26 22:01:26 crc kubenswrapper[4910]: I0226 22:01:26.748756 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 26 22:01:26 crc kubenswrapper[4910]: I0226 22:01:26.749536 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 26 22:01:26 crc kubenswrapper[4910]: I0226 22:01:26.786813 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 26 22:01:26 crc kubenswrapper[4910]: I0226 22:01:26.935652 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 22:01:27 crc kubenswrapper[4910]: I0226 22:01:27.035898 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 26 22:01:27 crc kubenswrapper[4910]: I0226 22:01:27.080621 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 26 22:01:27 crc kubenswrapper[4910]: I0226 22:01:27.200342 4910 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 26 22:01:27 crc kubenswrapper[4910]: I0226 22:01:27.224127 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-74df89945f-lzjkm_3e885eeb-c983-47d5-8e67-53be03b78dcb/oauth-openshift/1.log" Feb 26 22:01:27 crc kubenswrapper[4910]: I0226 22:01:27.224660 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-74df89945f-lzjkm_3e885eeb-c983-47d5-8e67-53be03b78dcb/oauth-openshift/0.log" Feb 26 22:01:27 crc kubenswrapper[4910]: I0226 22:01:27.224709 4910 generic.go:334] "Generic (PLEG): container finished" podID="3e885eeb-c983-47d5-8e67-53be03b78dcb" containerID="d7c38dc9b79d192816da218f5657469421b2b8b0a48a391c9d1271e1f577b2e8" exitCode=255 Feb 26 22:01:27 crc kubenswrapper[4910]: I0226 22:01:27.224741 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm" event={"ID":"3e885eeb-c983-47d5-8e67-53be03b78dcb","Type":"ContainerDied","Data":"d7c38dc9b79d192816da218f5657469421b2b8b0a48a391c9d1271e1f577b2e8"} Feb 26 22:01:27 crc kubenswrapper[4910]: I0226 22:01:27.224782 4910 scope.go:117] "RemoveContainer" containerID="c9fb1d68de6c7d9abfc0be0116bfe8ffb6ba81d126d9f6b189cf567d718bd6eb" Feb 26 22:01:27 crc kubenswrapper[4910]: I0226 22:01:27.225265 4910 scope.go:117] 
"RemoveContainer" containerID="d7c38dc9b79d192816da218f5657469421b2b8b0a48a391c9d1271e1f577b2e8" Feb 26 22:01:27 crc kubenswrapper[4910]: E0226 22:01:27.225660 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-74df89945f-lzjkm_openshift-authentication(3e885eeb-c983-47d5-8e67-53be03b78dcb)\"" pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm" podUID="3e885eeb-c983-47d5-8e67-53be03b78dcb" Feb 26 22:01:27 crc kubenswrapper[4910]: I0226 22:01:27.289945 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 26 22:01:27 crc kubenswrapper[4910]: I0226 22:01:27.300720 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 26 22:01:27 crc kubenswrapper[4910]: I0226 22:01:27.339454 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 26 22:01:27 crc kubenswrapper[4910]: I0226 22:01:27.379501 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 26 22:01:27 crc kubenswrapper[4910]: I0226 22:01:27.506405 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 26 22:01:27 crc kubenswrapper[4910]: I0226 22:01:27.589321 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 26 22:01:27 crc kubenswrapper[4910]: I0226 22:01:27.686004 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 26 22:01:27 crc kubenswrapper[4910]: I0226 22:01:27.736130 4910 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 26 22:01:27 crc kubenswrapper[4910]: I0226 22:01:27.793104 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 26 22:01:27 crc kubenswrapper[4910]: I0226 22:01:27.845868 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 26 22:01:27 crc kubenswrapper[4910]: I0226 22:01:27.894964 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 26 22:01:27 crc kubenswrapper[4910]: I0226 22:01:27.995738 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.010041 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.019096 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.027318 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.044759 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.052303 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.058486 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 26 22:01:28 crc 
kubenswrapper[4910]: I0226 22:01:28.061043 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.204381 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.233874 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-74df89945f-lzjkm_3e885eeb-c983-47d5-8e67-53be03b78dcb/oauth-openshift/1.log" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.234642 4910 scope.go:117] "RemoveContainer" containerID="d7c38dc9b79d192816da218f5657469421b2b8b0a48a391c9d1271e1f577b2e8" Feb 26 22:01:28 crc kubenswrapper[4910]: E0226 22:01:28.234880 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-74df89945f-lzjkm_openshift-authentication(3e885eeb-c983-47d5-8e67-53be03b78dcb)\"" pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm" podUID="3e885eeb-c983-47d5-8e67-53be03b78dcb" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.245428 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.245972 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.289010 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.290330 4910 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"config" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.326932 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.354352 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.356992 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.370615 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.426290 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.462747 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.492770 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.494357 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.526108 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.565619 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.765889 4910 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.817616 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.842976 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.859724 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.865398 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.868931 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 26 22:01:28 crc kubenswrapper[4910]: I0226 22:01:28.955704 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 26 22:01:29 crc kubenswrapper[4910]: I0226 22:01:29.065572 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 26 22:01:29 crc kubenswrapper[4910]: I0226 22:01:29.206519 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 26 22:01:29 crc kubenswrapper[4910]: I0226 22:01:29.265479 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 26 22:01:29 crc kubenswrapper[4910]: I0226 22:01:29.353513 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 26 22:01:29 crc kubenswrapper[4910]: I0226 22:01:29.379678 4910 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 26 22:01:29 crc kubenswrapper[4910]: I0226 22:01:29.634014 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 26 22:01:29 crc kubenswrapper[4910]: I0226 22:01:29.634029 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 26 22:01:29 crc kubenswrapper[4910]: I0226 22:01:29.634592 4910 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 26 22:01:29 crc kubenswrapper[4910]: I0226 22:01:29.657610 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 26 22:01:29 crc kubenswrapper[4910]: I0226 22:01:29.682714 4910 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 26 22:01:29 crc kubenswrapper[4910]: I0226 22:01:29.682943 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://b6bc19b116902a7f9f3a2149637da68a961e4f49fe980796c3aea82835548a48" gracePeriod=5 Feb 26 22:01:29 crc kubenswrapper[4910]: I0226 22:01:29.855501 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 26 22:01:29 crc kubenswrapper[4910]: I0226 22:01:29.858695 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 26 22:01:29 crc kubenswrapper[4910]: I0226 22:01:29.887083 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 26 22:01:29 crc 
kubenswrapper[4910]: I0226 22:01:29.888979 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 26 22:01:29 crc kubenswrapper[4910]: I0226 22:01:29.948965 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 26 22:01:30 crc kubenswrapper[4910]: I0226 22:01:30.272694 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 26 22:01:30 crc kubenswrapper[4910]: I0226 22:01:30.627836 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 26 22:01:30 crc kubenswrapper[4910]: I0226 22:01:30.780300 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 26 22:01:30 crc kubenswrapper[4910]: I0226 22:01:30.854517 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 26 22:01:30 crc kubenswrapper[4910]: I0226 22:01:30.865949 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 26 22:01:30 crc kubenswrapper[4910]: I0226 22:01:30.900209 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 26 22:01:30 crc kubenswrapper[4910]: I0226 22:01:30.900496 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 22:01:30 crc kubenswrapper[4910]: I0226 22:01:30.901243 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 26 22:01:30 crc kubenswrapper[4910]: I0226 22:01:30.901641 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 22:01:30 crc kubenswrapper[4910]: I0226 22:01:30.969449 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 26 22:01:31 crc kubenswrapper[4910]: I0226 22:01:31.149448 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 26 22:01:31 crc kubenswrapper[4910]: I0226 22:01:31.194079 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 26 22:01:31 crc kubenswrapper[4910]: I0226 22:01:31.868792 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 26 22:01:31 crc kubenswrapper[4910]: I0226 22:01:31.924581 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 26 22:01:32 crc kubenswrapper[4910]: I0226 22:01:32.141769 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 26 22:01:32 crc kubenswrapper[4910]: I0226 22:01:32.208109 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 26 22:01:32 crc kubenswrapper[4910]: I0226 22:01:32.301791 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 26 22:01:32 crc kubenswrapper[4910]: I0226 22:01:32.341705 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 26 22:01:32 crc kubenswrapper[4910]: I0226 22:01:32.401370 4910 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"kube-root-ca.crt" Feb 26 22:01:32 crc kubenswrapper[4910]: I0226 22:01:32.626407 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 26 22:01:32 crc kubenswrapper[4910]: I0226 22:01:32.848879 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 26 22:01:32 crc kubenswrapper[4910]: I0226 22:01:32.893753 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 26 22:01:33 crc kubenswrapper[4910]: I0226 22:01:33.245274 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 26 22:01:33 crc kubenswrapper[4910]: I0226 22:01:33.388194 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 26 22:01:34 crc kubenswrapper[4910]: I0226 22:01:34.593176 4910 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm" Feb 26 22:01:34 crc kubenswrapper[4910]: I0226 22:01:34.593746 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm" Feb 26 22:01:34 crc kubenswrapper[4910]: I0226 22:01:34.594457 4910 scope.go:117] "RemoveContainer" containerID="d7c38dc9b79d192816da218f5657469421b2b8b0a48a391c9d1271e1f577b2e8" Feb 26 22:01:34 crc kubenswrapper[4910]: E0226 22:01:34.594691 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-74df89945f-lzjkm_openshift-authentication(3e885eeb-c983-47d5-8e67-53be03b78dcb)\"" pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm" 
podUID="3e885eeb-c983-47d5-8e67-53be03b78dcb" Feb 26 22:01:35 crc kubenswrapper[4910]: I0226 22:01:35.281519 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 26 22:01:35 crc kubenswrapper[4910]: I0226 22:01:35.281862 4910 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="b6bc19b116902a7f9f3a2149637da68a961e4f49fe980796c3aea82835548a48" exitCode=137 Feb 26 22:01:35 crc kubenswrapper[4910]: I0226 22:01:35.282496 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4ee58c3bc0765e5ae6ea137afb42e5f82b3f9f9b7afeeb755f5574bf87adc84" Feb 26 22:01:35 crc kubenswrapper[4910]: I0226 22:01:35.282612 4910 scope.go:117] "RemoveContainer" containerID="d7c38dc9b79d192816da218f5657469421b2b8b0a48a391c9d1271e1f577b2e8" Feb 26 22:01:35 crc kubenswrapper[4910]: E0226 22:01:35.283683 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-74df89945f-lzjkm_openshift-authentication(3e885eeb-c983-47d5-8e67-53be03b78dcb)\"" pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm" podUID="3e885eeb-c983-47d5-8e67-53be03b78dcb" Feb 26 22:01:35 crc kubenswrapper[4910]: I0226 22:01:35.285104 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 26 22:01:35 crc kubenswrapper[4910]: I0226 22:01:35.285251 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 22:01:35 crc kubenswrapper[4910]: I0226 22:01:35.316773 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 22:01:35 crc kubenswrapper[4910]: I0226 22:01:35.316832 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 22:01:35 crc kubenswrapper[4910]: I0226 22:01:35.316858 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 22:01:35 crc kubenswrapper[4910]: I0226 22:01:35.316914 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 22:01:35 crc kubenswrapper[4910]: I0226 22:01:35.316938 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 22:01:35 crc kubenswrapper[4910]: I0226 22:01:35.329013 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: 
"manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 22:01:35 crc kubenswrapper[4910]: I0226 22:01:35.329344 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 22:01:35 crc kubenswrapper[4910]: I0226 22:01:35.329879 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 22:01:35 crc kubenswrapper[4910]: I0226 22:01:35.329999 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 22:01:35 crc kubenswrapper[4910]: I0226 22:01:35.336641 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 26 22:01:35 crc kubenswrapper[4910]: I0226 22:01:35.373387 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 22:01:35 crc kubenswrapper[4910]: I0226 22:01:35.418313 4910 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 26 22:01:35 crc kubenswrapper[4910]: I0226 22:01:35.418345 4910 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 26 22:01:35 crc kubenswrapper[4910]: I0226 22:01:35.418357 4910 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 26 22:01:35 crc kubenswrapper[4910]: I0226 22:01:35.418366 4910 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 26 22:01:35 crc kubenswrapper[4910]: I0226 22:01:35.418374 4910 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 26 22:01:35 crc kubenswrapper[4910]: I0226 22:01:35.910030 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 26 22:01:35 crc kubenswrapper[4910]: I0226 22:01:35.910529 4910 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 26 22:01:35 crc kubenswrapper[4910]: I0226 22:01:35.926211 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 26 22:01:35 crc kubenswrapper[4910]: 
I0226 22:01:35.926265 4910 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="30676aff-d1e2-4586-b860-03ec50fbc952" Feb 26 22:01:35 crc kubenswrapper[4910]: I0226 22:01:35.933925 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 26 22:01:35 crc kubenswrapper[4910]: I0226 22:01:35.933978 4910 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="30676aff-d1e2-4586-b860-03ec50fbc952" Feb 26 22:01:36 crc kubenswrapper[4910]: I0226 22:01:36.290031 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 22:01:46 crc kubenswrapper[4910]: I0226 22:01:46.402006 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c45c9bfb9-x48jj"] Feb 26 22:01:46 crc kubenswrapper[4910]: I0226 22:01:46.402919 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-c45c9bfb9-x48jj" podUID="1f794d15-85f3-4ba3-b722-5d84e523e33a" containerName="controller-manager" containerID="cri-o://60bfd233ca0e327853a836f90e3ad27cde708862f407927b398601e2ef1ed5d6" gracePeriod=30 Feb 26 22:01:46 crc kubenswrapper[4910]: I0226 22:01:46.410085 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt"] Feb 26 22:01:46 crc kubenswrapper[4910]: I0226 22:01:46.410461 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt" podUID="ff0bd3cc-237d-4953-9a02-8b479f59b01b" containerName="route-controller-manager" 
containerID="cri-o://fd2068e9a557867105ecc5cd6db11fb33db89afcf62c3536da302599119293d0" gracePeriod=30 Feb 26 22:01:46 crc kubenswrapper[4910]: I0226 22:01:46.944438 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c45c9bfb9-x48jj" Feb 26 22:01:46 crc kubenswrapper[4910]: I0226 22:01:46.950802 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt" Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.010454 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff0bd3cc-237d-4953-9a02-8b479f59b01b-config\") pod \"ff0bd3cc-237d-4953-9a02-8b479f59b01b\" (UID: \"ff0bd3cc-237d-4953-9a02-8b479f59b01b\") " Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.010551 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f794d15-85f3-4ba3-b722-5d84e523e33a-proxy-ca-bundles\") pod \"1f794d15-85f3-4ba3-b722-5d84e523e33a\" (UID: \"1f794d15-85f3-4ba3-b722-5d84e523e33a\") " Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.010612 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp9dr\" (UniqueName: \"kubernetes.io/projected/1f794d15-85f3-4ba3-b722-5d84e523e33a-kube-api-access-lp9dr\") pod \"1f794d15-85f3-4ba3-b722-5d84e523e33a\" (UID: \"1f794d15-85f3-4ba3-b722-5d84e523e33a\") " Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.010647 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f794d15-85f3-4ba3-b722-5d84e523e33a-config\") pod \"1f794d15-85f3-4ba3-b722-5d84e523e33a\" (UID: \"1f794d15-85f3-4ba3-b722-5d84e523e33a\") " Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 
22:01:47.010687 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff0bd3cc-237d-4953-9a02-8b479f59b01b-serving-cert\") pod \"ff0bd3cc-237d-4953-9a02-8b479f59b01b\" (UID: \"ff0bd3cc-237d-4953-9a02-8b479f59b01b\") " Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.010731 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff0bd3cc-237d-4953-9a02-8b479f59b01b-client-ca\") pod \"ff0bd3cc-237d-4953-9a02-8b479f59b01b\" (UID: \"ff0bd3cc-237d-4953-9a02-8b479f59b01b\") " Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.010774 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52mmw\" (UniqueName: \"kubernetes.io/projected/ff0bd3cc-237d-4953-9a02-8b479f59b01b-kube-api-access-52mmw\") pod \"ff0bd3cc-237d-4953-9a02-8b479f59b01b\" (UID: \"ff0bd3cc-237d-4953-9a02-8b479f59b01b\") " Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.010806 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f794d15-85f3-4ba3-b722-5d84e523e33a-serving-cert\") pod \"1f794d15-85f3-4ba3-b722-5d84e523e33a\" (UID: \"1f794d15-85f3-4ba3-b722-5d84e523e33a\") " Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.010829 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f794d15-85f3-4ba3-b722-5d84e523e33a-client-ca\") pod \"1f794d15-85f3-4ba3-b722-5d84e523e33a\" (UID: \"1f794d15-85f3-4ba3-b722-5d84e523e33a\") " Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.013255 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f794d15-85f3-4ba3-b722-5d84e523e33a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod 
"1f794d15-85f3-4ba3-b722-5d84e523e33a" (UID: "1f794d15-85f3-4ba3-b722-5d84e523e33a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.013463 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f794d15-85f3-4ba3-b722-5d84e523e33a-client-ca" (OuterVolumeSpecName: "client-ca") pod "1f794d15-85f3-4ba3-b722-5d84e523e33a" (UID: "1f794d15-85f3-4ba3-b722-5d84e523e33a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.013547 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff0bd3cc-237d-4953-9a02-8b479f59b01b-client-ca" (OuterVolumeSpecName: "client-ca") pod "ff0bd3cc-237d-4953-9a02-8b479f59b01b" (UID: "ff0bd3cc-237d-4953-9a02-8b479f59b01b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.013634 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff0bd3cc-237d-4953-9a02-8b479f59b01b-config" (OuterVolumeSpecName: "config") pod "ff0bd3cc-237d-4953-9a02-8b479f59b01b" (UID: "ff0bd3cc-237d-4953-9a02-8b479f59b01b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.013841 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f794d15-85f3-4ba3-b722-5d84e523e33a-config" (OuterVolumeSpecName: "config") pod "1f794d15-85f3-4ba3-b722-5d84e523e33a" (UID: "1f794d15-85f3-4ba3-b722-5d84e523e33a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.021149 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f794d15-85f3-4ba3-b722-5d84e523e33a-kube-api-access-lp9dr" (OuterVolumeSpecName: "kube-api-access-lp9dr") pod "1f794d15-85f3-4ba3-b722-5d84e523e33a" (UID: "1f794d15-85f3-4ba3-b722-5d84e523e33a"). InnerVolumeSpecName "kube-api-access-lp9dr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.021696 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f794d15-85f3-4ba3-b722-5d84e523e33a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1f794d15-85f3-4ba3-b722-5d84e523e33a" (UID: "1f794d15-85f3-4ba3-b722-5d84e523e33a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.026431 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff0bd3cc-237d-4953-9a02-8b479f59b01b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ff0bd3cc-237d-4953-9a02-8b479f59b01b" (UID: "ff0bd3cc-237d-4953-9a02-8b479f59b01b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.027258 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff0bd3cc-237d-4953-9a02-8b479f59b01b-kube-api-access-52mmw" (OuterVolumeSpecName: "kube-api-access-52mmw") pod "ff0bd3cc-237d-4953-9a02-8b479f59b01b" (UID: "ff0bd3cc-237d-4953-9a02-8b479f59b01b"). InnerVolumeSpecName "kube-api-access-52mmw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.113423 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52mmw\" (UniqueName: \"kubernetes.io/projected/ff0bd3cc-237d-4953-9a02-8b479f59b01b-kube-api-access-52mmw\") on node \"crc\" DevicePath \"\"" Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.113774 4910 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f794d15-85f3-4ba3-b722-5d84e523e33a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.113914 4910 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f794d15-85f3-4ba3-b722-5d84e523e33a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.114111 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff0bd3cc-237d-4953-9a02-8b479f59b01b-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.114332 4910 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f794d15-85f3-4ba3-b722-5d84e523e33a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.114534 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp9dr\" (UniqueName: \"kubernetes.io/projected/1f794d15-85f3-4ba3-b722-5d84e523e33a-kube-api-access-lp9dr\") on node \"crc\" DevicePath \"\"" Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.114707 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f794d15-85f3-4ba3-b722-5d84e523e33a-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.114845 4910 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff0bd3cc-237d-4953-9a02-8b479f59b01b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.114973 4910 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff0bd3cc-237d-4953-9a02-8b479f59b01b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.366080 4910 generic.go:334] "Generic (PLEG): container finished" podID="1f794d15-85f3-4ba3-b722-5d84e523e33a" containerID="60bfd233ca0e327853a836f90e3ad27cde708862f407927b398601e2ef1ed5d6" exitCode=0 Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.366251 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c45c9bfb9-x48jj" Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.366285 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c45c9bfb9-x48jj" event={"ID":"1f794d15-85f3-4ba3-b722-5d84e523e33a","Type":"ContainerDied","Data":"60bfd233ca0e327853a836f90e3ad27cde708862f407927b398601e2ef1ed5d6"} Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.366331 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c45c9bfb9-x48jj" event={"ID":"1f794d15-85f3-4ba3-b722-5d84e523e33a","Type":"ContainerDied","Data":"0e06c74ea37ae970a1ec34e3c89165a1d4603d7407f38f156638af3adb552e48"} Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.366364 4910 scope.go:117] "RemoveContainer" containerID="60bfd233ca0e327853a836f90e3ad27cde708862f407927b398601e2ef1ed5d6" Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.369673 4910 generic.go:334] "Generic (PLEG): container finished" podID="ff0bd3cc-237d-4953-9a02-8b479f59b01b" 
containerID="fd2068e9a557867105ecc5cd6db11fb33db89afcf62c3536da302599119293d0" exitCode=0 Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.369732 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt" event={"ID":"ff0bd3cc-237d-4953-9a02-8b479f59b01b","Type":"ContainerDied","Data":"fd2068e9a557867105ecc5cd6db11fb33db89afcf62c3536da302599119293d0"} Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.369768 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt" event={"ID":"ff0bd3cc-237d-4953-9a02-8b479f59b01b","Type":"ContainerDied","Data":"d01bdb6bdae5a6c110d3944472573e7970ec5fed5faa45170ad68bef7bcf39e1"} Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.370230 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt" Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.393477 4910 scope.go:117] "RemoveContainer" containerID="60bfd233ca0e327853a836f90e3ad27cde708862f407927b398601e2ef1ed5d6" Feb 26 22:01:47 crc kubenswrapper[4910]: E0226 22:01:47.394361 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60bfd233ca0e327853a836f90e3ad27cde708862f407927b398601e2ef1ed5d6\": container with ID starting with 60bfd233ca0e327853a836f90e3ad27cde708862f407927b398601e2ef1ed5d6 not found: ID does not exist" containerID="60bfd233ca0e327853a836f90e3ad27cde708862f407927b398601e2ef1ed5d6" Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.394451 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60bfd233ca0e327853a836f90e3ad27cde708862f407927b398601e2ef1ed5d6"} err="failed to get container status \"60bfd233ca0e327853a836f90e3ad27cde708862f407927b398601e2ef1ed5d6\": 
rpc error: code = NotFound desc = could not find container \"60bfd233ca0e327853a836f90e3ad27cde708862f407927b398601e2ef1ed5d6\": container with ID starting with 60bfd233ca0e327853a836f90e3ad27cde708862f407927b398601e2ef1ed5d6 not found: ID does not exist" Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.394522 4910 scope.go:117] "RemoveContainer" containerID="fd2068e9a557867105ecc5cd6db11fb33db89afcf62c3536da302599119293d0" Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.421352 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c45c9bfb9-x48jj"] Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.426771 4910 scope.go:117] "RemoveContainer" containerID="fd2068e9a557867105ecc5cd6db11fb33db89afcf62c3536da302599119293d0" Feb 26 22:01:47 crc kubenswrapper[4910]: E0226 22:01:47.427226 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd2068e9a557867105ecc5cd6db11fb33db89afcf62c3536da302599119293d0\": container with ID starting with fd2068e9a557867105ecc5cd6db11fb33db89afcf62c3536da302599119293d0 not found: ID does not exist" containerID="fd2068e9a557867105ecc5cd6db11fb33db89afcf62c3536da302599119293d0" Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.427253 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd2068e9a557867105ecc5cd6db11fb33db89afcf62c3536da302599119293d0"} err="failed to get container status \"fd2068e9a557867105ecc5cd6db11fb33db89afcf62c3536da302599119293d0\": rpc error: code = NotFound desc = could not find container \"fd2068e9a557867105ecc5cd6db11fb33db89afcf62c3536da302599119293d0\": container with ID starting with fd2068e9a557867105ecc5cd6db11fb33db89afcf62c3536da302599119293d0 not found: ID does not exist" Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.432276 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-controller-manager/controller-manager-c45c9bfb9-x48jj"] Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.439455 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt"] Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.446408 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cdb564b9d-cvkvt"] Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.911466 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f794d15-85f3-4ba3-b722-5d84e523e33a" path="/var/lib/kubelet/pods/1f794d15-85f3-4ba3-b722-5d84e523e33a/volumes" Feb 26 22:01:47 crc kubenswrapper[4910]: I0226 22:01:47.912521 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff0bd3cc-237d-4953-9a02-8b479f59b01b" path="/var/lib/kubelet/pods/ff0bd3cc-237d-4953-9a02-8b479f59b01b/volumes" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.457497 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c9975c46f-xwgvz"] Feb 26 22:01:48 crc kubenswrapper[4910]: E0226 22:01:48.459331 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.459540 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 26 22:01:48 crc kubenswrapper[4910]: E0226 22:01:48.459769 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff0bd3cc-237d-4953-9a02-8b479f59b01b" containerName="route-controller-manager" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.459955 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff0bd3cc-237d-4953-9a02-8b479f59b01b" containerName="route-controller-manager" Feb 26 22:01:48 crc 
kubenswrapper[4910]: E0226 22:01:48.460217 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f794d15-85f3-4ba3-b722-5d84e523e33a" containerName="controller-manager" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.460454 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f794d15-85f3-4ba3-b722-5d84e523e33a" containerName="controller-manager" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.460915 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f794d15-85f3-4ba3-b722-5d84e523e33a" containerName="controller-manager" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.461125 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff0bd3cc-237d-4953-9a02-8b479f59b01b" containerName="route-controller-manager" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.461391 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.462152 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c9975c46f-xwgvz" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.469621 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.469917 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.469920 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.470483 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.470927 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.473002 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh"] Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.473314 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.473908 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.477113 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.477766 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.478047 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.479221 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.479230 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.479974 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.481840 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.483070 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh"] Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.496354 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c9975c46f-xwgvz"] Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.537940 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-serving-cert\") pod \"controller-manager-5c9975c46f-xwgvz\" (UID: \"e3eaa870-a2ff-456b-89f5-6c4ee1a130e6\") " pod="openshift-controller-manager/controller-manager-5c9975c46f-xwgvz" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.538019 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-client-ca\") pod \"controller-manager-5c9975c46f-xwgvz\" (UID: \"e3eaa870-a2ff-456b-89f5-6c4ee1a130e6\") " pod="openshift-controller-manager/controller-manager-5c9975c46f-xwgvz" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.538095 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96af1e8d-47e3-4e83-b034-cba3a91cbaec-config\") pod \"route-controller-manager-9f4fd4c8-xvjvh\" (UID: \"96af1e8d-47e3-4e83-b034-cba3a91cbaec\") " pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.538131 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-proxy-ca-bundles\") pod \"controller-manager-5c9975c46f-xwgvz\" (UID: \"e3eaa870-a2ff-456b-89f5-6c4ee1a130e6\") " pod="openshift-controller-manager/controller-manager-5c9975c46f-xwgvz" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.538236 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64nq7\" (UniqueName: \"kubernetes.io/projected/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-kube-api-access-64nq7\") pod \"controller-manager-5c9975c46f-xwgvz\" (UID: \"e3eaa870-a2ff-456b-89f5-6c4ee1a130e6\") " 
pod="openshift-controller-manager/controller-manager-5c9975c46f-xwgvz" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.538269 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96af1e8d-47e3-4e83-b034-cba3a91cbaec-client-ca\") pod \"route-controller-manager-9f4fd4c8-xvjvh\" (UID: \"96af1e8d-47e3-4e83-b034-cba3a91cbaec\") " pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.538345 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96af1e8d-47e3-4e83-b034-cba3a91cbaec-serving-cert\") pod \"route-controller-manager-9f4fd4c8-xvjvh\" (UID: \"96af1e8d-47e3-4e83-b034-cba3a91cbaec\") " pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.538406 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnl62\" (UniqueName: \"kubernetes.io/projected/96af1e8d-47e3-4e83-b034-cba3a91cbaec-kube-api-access-rnl62\") pod \"route-controller-manager-9f4fd4c8-xvjvh\" (UID: \"96af1e8d-47e3-4e83-b034-cba3a91cbaec\") " pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.538443 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-config\") pod \"controller-manager-5c9975c46f-xwgvz\" (UID: \"e3eaa870-a2ff-456b-89f5-6c4ee1a130e6\") " pod="openshift-controller-manager/controller-manager-5c9975c46f-xwgvz" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.639375 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-64nq7\" (UniqueName: \"kubernetes.io/projected/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-kube-api-access-64nq7\") pod \"controller-manager-5c9975c46f-xwgvz\" (UID: \"e3eaa870-a2ff-456b-89f5-6c4ee1a130e6\") " pod="openshift-controller-manager/controller-manager-5c9975c46f-xwgvz" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.639435 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96af1e8d-47e3-4e83-b034-cba3a91cbaec-client-ca\") pod \"route-controller-manager-9f4fd4c8-xvjvh\" (UID: \"96af1e8d-47e3-4e83-b034-cba3a91cbaec\") " pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.639521 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96af1e8d-47e3-4e83-b034-cba3a91cbaec-serving-cert\") pod \"route-controller-manager-9f4fd4c8-xvjvh\" (UID: \"96af1e8d-47e3-4e83-b034-cba3a91cbaec\") " pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.639560 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnl62\" (UniqueName: \"kubernetes.io/projected/96af1e8d-47e3-4e83-b034-cba3a91cbaec-kube-api-access-rnl62\") pod \"route-controller-manager-9f4fd4c8-xvjvh\" (UID: \"96af1e8d-47e3-4e83-b034-cba3a91cbaec\") " pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.639591 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-config\") pod \"controller-manager-5c9975c46f-xwgvz\" (UID: \"e3eaa870-a2ff-456b-89f5-6c4ee1a130e6\") " 
pod="openshift-controller-manager/controller-manager-5c9975c46f-xwgvz" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.639672 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-serving-cert\") pod \"controller-manager-5c9975c46f-xwgvz\" (UID: \"e3eaa870-a2ff-456b-89f5-6c4ee1a130e6\") " pod="openshift-controller-manager/controller-manager-5c9975c46f-xwgvz" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.639704 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-client-ca\") pod \"controller-manager-5c9975c46f-xwgvz\" (UID: \"e3eaa870-a2ff-456b-89f5-6c4ee1a130e6\") " pod="openshift-controller-manager/controller-manager-5c9975c46f-xwgvz" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.639747 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96af1e8d-47e3-4e83-b034-cba3a91cbaec-config\") pod \"route-controller-manager-9f4fd4c8-xvjvh\" (UID: \"96af1e8d-47e3-4e83-b034-cba3a91cbaec\") " pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.639807 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-proxy-ca-bundles\") pod \"controller-manager-5c9975c46f-xwgvz\" (UID: \"e3eaa870-a2ff-456b-89f5-6c4ee1a130e6\") " pod="openshift-controller-manager/controller-manager-5c9975c46f-xwgvz" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.642025 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-proxy-ca-bundles\") 
pod \"controller-manager-5c9975c46f-xwgvz\" (UID: \"e3eaa870-a2ff-456b-89f5-6c4ee1a130e6\") " pod="openshift-controller-manager/controller-manager-5c9975c46f-xwgvz" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.642642 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96af1e8d-47e3-4e83-b034-cba3a91cbaec-config\") pod \"route-controller-manager-9f4fd4c8-xvjvh\" (UID: \"96af1e8d-47e3-4e83-b034-cba3a91cbaec\") " pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.642854 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-config\") pod \"controller-manager-5c9975c46f-xwgvz\" (UID: \"e3eaa870-a2ff-456b-89f5-6c4ee1a130e6\") " pod="openshift-controller-manager/controller-manager-5c9975c46f-xwgvz" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.643202 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96af1e8d-47e3-4e83-b034-cba3a91cbaec-client-ca\") pod \"route-controller-manager-9f4fd4c8-xvjvh\" (UID: \"96af1e8d-47e3-4e83-b034-cba3a91cbaec\") " pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.643226 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-client-ca\") pod \"controller-manager-5c9975c46f-xwgvz\" (UID: \"e3eaa870-a2ff-456b-89f5-6c4ee1a130e6\") " pod="openshift-controller-manager/controller-manager-5c9975c46f-xwgvz" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.646433 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-serving-cert\") pod \"controller-manager-5c9975c46f-xwgvz\" (UID: \"e3eaa870-a2ff-456b-89f5-6c4ee1a130e6\") " pod="openshift-controller-manager/controller-manager-5c9975c46f-xwgvz" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.650120 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96af1e8d-47e3-4e83-b034-cba3a91cbaec-serving-cert\") pod \"route-controller-manager-9f4fd4c8-xvjvh\" (UID: \"96af1e8d-47e3-4e83-b034-cba3a91cbaec\") " pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.672301 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64nq7\" (UniqueName: \"kubernetes.io/projected/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-kube-api-access-64nq7\") pod \"controller-manager-5c9975c46f-xwgvz\" (UID: \"e3eaa870-a2ff-456b-89f5-6c4ee1a130e6\") " pod="openshift-controller-manager/controller-manager-5c9975c46f-xwgvz" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.675311 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnl62\" (UniqueName: \"kubernetes.io/projected/96af1e8d-47e3-4e83-b034-cba3a91cbaec-kube-api-access-rnl62\") pod \"route-controller-manager-9f4fd4c8-xvjvh\" (UID: \"96af1e8d-47e3-4e83-b034-cba3a91cbaec\") " pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.803189 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c9975c46f-xwgvz" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.822493 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh" Feb 26 22:01:48 crc kubenswrapper[4910]: I0226 22:01:48.902915 4910 scope.go:117] "RemoveContainer" containerID="d7c38dc9b79d192816da218f5657469421b2b8b0a48a391c9d1271e1f577b2e8" Feb 26 22:01:49 crc kubenswrapper[4910]: I0226 22:01:49.325470 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c9975c46f-xwgvz"] Feb 26 22:01:49 crc kubenswrapper[4910]: W0226 22:01:49.332762 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3eaa870_a2ff_456b_89f5_6c4ee1a130e6.slice/crio-50ec245674f0e95d60b6d78c3c596f95996db56eddc5e818465b8f14d35e53e1 WatchSource:0}: Error finding container 50ec245674f0e95d60b6d78c3c596f95996db56eddc5e818465b8f14d35e53e1: Status 404 returned error can't find the container with id 50ec245674f0e95d60b6d78c3c596f95996db56eddc5e818465b8f14d35e53e1 Feb 26 22:01:49 crc kubenswrapper[4910]: I0226 22:01:49.341365 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh"] Feb 26 22:01:49 crc kubenswrapper[4910]: W0226 22:01:49.347148 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96af1e8d_47e3_4e83_b034_cba3a91cbaec.slice/crio-6196b7d7f42c32a5e3f8e7e1d2342670355201db89dfc9c22f4347e6b555f3a6 WatchSource:0}: Error finding container 6196b7d7f42c32a5e3f8e7e1d2342670355201db89dfc9c22f4347e6b555f3a6: Status 404 returned error can't find the container with id 6196b7d7f42c32a5e3f8e7e1d2342670355201db89dfc9c22f4347e6b555f3a6 Feb 26 22:01:49 crc kubenswrapper[4910]: I0226 22:01:49.390937 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-74df89945f-lzjkm_3e885eeb-c983-47d5-8e67-53be03b78dcb/oauth-openshift/1.log" 
Feb 26 22:01:49 crc kubenswrapper[4910]: I0226 22:01:49.391356 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm" event={"ID":"3e885eeb-c983-47d5-8e67-53be03b78dcb","Type":"ContainerStarted","Data":"c2309377803d76702965039e9782cbc182f700c8b787f02d82ecdea054e1dbe5"} Feb 26 22:01:49 crc kubenswrapper[4910]: I0226 22:01:49.391796 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm" Feb 26 22:01:49 crc kubenswrapper[4910]: I0226 22:01:49.395206 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c9975c46f-xwgvz" event={"ID":"e3eaa870-a2ff-456b-89f5-6c4ee1a130e6","Type":"ContainerStarted","Data":"50ec245674f0e95d60b6d78c3c596f95996db56eddc5e818465b8f14d35e53e1"} Feb 26 22:01:49 crc kubenswrapper[4910]: I0226 22:01:49.397340 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh" event={"ID":"96af1e8d-47e3-4e83-b034-cba3a91cbaec","Type":"ContainerStarted","Data":"6196b7d7f42c32a5e3f8e7e1d2342670355201db89dfc9c22f4347e6b555f3a6"} Feb 26 22:01:49 crc kubenswrapper[4910]: I0226 22:01:49.705325 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-74df89945f-lzjkm" Feb 26 22:01:50 crc kubenswrapper[4910]: I0226 22:01:50.406260 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c9975c46f-xwgvz" event={"ID":"e3eaa870-a2ff-456b-89f5-6c4ee1a130e6","Type":"ContainerStarted","Data":"6a320409fdcc6128c973fb43df165f467acccfe05aa51c3652298406a6f621f2"} Feb 26 22:01:50 crc kubenswrapper[4910]: I0226 22:01:50.408023 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5c9975c46f-xwgvz" Feb 26 22:01:50 crc 
kubenswrapper[4910]: I0226 22:01:50.411384 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh" event={"ID":"96af1e8d-47e3-4e83-b034-cba3a91cbaec","Type":"ContainerStarted","Data":"a03248db5af56753410baf1f1fffc160cd68a7777c03f360b03aeb351eff0a37"} Feb 26 22:01:50 crc kubenswrapper[4910]: I0226 22:01:50.411431 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh" Feb 26 22:01:50 crc kubenswrapper[4910]: I0226 22:01:50.418825 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh" Feb 26 22:01:50 crc kubenswrapper[4910]: I0226 22:01:50.432783 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c9975c46f-xwgvz" Feb 26 22:01:50 crc kubenswrapper[4910]: I0226 22:01:50.437002 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c9975c46f-xwgvz" podStartSLOduration=4.436976051 podStartE2EDuration="4.436976051s" podCreationTimestamp="2026-02-26 22:01:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:01:50.427405989 +0000 UTC m=+395.506896560" watchObservedRunningTime="2026-02-26 22:01:50.436976051 +0000 UTC m=+395.516466622" Feb 26 22:01:50 crc kubenswrapper[4910]: I0226 22:01:50.460281 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh" podStartSLOduration=4.46025011 podStartE2EDuration="4.46025011s" podCreationTimestamp="2026-02-26 22:01:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-26 22:01:50.448626821 +0000 UTC m=+395.528117432" watchObservedRunningTime="2026-02-26 22:01:50.46025011 +0000 UTC m=+395.539740691" Feb 26 22:01:57 crc kubenswrapper[4910]: I0226 22:01:57.740312 4910 generic.go:334] "Generic (PLEG): container finished" podID="dbd9e8a9-2637-4ef5-b24e-fd2d08788451" containerID="fdb8b0a63263575a5c0facbd5e8ecd233b1e0fc8e32423534e4cbcc0f407e36e" exitCode=0 Feb 26 22:01:57 crc kubenswrapper[4910]: I0226 22:01:57.740372 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-q2jtw" event={"ID":"dbd9e8a9-2637-4ef5-b24e-fd2d08788451","Type":"ContainerDied","Data":"fdb8b0a63263575a5c0facbd5e8ecd233b1e0fc8e32423534e4cbcc0f407e36e"} Feb 26 22:01:57 crc kubenswrapper[4910]: I0226 22:01:57.741138 4910 scope.go:117] "RemoveContainer" containerID="fdb8b0a63263575a5c0facbd5e8ecd233b1e0fc8e32423534e4cbcc0f407e36e" Feb 26 22:01:58 crc kubenswrapper[4910]: I0226 22:01:58.751415 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-q2jtw" event={"ID":"dbd9e8a9-2637-4ef5-b24e-fd2d08788451","Type":"ContainerStarted","Data":"05ed42c662abf017f810acece6c18dd30dd7d4df6d1f899ae804a7be774b8a01"} Feb 26 22:01:58 crc kubenswrapper[4910]: I0226 22:01:58.751848 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-q2jtw" Feb 26 22:01:58 crc kubenswrapper[4910]: I0226 22:01:58.754647 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-q2jtw" Feb 26 22:02:00 crc kubenswrapper[4910]: I0226 22:02:00.202404 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535722-svlnd"] Feb 26 22:02:00 crc kubenswrapper[4910]: I0226 22:02:00.203790 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535722-svlnd" Feb 26 22:02:00 crc kubenswrapper[4910]: I0226 22:02:00.205964 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 22:02:00 crc kubenswrapper[4910]: I0226 22:02:00.208246 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 22:02:00 crc kubenswrapper[4910]: I0226 22:02:00.208329 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 22:02:00 crc kubenswrapper[4910]: I0226 22:02:00.215273 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535722-svlnd"] Feb 26 22:02:00 crc kubenswrapper[4910]: I0226 22:02:00.312119 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft6v4\" (UniqueName: \"kubernetes.io/projected/d3baaa77-7e09-4a8f-916d-eb0050324f90-kube-api-access-ft6v4\") pod \"auto-csr-approver-29535722-svlnd\" (UID: \"d3baaa77-7e09-4a8f-916d-eb0050324f90\") " pod="openshift-infra/auto-csr-approver-29535722-svlnd" Feb 26 22:02:00 crc kubenswrapper[4910]: I0226 22:02:00.413583 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft6v4\" (UniqueName: \"kubernetes.io/projected/d3baaa77-7e09-4a8f-916d-eb0050324f90-kube-api-access-ft6v4\") pod \"auto-csr-approver-29535722-svlnd\" (UID: \"d3baaa77-7e09-4a8f-916d-eb0050324f90\") " pod="openshift-infra/auto-csr-approver-29535722-svlnd" Feb 26 22:02:00 crc kubenswrapper[4910]: I0226 22:02:00.434673 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft6v4\" (UniqueName: \"kubernetes.io/projected/d3baaa77-7e09-4a8f-916d-eb0050324f90-kube-api-access-ft6v4\") pod \"auto-csr-approver-29535722-svlnd\" (UID: \"d3baaa77-7e09-4a8f-916d-eb0050324f90\") " 
pod="openshift-infra/auto-csr-approver-29535722-svlnd" Feb 26 22:02:00 crc kubenswrapper[4910]: I0226 22:02:00.571665 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535722-svlnd" Feb 26 22:02:01 crc kubenswrapper[4910]: I0226 22:02:01.029847 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535722-svlnd"] Feb 26 22:02:01 crc kubenswrapper[4910]: W0226 22:02:01.040126 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3baaa77_7e09_4a8f_916d_eb0050324f90.slice/crio-1255c66d0352ece8c58b71d45095ee3096410b3adb65d2d0961193a03a098e30 WatchSource:0}: Error finding container 1255c66d0352ece8c58b71d45095ee3096410b3adb65d2d0961193a03a098e30: Status 404 returned error can't find the container with id 1255c66d0352ece8c58b71d45095ee3096410b3adb65d2d0961193a03a098e30 Feb 26 22:02:01 crc kubenswrapper[4910]: I0226 22:02:01.782687 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535722-svlnd" event={"ID":"d3baaa77-7e09-4a8f-916d-eb0050324f90","Type":"ContainerStarted","Data":"1255c66d0352ece8c58b71d45095ee3096410b3adb65d2d0961193a03a098e30"} Feb 26 22:02:03 crc kubenswrapper[4910]: I0226 22:02:03.807033 4910 generic.go:334] "Generic (PLEG): container finished" podID="d3baaa77-7e09-4a8f-916d-eb0050324f90" containerID="08d0726daa5a9f9f60fee46195a703703e49b83d5ee2989aab8d4f31a2e7be69" exitCode=0 Feb 26 22:02:03 crc kubenswrapper[4910]: I0226 22:02:03.807207 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535722-svlnd" event={"ID":"d3baaa77-7e09-4a8f-916d-eb0050324f90","Type":"ContainerDied","Data":"08d0726daa5a9f9f60fee46195a703703e49b83d5ee2989aab8d4f31a2e7be69"} Feb 26 22:02:05 crc kubenswrapper[4910]: I0226 22:02:05.174908 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535722-svlnd" Feb 26 22:02:05 crc kubenswrapper[4910]: I0226 22:02:05.296954 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft6v4\" (UniqueName: \"kubernetes.io/projected/d3baaa77-7e09-4a8f-916d-eb0050324f90-kube-api-access-ft6v4\") pod \"d3baaa77-7e09-4a8f-916d-eb0050324f90\" (UID: \"d3baaa77-7e09-4a8f-916d-eb0050324f90\") " Feb 26 22:02:05 crc kubenswrapper[4910]: I0226 22:02:05.303617 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3baaa77-7e09-4a8f-916d-eb0050324f90-kube-api-access-ft6v4" (OuterVolumeSpecName: "kube-api-access-ft6v4") pod "d3baaa77-7e09-4a8f-916d-eb0050324f90" (UID: "d3baaa77-7e09-4a8f-916d-eb0050324f90"). InnerVolumeSpecName "kube-api-access-ft6v4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:02:05 crc kubenswrapper[4910]: I0226 22:02:05.398958 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft6v4\" (UniqueName: \"kubernetes.io/projected/d3baaa77-7e09-4a8f-916d-eb0050324f90-kube-api-access-ft6v4\") on node \"crc\" DevicePath \"\"" Feb 26 22:02:05 crc kubenswrapper[4910]: I0226 22:02:05.820458 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535722-svlnd" event={"ID":"d3baaa77-7e09-4a8f-916d-eb0050324f90","Type":"ContainerDied","Data":"1255c66d0352ece8c58b71d45095ee3096410b3adb65d2d0961193a03a098e30"} Feb 26 22:02:05 crc kubenswrapper[4910]: I0226 22:02:05.820767 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1255c66d0352ece8c58b71d45095ee3096410b3adb65d2d0961193a03a098e30" Feb 26 22:02:05 crc kubenswrapper[4910]: I0226 22:02:05.820516 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535722-svlnd" Feb 26 22:02:06 crc kubenswrapper[4910]: I0226 22:02:06.331216 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c9975c46f-xwgvz"] Feb 26 22:02:06 crc kubenswrapper[4910]: I0226 22:02:06.331418 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5c9975c46f-xwgvz" podUID="e3eaa870-a2ff-456b-89f5-6c4ee1a130e6" containerName="controller-manager" containerID="cri-o://6a320409fdcc6128c973fb43df165f467acccfe05aa51c3652298406a6f621f2" gracePeriod=30 Feb 26 22:02:06 crc kubenswrapper[4910]: I0226 22:02:06.341814 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh"] Feb 26 22:02:06 crc kubenswrapper[4910]: I0226 22:02:06.341974 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh" podUID="96af1e8d-47e3-4e83-b034-cba3a91cbaec" containerName="route-controller-manager" containerID="cri-o://a03248db5af56753410baf1f1fffc160cd68a7777c03f360b03aeb351eff0a37" gracePeriod=30 Feb 26 22:02:06 crc kubenswrapper[4910]: I0226 22:02:06.826414 4910 generic.go:334] "Generic (PLEG): container finished" podID="e3eaa870-a2ff-456b-89f5-6c4ee1a130e6" containerID="6a320409fdcc6128c973fb43df165f467acccfe05aa51c3652298406a6f621f2" exitCode=0 Feb 26 22:02:06 crc kubenswrapper[4910]: I0226 22:02:06.826482 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c9975c46f-xwgvz" event={"ID":"e3eaa870-a2ff-456b-89f5-6c4ee1a130e6","Type":"ContainerDied","Data":"6a320409fdcc6128c973fb43df165f467acccfe05aa51c3652298406a6f621f2"} Feb 26 22:02:06 crc kubenswrapper[4910]: I0226 22:02:06.828262 4910 generic.go:334] "Generic (PLEG): container finished" 
podID="96af1e8d-47e3-4e83-b034-cba3a91cbaec" containerID="a03248db5af56753410baf1f1fffc160cd68a7777c03f360b03aeb351eff0a37" exitCode=0 Feb 26 22:02:06 crc kubenswrapper[4910]: I0226 22:02:06.828314 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh" event={"ID":"96af1e8d-47e3-4e83-b034-cba3a91cbaec","Type":"ContainerDied","Data":"a03248db5af56753410baf1f1fffc160cd68a7777c03f360b03aeb351eff0a37"} Feb 26 22:02:06 crc kubenswrapper[4910]: I0226 22:02:06.890789 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh" Feb 26 22:02:06 crc kubenswrapper[4910]: I0226 22:02:06.943299 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c9975c46f-xwgvz" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.016772 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96af1e8d-47e3-4e83-b034-cba3a91cbaec-client-ca\") pod \"96af1e8d-47e3-4e83-b034-cba3a91cbaec\" (UID: \"96af1e8d-47e3-4e83-b034-cba3a91cbaec\") " Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.016869 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-serving-cert\") pod \"e3eaa870-a2ff-456b-89f5-6c4ee1a130e6\" (UID: \"e3eaa870-a2ff-456b-89f5-6c4ee1a130e6\") " Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.016926 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-proxy-ca-bundles\") pod \"e3eaa870-a2ff-456b-89f5-6c4ee1a130e6\" (UID: \"e3eaa870-a2ff-456b-89f5-6c4ee1a130e6\") " Feb 26 22:02:07 crc 
kubenswrapper[4910]: I0226 22:02:07.017020 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96af1e8d-47e3-4e83-b034-cba3a91cbaec-config\") pod \"96af1e8d-47e3-4e83-b034-cba3a91cbaec\" (UID: \"96af1e8d-47e3-4e83-b034-cba3a91cbaec\") " Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.017060 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-client-ca\") pod \"e3eaa870-a2ff-456b-89f5-6c4ee1a130e6\" (UID: \"e3eaa870-a2ff-456b-89f5-6c4ee1a130e6\") " Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.017118 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64nq7\" (UniqueName: \"kubernetes.io/projected/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-kube-api-access-64nq7\") pod \"e3eaa870-a2ff-456b-89f5-6c4ee1a130e6\" (UID: \"e3eaa870-a2ff-456b-89f5-6c4ee1a130e6\") " Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.017191 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-config\") pod \"e3eaa870-a2ff-456b-89f5-6c4ee1a130e6\" (UID: \"e3eaa870-a2ff-456b-89f5-6c4ee1a130e6\") " Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.017228 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnl62\" (UniqueName: \"kubernetes.io/projected/96af1e8d-47e3-4e83-b034-cba3a91cbaec-kube-api-access-rnl62\") pod \"96af1e8d-47e3-4e83-b034-cba3a91cbaec\" (UID: \"96af1e8d-47e3-4e83-b034-cba3a91cbaec\") " Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.017288 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96af1e8d-47e3-4e83-b034-cba3a91cbaec-serving-cert\") pod 
\"96af1e8d-47e3-4e83-b034-cba3a91cbaec\" (UID: \"96af1e8d-47e3-4e83-b034-cba3a91cbaec\") " Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.017696 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96af1e8d-47e3-4e83-b034-cba3a91cbaec-config" (OuterVolumeSpecName: "config") pod "96af1e8d-47e3-4e83-b034-cba3a91cbaec" (UID: "96af1e8d-47e3-4e83-b034-cba3a91cbaec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.017706 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96af1e8d-47e3-4e83-b034-cba3a91cbaec-client-ca" (OuterVolumeSpecName: "client-ca") pod "96af1e8d-47e3-4e83-b034-cba3a91cbaec" (UID: "96af1e8d-47e3-4e83-b034-cba3a91cbaec"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.017813 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e3eaa870-a2ff-456b-89f5-6c4ee1a130e6" (UID: "e3eaa870-a2ff-456b-89f5-6c4ee1a130e6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.018302 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-client-ca" (OuterVolumeSpecName: "client-ca") pod "e3eaa870-a2ff-456b-89f5-6c4ee1a130e6" (UID: "e3eaa870-a2ff-456b-89f5-6c4ee1a130e6"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.018434 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-config" (OuterVolumeSpecName: "config") pod "e3eaa870-a2ff-456b-89f5-6c4ee1a130e6" (UID: "e3eaa870-a2ff-456b-89f5-6c4ee1a130e6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.021129 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e3eaa870-a2ff-456b-89f5-6c4ee1a130e6" (UID: "e3eaa870-a2ff-456b-89f5-6c4ee1a130e6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.021741 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-kube-api-access-64nq7" (OuterVolumeSpecName: "kube-api-access-64nq7") pod "e3eaa870-a2ff-456b-89f5-6c4ee1a130e6" (UID: "e3eaa870-a2ff-456b-89f5-6c4ee1a130e6"). InnerVolumeSpecName "kube-api-access-64nq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.022205 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96af1e8d-47e3-4e83-b034-cba3a91cbaec-kube-api-access-rnl62" (OuterVolumeSpecName: "kube-api-access-rnl62") pod "96af1e8d-47e3-4e83-b034-cba3a91cbaec" (UID: "96af1e8d-47e3-4e83-b034-cba3a91cbaec"). InnerVolumeSpecName "kube-api-access-rnl62". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.024250 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96af1e8d-47e3-4e83-b034-cba3a91cbaec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "96af1e8d-47e3-4e83-b034-cba3a91cbaec" (UID: "96af1e8d-47e3-4e83-b034-cba3a91cbaec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.119240 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96af1e8d-47e3-4e83-b034-cba3a91cbaec-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.119305 4910 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.119324 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64nq7\" (UniqueName: \"kubernetes.io/projected/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-kube-api-access-64nq7\") on node \"crc\" DevicePath \"\"" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.119346 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.119368 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnl62\" (UniqueName: \"kubernetes.io/projected/96af1e8d-47e3-4e83-b034-cba3a91cbaec-kube-api-access-rnl62\") on node \"crc\" DevicePath \"\"" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.119385 4910 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/96af1e8d-47e3-4e83-b034-cba3a91cbaec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.119402 4910 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96af1e8d-47e3-4e83-b034-cba3a91cbaec-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.119420 4910 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.119436 4910 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.466427 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw"] Feb 26 22:02:07 crc kubenswrapper[4910]: E0226 22:02:07.467059 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96af1e8d-47e3-4e83-b034-cba3a91cbaec" containerName="route-controller-manager" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.467079 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="96af1e8d-47e3-4e83-b034-cba3a91cbaec" containerName="route-controller-manager" Feb 26 22:02:07 crc kubenswrapper[4910]: E0226 22:02:07.467109 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3eaa870-a2ff-456b-89f5-6c4ee1a130e6" containerName="controller-manager" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.467122 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3eaa870-a2ff-456b-89f5-6c4ee1a130e6" containerName="controller-manager" Feb 26 22:02:07 crc kubenswrapper[4910]: E0226 22:02:07.467148 4910 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3baaa77-7e09-4a8f-916d-eb0050324f90" containerName="oc" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.467189 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3baaa77-7e09-4a8f-916d-eb0050324f90" containerName="oc" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.467338 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3eaa870-a2ff-456b-89f5-6c4ee1a130e6" containerName="controller-manager" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.467364 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3baaa77-7e09-4a8f-916d-eb0050324f90" containerName="oc" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.467384 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="96af1e8d-47e3-4e83-b034-cba3a91cbaec" containerName="route-controller-manager" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.467964 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.478836 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s"] Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.479859 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.486548 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw"] Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.490565 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s"] Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.625548 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/854ee06a-a97a-4438-b610-6ef2d640344e-serving-cert\") pod \"route-controller-manager-7b8459dfc9-c4vrw\" (UID: \"854ee06a-a97a-4438-b610-6ef2d640344e\") " pod="openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.625731 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/854ee06a-a97a-4438-b610-6ef2d640344e-config\") pod \"route-controller-manager-7b8459dfc9-c4vrw\" (UID: \"854ee06a-a97a-4438-b610-6ef2d640344e\") " pod="openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.625818 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/854ee06a-a97a-4438-b610-6ef2d640344e-client-ca\") pod \"route-controller-manager-7b8459dfc9-c4vrw\" (UID: \"854ee06a-a97a-4438-b610-6ef2d640344e\") " pod="openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.625925 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/35249f9b-2f93-4676-979c-5d0a19af3f98-client-ca\") pod \"controller-manager-5bfb6c7bd4-d6g8s\" (UID: \"35249f9b-2f93-4676-979c-5d0a19af3f98\") " pod="openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.625957 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35249f9b-2f93-4676-979c-5d0a19af3f98-serving-cert\") pod \"controller-manager-5bfb6c7bd4-d6g8s\" (UID: \"35249f9b-2f93-4676-979c-5d0a19af3f98\") " pod="openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.626124 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh2bb\" (UniqueName: \"kubernetes.io/projected/854ee06a-a97a-4438-b610-6ef2d640344e-kube-api-access-fh2bb\") pod \"route-controller-manager-7b8459dfc9-c4vrw\" (UID: \"854ee06a-a97a-4438-b610-6ef2d640344e\") " pod="openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.626201 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg5b6\" (UniqueName: \"kubernetes.io/projected/35249f9b-2f93-4676-979c-5d0a19af3f98-kube-api-access-gg5b6\") pod \"controller-manager-5bfb6c7bd4-d6g8s\" (UID: \"35249f9b-2f93-4676-979c-5d0a19af3f98\") " pod="openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.626261 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35249f9b-2f93-4676-979c-5d0a19af3f98-config\") pod \"controller-manager-5bfb6c7bd4-d6g8s\" (UID: \"35249f9b-2f93-4676-979c-5d0a19af3f98\") " 
pod="openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.626379 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/35249f9b-2f93-4676-979c-5d0a19af3f98-proxy-ca-bundles\") pod \"controller-manager-5bfb6c7bd4-d6g8s\" (UID: \"35249f9b-2f93-4676-979c-5d0a19af3f98\") " pod="openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.727021 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/854ee06a-a97a-4438-b610-6ef2d640344e-config\") pod \"route-controller-manager-7b8459dfc9-c4vrw\" (UID: \"854ee06a-a97a-4438-b610-6ef2d640344e\") " pod="openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.727060 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/854ee06a-a97a-4438-b610-6ef2d640344e-client-ca\") pod \"route-controller-manager-7b8459dfc9-c4vrw\" (UID: \"854ee06a-a97a-4438-b610-6ef2d640344e\") " pod="openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.727094 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35249f9b-2f93-4676-979c-5d0a19af3f98-client-ca\") pod \"controller-manager-5bfb6c7bd4-d6g8s\" (UID: \"35249f9b-2f93-4676-979c-5d0a19af3f98\") " pod="openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.727113 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/35249f9b-2f93-4676-979c-5d0a19af3f98-serving-cert\") pod \"controller-manager-5bfb6c7bd4-d6g8s\" (UID: \"35249f9b-2f93-4676-979c-5d0a19af3f98\") " pod="openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.727189 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh2bb\" (UniqueName: \"kubernetes.io/projected/854ee06a-a97a-4438-b610-6ef2d640344e-kube-api-access-fh2bb\") pod \"route-controller-manager-7b8459dfc9-c4vrw\" (UID: \"854ee06a-a97a-4438-b610-6ef2d640344e\") " pod="openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.727215 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg5b6\" (UniqueName: \"kubernetes.io/projected/35249f9b-2f93-4676-979c-5d0a19af3f98-kube-api-access-gg5b6\") pod \"controller-manager-5bfb6c7bd4-d6g8s\" (UID: \"35249f9b-2f93-4676-979c-5d0a19af3f98\") " pod="openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.727244 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35249f9b-2f93-4676-979c-5d0a19af3f98-config\") pod \"controller-manager-5bfb6c7bd4-d6g8s\" (UID: \"35249f9b-2f93-4676-979c-5d0a19af3f98\") " pod="openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.727275 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/35249f9b-2f93-4676-979c-5d0a19af3f98-proxy-ca-bundles\") pod \"controller-manager-5bfb6c7bd4-d6g8s\" (UID: \"35249f9b-2f93-4676-979c-5d0a19af3f98\") " pod="openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s" Feb 26 22:02:07 crc 
kubenswrapper[4910]: I0226 22:02:07.727329 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/854ee06a-a97a-4438-b610-6ef2d640344e-serving-cert\") pod \"route-controller-manager-7b8459dfc9-c4vrw\" (UID: \"854ee06a-a97a-4438-b610-6ef2d640344e\") " pod="openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.729015 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/854ee06a-a97a-4438-b610-6ef2d640344e-config\") pod \"route-controller-manager-7b8459dfc9-c4vrw\" (UID: \"854ee06a-a97a-4438-b610-6ef2d640344e\") " pod="openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.729429 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/35249f9b-2f93-4676-979c-5d0a19af3f98-proxy-ca-bundles\") pod \"controller-manager-5bfb6c7bd4-d6g8s\" (UID: \"35249f9b-2f93-4676-979c-5d0a19af3f98\") " pod="openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.730105 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/854ee06a-a97a-4438-b610-6ef2d640344e-client-ca\") pod \"route-controller-manager-7b8459dfc9-c4vrw\" (UID: \"854ee06a-a97a-4438-b610-6ef2d640344e\") " pod="openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.730200 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35249f9b-2f93-4676-979c-5d0a19af3f98-client-ca\") pod \"controller-manager-5bfb6c7bd4-d6g8s\" (UID: \"35249f9b-2f93-4676-979c-5d0a19af3f98\") 
" pod="openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.732880 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/854ee06a-a97a-4438-b610-6ef2d640344e-serving-cert\") pod \"route-controller-manager-7b8459dfc9-c4vrw\" (UID: \"854ee06a-a97a-4438-b610-6ef2d640344e\") " pod="openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.733629 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35249f9b-2f93-4676-979c-5d0a19af3f98-serving-cert\") pod \"controller-manager-5bfb6c7bd4-d6g8s\" (UID: \"35249f9b-2f93-4676-979c-5d0a19af3f98\") " pod="openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.733906 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35249f9b-2f93-4676-979c-5d0a19af3f98-config\") pod \"controller-manager-5bfb6c7bd4-d6g8s\" (UID: \"35249f9b-2f93-4676-979c-5d0a19af3f98\") " pod="openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.746401 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg5b6\" (UniqueName: \"kubernetes.io/projected/35249f9b-2f93-4676-979c-5d0a19af3f98-kube-api-access-gg5b6\") pod \"controller-manager-5bfb6c7bd4-d6g8s\" (UID: \"35249f9b-2f93-4676-979c-5d0a19af3f98\") " pod="openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.753988 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh2bb\" (UniqueName: \"kubernetes.io/projected/854ee06a-a97a-4438-b610-6ef2d640344e-kube-api-access-fh2bb\") pod 
\"route-controller-manager-7b8459dfc9-c4vrw\" (UID: \"854ee06a-a97a-4438-b610-6ef2d640344e\") " pod="openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.786310 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.792361 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.836475 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c9975c46f-xwgvz" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.836465 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c9975c46f-xwgvz" event={"ID":"e3eaa870-a2ff-456b-89f5-6c4ee1a130e6","Type":"ContainerDied","Data":"50ec245674f0e95d60b6d78c3c596f95996db56eddc5e818465b8f14d35e53e1"} Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.836603 4910 scope.go:117] "RemoveContainer" containerID="6a320409fdcc6128c973fb43df165f467acccfe05aa51c3652298406a6f621f2" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.839941 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh" event={"ID":"96af1e8d-47e3-4e83-b034-cba3a91cbaec","Type":"ContainerDied","Data":"6196b7d7f42c32a5e3f8e7e1d2342670355201db89dfc9c22f4347e6b555f3a6"} Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.840010 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.850699 4910 scope.go:117] "RemoveContainer" containerID="a03248db5af56753410baf1f1fffc160cd68a7777c03f360b03aeb351eff0a37" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.880083 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c9975c46f-xwgvz"] Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.888556 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5c9975c46f-xwgvz"] Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.895861 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh"] Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.913719 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3eaa870-a2ff-456b-89f5-6c4ee1a130e6" path="/var/lib/kubelet/pods/e3eaa870-a2ff-456b-89f5-6c4ee1a130e6/volumes" Feb 26 22:02:07 crc kubenswrapper[4910]: I0226 22:02:07.914096 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9f4fd4c8-xvjvh"] Feb 26 22:02:08 crc kubenswrapper[4910]: I0226 22:02:08.245081 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s"] Feb 26 22:02:08 crc kubenswrapper[4910]: W0226 22:02:08.250558 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35249f9b_2f93_4676_979c_5d0a19af3f98.slice/crio-f7d7157627fefdf9ce6acb43164e2970a9c5e590f943646a16b59e3be501efe1 WatchSource:0}: Error finding container f7d7157627fefdf9ce6acb43164e2970a9c5e590f943646a16b59e3be501efe1: Status 404 returned error can't find the container with id 
f7d7157627fefdf9ce6acb43164e2970a9c5e590f943646a16b59e3be501efe1 Feb 26 22:02:08 crc kubenswrapper[4910]: I0226 22:02:08.311432 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw"] Feb 26 22:02:08 crc kubenswrapper[4910]: I0226 22:02:08.847732 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw" event={"ID":"854ee06a-a97a-4438-b610-6ef2d640344e","Type":"ContainerStarted","Data":"ae4bccc37613917b9b0362d72c51ea87d94575f1f39d99fe01e49e4152368a7a"} Feb 26 22:02:08 crc kubenswrapper[4910]: I0226 22:02:08.848356 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw" Feb 26 22:02:08 crc kubenswrapper[4910]: I0226 22:02:08.848373 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw" event={"ID":"854ee06a-a97a-4438-b610-6ef2d640344e","Type":"ContainerStarted","Data":"9a1e355f76ef8cec0e9030422fe500f82e7e525b5eff9e17919ff4da3ab77886"} Feb 26 22:02:08 crc kubenswrapper[4910]: I0226 22:02:08.850592 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s" event={"ID":"35249f9b-2f93-4676-979c-5d0a19af3f98","Type":"ContainerStarted","Data":"331a5e02b55c186b18f410e97190b286a87f52b8fcb515f9c38dcb0317da24f7"} Feb 26 22:02:08 crc kubenswrapper[4910]: I0226 22:02:08.850613 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s" event={"ID":"35249f9b-2f93-4676-979c-5d0a19af3f98","Type":"ContainerStarted","Data":"f7d7157627fefdf9ce6acb43164e2970a9c5e590f943646a16b59e3be501efe1"} Feb 26 22:02:08 crc kubenswrapper[4910]: I0226 22:02:08.850890 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s" Feb 26 22:02:08 crc kubenswrapper[4910]: I0226 22:02:08.853020 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw" Feb 26 22:02:08 crc kubenswrapper[4910]: I0226 22:02:08.859326 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s" Feb 26 22:02:08 crc kubenswrapper[4910]: I0226 22:02:08.860379 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw" podStartSLOduration=2.860366196 podStartE2EDuration="2.860366196s" podCreationTimestamp="2026-02-26 22:02:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:02:08.860234442 +0000 UTC m=+413.939724993" watchObservedRunningTime="2026-02-26 22:02:08.860366196 +0000 UTC m=+413.939856737" Feb 26 22:02:08 crc kubenswrapper[4910]: I0226 22:02:08.895690 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s" podStartSLOduration=2.895671144 podStartE2EDuration="2.895671144s" podCreationTimestamp="2026-02-26 22:02:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:02:08.892888598 +0000 UTC m=+413.972379139" watchObservedRunningTime="2026-02-26 22:02:08.895671144 +0000 UTC m=+413.975161685" Feb 26 22:02:09 crc kubenswrapper[4910]: I0226 22:02:09.909516 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96af1e8d-47e3-4e83-b034-cba3a91cbaec" path="/var/lib/kubelet/pods/96af1e8d-47e3-4e83-b034-cba3a91cbaec/volumes" Feb 26 22:02:46 crc kubenswrapper[4910]: 
I0226 22:02:46.382967 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw"] Feb 26 22:02:46 crc kubenswrapper[4910]: I0226 22:02:46.383814 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw" podUID="854ee06a-a97a-4438-b610-6ef2d640344e" containerName="route-controller-manager" containerID="cri-o://ae4bccc37613917b9b0362d72c51ea87d94575f1f39d99fe01e49e4152368a7a" gracePeriod=30 Feb 26 22:02:46 crc kubenswrapper[4910]: I0226 22:02:46.821028 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw" Feb 26 22:02:46 crc kubenswrapper[4910]: I0226 22:02:46.931628 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/854ee06a-a97a-4438-b610-6ef2d640344e-serving-cert\") pod \"854ee06a-a97a-4438-b610-6ef2d640344e\" (UID: \"854ee06a-a97a-4438-b610-6ef2d640344e\") " Feb 26 22:02:46 crc kubenswrapper[4910]: I0226 22:02:46.931702 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/854ee06a-a97a-4438-b610-6ef2d640344e-client-ca\") pod \"854ee06a-a97a-4438-b610-6ef2d640344e\" (UID: \"854ee06a-a97a-4438-b610-6ef2d640344e\") " Feb 26 22:02:46 crc kubenswrapper[4910]: I0226 22:02:46.931740 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fh2bb\" (UniqueName: \"kubernetes.io/projected/854ee06a-a97a-4438-b610-6ef2d640344e-kube-api-access-fh2bb\") pod \"854ee06a-a97a-4438-b610-6ef2d640344e\" (UID: \"854ee06a-a97a-4438-b610-6ef2d640344e\") " Feb 26 22:02:46 crc kubenswrapper[4910]: I0226 22:02:46.931834 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/854ee06a-a97a-4438-b610-6ef2d640344e-config\") pod \"854ee06a-a97a-4438-b610-6ef2d640344e\" (UID: \"854ee06a-a97a-4438-b610-6ef2d640344e\") " Feb 26 22:02:46 crc kubenswrapper[4910]: I0226 22:02:46.933217 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/854ee06a-a97a-4438-b610-6ef2d640344e-config" (OuterVolumeSpecName: "config") pod "854ee06a-a97a-4438-b610-6ef2d640344e" (UID: "854ee06a-a97a-4438-b610-6ef2d640344e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:02:46 crc kubenswrapper[4910]: I0226 22:02:46.933624 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/854ee06a-a97a-4438-b610-6ef2d640344e-client-ca" (OuterVolumeSpecName: "client-ca") pod "854ee06a-a97a-4438-b610-6ef2d640344e" (UID: "854ee06a-a97a-4438-b610-6ef2d640344e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:02:46 crc kubenswrapper[4910]: I0226 22:02:46.938633 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/854ee06a-a97a-4438-b610-6ef2d640344e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "854ee06a-a97a-4438-b610-6ef2d640344e" (UID: "854ee06a-a97a-4438-b610-6ef2d640344e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:02:46 crc kubenswrapper[4910]: I0226 22:02:46.939212 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/854ee06a-a97a-4438-b610-6ef2d640344e-kube-api-access-fh2bb" (OuterVolumeSpecName: "kube-api-access-fh2bb") pod "854ee06a-a97a-4438-b610-6ef2d640344e" (UID: "854ee06a-a97a-4438-b610-6ef2d640344e"). InnerVolumeSpecName "kube-api-access-fh2bb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.033724 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/854ee06a-a97a-4438-b610-6ef2d640344e-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.033765 4910 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/854ee06a-a97a-4438-b610-6ef2d640344e-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.033782 4910 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/854ee06a-a97a-4438-b610-6ef2d640344e-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.033797 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fh2bb\" (UniqueName: \"kubernetes.io/projected/854ee06a-a97a-4438-b610-6ef2d640344e-kube-api-access-fh2bb\") on node \"crc\" DevicePath \"\"" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.195126 4910 generic.go:334] "Generic (PLEG): container finished" podID="854ee06a-a97a-4438-b610-6ef2d640344e" containerID="ae4bccc37613917b9b0362d72c51ea87d94575f1f39d99fe01e49e4152368a7a" exitCode=0 Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.195211 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.195231 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw" event={"ID":"854ee06a-a97a-4438-b610-6ef2d640344e","Type":"ContainerDied","Data":"ae4bccc37613917b9b0362d72c51ea87d94575f1f39d99fe01e49e4152368a7a"} Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.195595 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw" event={"ID":"854ee06a-a97a-4438-b610-6ef2d640344e","Type":"ContainerDied","Data":"9a1e355f76ef8cec0e9030422fe500f82e7e525b5eff9e17919ff4da3ab77886"} Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.195622 4910 scope.go:117] "RemoveContainer" containerID="ae4bccc37613917b9b0362d72c51ea87d94575f1f39d99fe01e49e4152368a7a" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.217761 4910 scope.go:117] "RemoveContainer" containerID="ae4bccc37613917b9b0362d72c51ea87d94575f1f39d99fe01e49e4152368a7a" Feb 26 22:02:47 crc kubenswrapper[4910]: E0226 22:02:47.218446 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae4bccc37613917b9b0362d72c51ea87d94575f1f39d99fe01e49e4152368a7a\": container with ID starting with ae4bccc37613917b9b0362d72c51ea87d94575f1f39d99fe01e49e4152368a7a not found: ID does not exist" containerID="ae4bccc37613917b9b0362d72c51ea87d94575f1f39d99fe01e49e4152368a7a" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.218508 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae4bccc37613917b9b0362d72c51ea87d94575f1f39d99fe01e49e4152368a7a"} err="failed to get container status \"ae4bccc37613917b9b0362d72c51ea87d94575f1f39d99fe01e49e4152368a7a\": rpc error: code = NotFound desc 
= could not find container \"ae4bccc37613917b9b0362d72c51ea87d94575f1f39d99fe01e49e4152368a7a\": container with ID starting with ae4bccc37613917b9b0362d72c51ea87d94575f1f39d99fe01e49e4152368a7a not found: ID does not exist" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.227608 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw"] Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.238240 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b8459dfc9-c4vrw"] Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.495408 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9f4fd4c8-zsjc7"] Feb 26 22:02:47 crc kubenswrapper[4910]: E0226 22:02:47.495736 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="854ee06a-a97a-4438-b610-6ef2d640344e" containerName="route-controller-manager" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.495757 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="854ee06a-a97a-4438-b610-6ef2d640344e" containerName="route-controller-manager" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.495898 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="854ee06a-a97a-4438-b610-6ef2d640344e" containerName="route-controller-manager" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.496440 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-zsjc7" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.498413 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.500115 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.500476 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.501550 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.502360 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.504383 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.524733 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9f4fd4c8-zsjc7"] Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.642130 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lns6s\" (UniqueName: \"kubernetes.io/projected/ecad7672-d2c2-4c41-84cb-d36c4fc0c362-kube-api-access-lns6s\") pod \"route-controller-manager-9f4fd4c8-zsjc7\" (UID: \"ecad7672-d2c2-4c41-84cb-d36c4fc0c362\") " pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-zsjc7" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.642264 4910 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ecad7672-d2c2-4c41-84cb-d36c4fc0c362-client-ca\") pod \"route-controller-manager-9f4fd4c8-zsjc7\" (UID: \"ecad7672-d2c2-4c41-84cb-d36c4fc0c362\") " pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-zsjc7" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.642321 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecad7672-d2c2-4c41-84cb-d36c4fc0c362-serving-cert\") pod \"route-controller-manager-9f4fd4c8-zsjc7\" (UID: \"ecad7672-d2c2-4c41-84cb-d36c4fc0c362\") " pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-zsjc7" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.642346 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecad7672-d2c2-4c41-84cb-d36c4fc0c362-config\") pod \"route-controller-manager-9f4fd4c8-zsjc7\" (UID: \"ecad7672-d2c2-4c41-84cb-d36c4fc0c362\") " pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-zsjc7" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.743132 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ecad7672-d2c2-4c41-84cb-d36c4fc0c362-client-ca\") pod \"route-controller-manager-9f4fd4c8-zsjc7\" (UID: \"ecad7672-d2c2-4c41-84cb-d36c4fc0c362\") " pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-zsjc7" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.743492 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecad7672-d2c2-4c41-84cb-d36c4fc0c362-serving-cert\") pod \"route-controller-manager-9f4fd4c8-zsjc7\" (UID: 
\"ecad7672-d2c2-4c41-84cb-d36c4fc0c362\") " pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-zsjc7" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.743624 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecad7672-d2c2-4c41-84cb-d36c4fc0c362-config\") pod \"route-controller-manager-9f4fd4c8-zsjc7\" (UID: \"ecad7672-d2c2-4c41-84cb-d36c4fc0c362\") " pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-zsjc7" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.743788 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lns6s\" (UniqueName: \"kubernetes.io/projected/ecad7672-d2c2-4c41-84cb-d36c4fc0c362-kube-api-access-lns6s\") pod \"route-controller-manager-9f4fd4c8-zsjc7\" (UID: \"ecad7672-d2c2-4c41-84cb-d36c4fc0c362\") " pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-zsjc7" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.744347 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ecad7672-d2c2-4c41-84cb-d36c4fc0c362-client-ca\") pod \"route-controller-manager-9f4fd4c8-zsjc7\" (UID: \"ecad7672-d2c2-4c41-84cb-d36c4fc0c362\") " pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-zsjc7" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.745053 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecad7672-d2c2-4c41-84cb-d36c4fc0c362-config\") pod \"route-controller-manager-9f4fd4c8-zsjc7\" (UID: \"ecad7672-d2c2-4c41-84cb-d36c4fc0c362\") " pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-zsjc7" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.752862 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ecad7672-d2c2-4c41-84cb-d36c4fc0c362-serving-cert\") pod \"route-controller-manager-9f4fd4c8-zsjc7\" (UID: \"ecad7672-d2c2-4c41-84cb-d36c4fc0c362\") " pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-zsjc7" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.763908 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lns6s\" (UniqueName: \"kubernetes.io/projected/ecad7672-d2c2-4c41-84cb-d36c4fc0c362-kube-api-access-lns6s\") pod \"route-controller-manager-9f4fd4c8-zsjc7\" (UID: \"ecad7672-d2c2-4c41-84cb-d36c4fc0c362\") " pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-zsjc7" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.817756 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-zsjc7" Feb 26 22:02:47 crc kubenswrapper[4910]: I0226 22:02:47.908078 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="854ee06a-a97a-4438-b610-6ef2d640344e" path="/var/lib/kubelet/pods/854ee06a-a97a-4438-b610-6ef2d640344e/volumes" Feb 26 22:02:48 crc kubenswrapper[4910]: I0226 22:02:48.170845 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cnl5j"] Feb 26 22:02:48 crc kubenswrapper[4910]: I0226 22:02:48.172059 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" Feb 26 22:02:48 crc kubenswrapper[4910]: I0226 22:02:48.191308 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cnl5j"] Feb 26 22:02:48 crc kubenswrapper[4910]: I0226 22:02:48.245251 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9f4fd4c8-zsjc7"] Feb 26 22:02:48 crc kubenswrapper[4910]: I0226 22:02:48.363515 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-cnl5j\" (UID: \"15b91285-faeb-413d-bf5e-65db7b04b242\") " pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" Feb 26 22:02:48 crc kubenswrapper[4910]: I0226 22:02:48.363597 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/15b91285-faeb-413d-bf5e-65db7b04b242-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cnl5j\" (UID: \"15b91285-faeb-413d-bf5e-65db7b04b242\") " pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" Feb 26 22:02:48 crc kubenswrapper[4910]: I0226 22:02:48.363636 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/15b91285-faeb-413d-bf5e-65db7b04b242-registry-tls\") pod \"image-registry-66df7c8f76-cnl5j\" (UID: \"15b91285-faeb-413d-bf5e-65db7b04b242\") " pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" Feb 26 22:02:48 crc kubenswrapper[4910]: I0226 22:02:48.363662 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d25rk\" (UniqueName: 
\"kubernetes.io/projected/15b91285-faeb-413d-bf5e-65db7b04b242-kube-api-access-d25rk\") pod \"image-registry-66df7c8f76-cnl5j\" (UID: \"15b91285-faeb-413d-bf5e-65db7b04b242\") " pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" Feb 26 22:02:48 crc kubenswrapper[4910]: I0226 22:02:48.363719 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/15b91285-faeb-413d-bf5e-65db7b04b242-registry-certificates\") pod \"image-registry-66df7c8f76-cnl5j\" (UID: \"15b91285-faeb-413d-bf5e-65db7b04b242\") " pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" Feb 26 22:02:48 crc kubenswrapper[4910]: I0226 22:02:48.363749 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15b91285-faeb-413d-bf5e-65db7b04b242-trusted-ca\") pod \"image-registry-66df7c8f76-cnl5j\" (UID: \"15b91285-faeb-413d-bf5e-65db7b04b242\") " pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" Feb 26 22:02:48 crc kubenswrapper[4910]: I0226 22:02:48.363815 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/15b91285-faeb-413d-bf5e-65db7b04b242-bound-sa-token\") pod \"image-registry-66df7c8f76-cnl5j\" (UID: \"15b91285-faeb-413d-bf5e-65db7b04b242\") " pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" Feb 26 22:02:48 crc kubenswrapper[4910]: I0226 22:02:48.363894 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/15b91285-faeb-413d-bf5e-65db7b04b242-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cnl5j\" (UID: \"15b91285-faeb-413d-bf5e-65db7b04b242\") " pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" Feb 26 22:02:48 
crc kubenswrapper[4910]: I0226 22:02:48.397555 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-cnl5j\" (UID: \"15b91285-faeb-413d-bf5e-65db7b04b242\") " pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" Feb 26 22:02:48 crc kubenswrapper[4910]: I0226 22:02:48.465317 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/15b91285-faeb-413d-bf5e-65db7b04b242-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cnl5j\" (UID: \"15b91285-faeb-413d-bf5e-65db7b04b242\") " pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" Feb 26 22:02:48 crc kubenswrapper[4910]: I0226 22:02:48.465510 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/15b91285-faeb-413d-bf5e-65db7b04b242-registry-tls\") pod \"image-registry-66df7c8f76-cnl5j\" (UID: \"15b91285-faeb-413d-bf5e-65db7b04b242\") " pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" Feb 26 22:02:48 crc kubenswrapper[4910]: I0226 22:02:48.465536 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d25rk\" (UniqueName: \"kubernetes.io/projected/15b91285-faeb-413d-bf5e-65db7b04b242-kube-api-access-d25rk\") pod \"image-registry-66df7c8f76-cnl5j\" (UID: \"15b91285-faeb-413d-bf5e-65db7b04b242\") " pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" Feb 26 22:02:48 crc kubenswrapper[4910]: I0226 22:02:48.465710 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/15b91285-faeb-413d-bf5e-65db7b04b242-registry-certificates\") pod \"image-registry-66df7c8f76-cnl5j\" (UID: 
\"15b91285-faeb-413d-bf5e-65db7b04b242\") " pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" Feb 26 22:02:48 crc kubenswrapper[4910]: I0226 22:02:48.465856 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15b91285-faeb-413d-bf5e-65db7b04b242-trusted-ca\") pod \"image-registry-66df7c8f76-cnl5j\" (UID: \"15b91285-faeb-413d-bf5e-65db7b04b242\") " pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" Feb 26 22:02:48 crc kubenswrapper[4910]: I0226 22:02:48.465878 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/15b91285-faeb-413d-bf5e-65db7b04b242-bound-sa-token\") pod \"image-registry-66df7c8f76-cnl5j\" (UID: \"15b91285-faeb-413d-bf5e-65db7b04b242\") " pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" Feb 26 22:02:48 crc kubenswrapper[4910]: I0226 22:02:48.466038 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/15b91285-faeb-413d-bf5e-65db7b04b242-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cnl5j\" (UID: \"15b91285-faeb-413d-bf5e-65db7b04b242\") " pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" Feb 26 22:02:48 crc kubenswrapper[4910]: I0226 22:02:48.468936 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/15b91285-faeb-413d-bf5e-65db7b04b242-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cnl5j\" (UID: \"15b91285-faeb-413d-bf5e-65db7b04b242\") " pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" Feb 26 22:02:48 crc kubenswrapper[4910]: I0226 22:02:48.470508 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15b91285-faeb-413d-bf5e-65db7b04b242-trusted-ca\") 
pod \"image-registry-66df7c8f76-cnl5j\" (UID: \"15b91285-faeb-413d-bf5e-65db7b04b242\") " pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" Feb 26 22:02:48 crc kubenswrapper[4910]: I0226 22:02:48.472733 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/15b91285-faeb-413d-bf5e-65db7b04b242-registry-certificates\") pod \"image-registry-66df7c8f76-cnl5j\" (UID: \"15b91285-faeb-413d-bf5e-65db7b04b242\") " pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" Feb 26 22:02:48 crc kubenswrapper[4910]: I0226 22:02:48.476205 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/15b91285-faeb-413d-bf5e-65db7b04b242-registry-tls\") pod \"image-registry-66df7c8f76-cnl5j\" (UID: \"15b91285-faeb-413d-bf5e-65db7b04b242\") " pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" Feb 26 22:02:48 crc kubenswrapper[4910]: I0226 22:02:48.480510 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/15b91285-faeb-413d-bf5e-65db7b04b242-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cnl5j\" (UID: \"15b91285-faeb-413d-bf5e-65db7b04b242\") " pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" Feb 26 22:02:48 crc kubenswrapper[4910]: I0226 22:02:48.501372 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d25rk\" (UniqueName: \"kubernetes.io/projected/15b91285-faeb-413d-bf5e-65db7b04b242-kube-api-access-d25rk\") pod \"image-registry-66df7c8f76-cnl5j\" (UID: \"15b91285-faeb-413d-bf5e-65db7b04b242\") " pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" Feb 26 22:02:48 crc kubenswrapper[4910]: I0226 22:02:48.503963 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/15b91285-faeb-413d-bf5e-65db7b04b242-bound-sa-token\") pod \"image-registry-66df7c8f76-cnl5j\" (UID: \"15b91285-faeb-413d-bf5e-65db7b04b242\") " pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" Feb 26 22:02:48 crc kubenswrapper[4910]: I0226 22:02:48.791865 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" Feb 26 22:02:49 crc kubenswrapper[4910]: I0226 22:02:49.211040 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cnl5j"] Feb 26 22:02:49 crc kubenswrapper[4910]: I0226 22:02:49.213886 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-zsjc7" event={"ID":"ecad7672-d2c2-4c41-84cb-d36c4fc0c362","Type":"ContainerStarted","Data":"dc13b8f74285fea982bbcae1a62fa446ef50b7db7f2963076100c96627ee49cb"} Feb 26 22:02:49 crc kubenswrapper[4910]: I0226 22:02:49.213917 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-zsjc7" event={"ID":"ecad7672-d2c2-4c41-84cb-d36c4fc0c362","Type":"ContainerStarted","Data":"2cf3f0dd5dd6a2718074627cfaa613549446f6cbbae463d27db1d9fb324d2feb"} Feb 26 22:02:49 crc kubenswrapper[4910]: I0226 22:02:49.214811 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-zsjc7" Feb 26 22:02:49 crc kubenswrapper[4910]: I0226 22:02:49.218874 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-zsjc7" Feb 26 22:02:49 crc kubenswrapper[4910]: I0226 22:02:49.235102 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-9f4fd4c8-zsjc7" podStartSLOduration=3.235081582 
podStartE2EDuration="3.235081582s" podCreationTimestamp="2026-02-26 22:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:02:49.23356809 +0000 UTC m=+454.313058681" watchObservedRunningTime="2026-02-26 22:02:49.235081582 +0000 UTC m=+454.314572123" Feb 26 22:02:50 crc kubenswrapper[4910]: I0226 22:02:50.223430 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" event={"ID":"15b91285-faeb-413d-bf5e-65db7b04b242","Type":"ContainerStarted","Data":"48463e7a68986f2a5dcb99a4827ff6756f2037f881cb21455dd3c0973de314ae"} Feb 26 22:02:50 crc kubenswrapper[4910]: I0226 22:02:50.224344 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" event={"ID":"15b91285-faeb-413d-bf5e-65db7b04b242","Type":"ContainerStarted","Data":"4afa691f6b159bb82d697dfde11dc570ca0781215e84fcbe0d5c5dfe4a04a023"} Feb 26 22:02:50 crc kubenswrapper[4910]: I0226 22:02:50.255679 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" podStartSLOduration=2.255641041 podStartE2EDuration="2.255641041s" podCreationTimestamp="2026-02-26 22:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:02:50.250028467 +0000 UTC m=+455.329519078" watchObservedRunningTime="2026-02-26 22:02:50.255641041 +0000 UTC m=+455.335131622" Feb 26 22:02:51 crc kubenswrapper[4910]: I0226 22:02:51.229192 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" Feb 26 22:02:55 crc kubenswrapper[4910]: I0226 22:02:55.727818 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:02:55 crc kubenswrapper[4910]: I0226 22:02:55.728375 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:03:03 crc kubenswrapper[4910]: I0226 22:03:03.962024 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 22:03:03 crc kubenswrapper[4910]: I0226 22:03:03.962467 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 22:03:03 crc kubenswrapper[4910]: I0226 22:03:03.963551 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 22:03:03 crc kubenswrapper[4910]: I0226 22:03:03.970537 4910 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 22:03:04 crc kubenswrapper[4910]: I0226 22:03:04.201849 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 22:03:04 crc kubenswrapper[4910]: W0226 22:03:04.668352 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-c657e666f989bf3a97519d03bae1a42990088d963ad718c41869e1120e087616 WatchSource:0}: Error finding container c657e666f989bf3a97519d03bae1a42990088d963ad718c41869e1120e087616: Status 404 returned error can't find the container with id c657e666f989bf3a97519d03bae1a42990088d963ad718c41869e1120e087616 Feb 26 22:03:05 crc kubenswrapper[4910]: I0226 22:03:05.078953 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 22:03:05 crc kubenswrapper[4910]: I0226 22:03:05.079153 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 22:03:05 crc 
kubenswrapper[4910]: I0226 22:03:05.084331 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 22:03:05 crc kubenswrapper[4910]: I0226 22:03:05.084383 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 22:03:05 crc kubenswrapper[4910]: I0226 22:03:05.102568 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 22:03:05 crc kubenswrapper[4910]: I0226 22:03:05.329175 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"71730f05d13ffb2a4b870510ed7b0640b10714b9292dd96c32a1a690051bf9fb"} Feb 26 22:03:05 crc kubenswrapper[4910]: I0226 22:03:05.329638 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c657e666f989bf3a97519d03bae1a42990088d963ad718c41869e1120e087616"} Feb 26 22:03:05 crc kubenswrapper[4910]: I0226 22:03:05.337415 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 22:03:05 crc kubenswrapper[4910]: W0226 22:03:05.561340 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-3a8dcf65c0f998cfc7ca5ee435b27e932875f5ccee52c0e21cbd9ec8522f98f1 WatchSource:0}: Error finding container 3a8dcf65c0f998cfc7ca5ee435b27e932875f5ccee52c0e21cbd9ec8522f98f1: Status 404 returned error can't find the container with id 3a8dcf65c0f998cfc7ca5ee435b27e932875f5ccee52c0e21cbd9ec8522f98f1 Feb 26 22:03:05 crc kubenswrapper[4910]: W0226 22:03:05.590477 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-73142a94c9aa1551295f72f9b4d29912b72eca6dd9bb33011ac4cef30fc2ccfe WatchSource:0}: Error finding container 73142a94c9aa1551295f72f9b4d29912b72eca6dd9bb33011ac4cef30fc2ccfe: Status 404 returned error can't find the container with id 73142a94c9aa1551295f72f9b4d29912b72eca6dd9bb33011ac4cef30fc2ccfe Feb 26 22:03:06 crc kubenswrapper[4910]: I0226 22:03:06.332375 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s"] Feb 26 22:03:06 crc kubenswrapper[4910]: I0226 22:03:06.333010 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s" podUID="35249f9b-2f93-4676-979c-5d0a19af3f98" containerName="controller-manager" containerID="cri-o://331a5e02b55c186b18f410e97190b286a87f52b8fcb515f9c38dcb0317da24f7" gracePeriod=30 Feb 26 22:03:06 crc kubenswrapper[4910]: I0226 22:03:06.337511 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b4381d638ea3f908830aa8ec9db1fc982c9802a4a85598b3dc6d01f1bc1f3447"} Feb 26 22:03:06 crc kubenswrapper[4910]: I0226 22:03:06.337601 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"73142a94c9aa1551295f72f9b4d29912b72eca6dd9bb33011ac4cef30fc2ccfe"} Feb 26 22:03:06 crc kubenswrapper[4910]: I0226 22:03:06.337805 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 22:03:06 crc kubenswrapper[4910]: I0226 22:03:06.343040 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"bf92fa433d07db28ce95f1fc3e24154ab3bc0aa3c94cfe552c97b28e89e87bb8"} Feb 26 22:03:06 crc kubenswrapper[4910]: I0226 22:03:06.343094 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"3a8dcf65c0f998cfc7ca5ee435b27e932875f5ccee52c0e21cbd9ec8522f98f1"} Feb 26 22:03:06 crc kubenswrapper[4910]: I0226 22:03:06.769921 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s" Feb 26 22:03:06 crc kubenswrapper[4910]: I0226 22:03:06.900042 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/35249f9b-2f93-4676-979c-5d0a19af3f98-proxy-ca-bundles\") pod \"35249f9b-2f93-4676-979c-5d0a19af3f98\" (UID: \"35249f9b-2f93-4676-979c-5d0a19af3f98\") " Feb 26 22:03:06 crc kubenswrapper[4910]: I0226 22:03:06.900121 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35249f9b-2f93-4676-979c-5d0a19af3f98-client-ca\") pod \"35249f9b-2f93-4676-979c-5d0a19af3f98\" (UID: \"35249f9b-2f93-4676-979c-5d0a19af3f98\") " Feb 26 22:03:06 crc kubenswrapper[4910]: I0226 22:03:06.900155 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35249f9b-2f93-4676-979c-5d0a19af3f98-config\") pod \"35249f9b-2f93-4676-979c-5d0a19af3f98\" (UID: \"35249f9b-2f93-4676-979c-5d0a19af3f98\") " Feb 26 22:03:06 crc kubenswrapper[4910]: I0226 22:03:06.900244 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg5b6\" (UniqueName: \"kubernetes.io/projected/35249f9b-2f93-4676-979c-5d0a19af3f98-kube-api-access-gg5b6\") pod \"35249f9b-2f93-4676-979c-5d0a19af3f98\" (UID: \"35249f9b-2f93-4676-979c-5d0a19af3f98\") " Feb 26 22:03:06 crc kubenswrapper[4910]: I0226 22:03:06.900298 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35249f9b-2f93-4676-979c-5d0a19af3f98-serving-cert\") pod \"35249f9b-2f93-4676-979c-5d0a19af3f98\" (UID: \"35249f9b-2f93-4676-979c-5d0a19af3f98\") " Feb 26 22:03:06 crc kubenswrapper[4910]: I0226 22:03:06.901129 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/35249f9b-2f93-4676-979c-5d0a19af3f98-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "35249f9b-2f93-4676-979c-5d0a19af3f98" (UID: "35249f9b-2f93-4676-979c-5d0a19af3f98"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:03:06 crc kubenswrapper[4910]: I0226 22:03:06.901297 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35249f9b-2f93-4676-979c-5d0a19af3f98-client-ca" (OuterVolumeSpecName: "client-ca") pod "35249f9b-2f93-4676-979c-5d0a19af3f98" (UID: "35249f9b-2f93-4676-979c-5d0a19af3f98"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:03:06 crc kubenswrapper[4910]: I0226 22:03:06.901966 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35249f9b-2f93-4676-979c-5d0a19af3f98-config" (OuterVolumeSpecName: "config") pod "35249f9b-2f93-4676-979c-5d0a19af3f98" (UID: "35249f9b-2f93-4676-979c-5d0a19af3f98"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:03:06 crc kubenswrapper[4910]: I0226 22:03:06.905680 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35249f9b-2f93-4676-979c-5d0a19af3f98-kube-api-access-gg5b6" (OuterVolumeSpecName: "kube-api-access-gg5b6") pod "35249f9b-2f93-4676-979c-5d0a19af3f98" (UID: "35249f9b-2f93-4676-979c-5d0a19af3f98"). InnerVolumeSpecName "kube-api-access-gg5b6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:03:06 crc kubenswrapper[4910]: I0226 22:03:06.906446 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35249f9b-2f93-4676-979c-5d0a19af3f98-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "35249f9b-2f93-4676-979c-5d0a19af3f98" (UID: "35249f9b-2f93-4676-979c-5d0a19af3f98"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.001724 4910 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/35249f9b-2f93-4676-979c-5d0a19af3f98-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.001815 4910 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35249f9b-2f93-4676-979c-5d0a19af3f98-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.001836 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35249f9b-2f93-4676-979c-5d0a19af3f98-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.002027 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg5b6\" (UniqueName: \"kubernetes.io/projected/35249f9b-2f93-4676-979c-5d0a19af3f98-kube-api-access-gg5b6\") on node \"crc\" DevicePath \"\"" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.002052 4910 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35249f9b-2f93-4676-979c-5d0a19af3f98-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.353127 4910 generic.go:334] "Generic (PLEG): container finished" podID="35249f9b-2f93-4676-979c-5d0a19af3f98" containerID="331a5e02b55c186b18f410e97190b286a87f52b8fcb515f9c38dcb0317da24f7" exitCode=0 Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.353238 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s" event={"ID":"35249f9b-2f93-4676-979c-5d0a19af3f98","Type":"ContainerDied","Data":"331a5e02b55c186b18f410e97190b286a87f52b8fcb515f9c38dcb0317da24f7"} Feb 26 22:03:07 crc 
kubenswrapper[4910]: I0226 22:03:07.353306 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s" event={"ID":"35249f9b-2f93-4676-979c-5d0a19af3f98","Type":"ContainerDied","Data":"f7d7157627fefdf9ce6acb43164e2970a9c5e590f943646a16b59e3be501efe1"} Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.353350 4910 scope.go:117] "RemoveContainer" containerID="331a5e02b55c186b18f410e97190b286a87f52b8fcb515f9c38dcb0317da24f7" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.353402 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.374436 4910 scope.go:117] "RemoveContainer" containerID="331a5e02b55c186b18f410e97190b286a87f52b8fcb515f9c38dcb0317da24f7" Feb 26 22:03:07 crc kubenswrapper[4910]: E0226 22:03:07.374998 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"331a5e02b55c186b18f410e97190b286a87f52b8fcb515f9c38dcb0317da24f7\": container with ID starting with 331a5e02b55c186b18f410e97190b286a87f52b8fcb515f9c38dcb0317da24f7 not found: ID does not exist" containerID="331a5e02b55c186b18f410e97190b286a87f52b8fcb515f9c38dcb0317da24f7" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.375036 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"331a5e02b55c186b18f410e97190b286a87f52b8fcb515f9c38dcb0317da24f7"} err="failed to get container status \"331a5e02b55c186b18f410e97190b286a87f52b8fcb515f9c38dcb0317da24f7\": rpc error: code = NotFound desc = could not find container \"331a5e02b55c186b18f410e97190b286a87f52b8fcb515f9c38dcb0317da24f7\": container with ID starting with 331a5e02b55c186b18f410e97190b286a87f52b8fcb515f9c38dcb0317da24f7 not found: ID does not exist" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 
22:03:07.396126 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s"] Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.398727 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5bfb6c7bd4-d6g8s"] Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.511772 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c9975c46f-t9qvt"] Feb 26 22:03:07 crc kubenswrapper[4910]: E0226 22:03:07.512040 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35249f9b-2f93-4676-979c-5d0a19af3f98" containerName="controller-manager" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.512055 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="35249f9b-2f93-4676-979c-5d0a19af3f98" containerName="controller-manager" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.512194 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="35249f9b-2f93-4676-979c-5d0a19af3f98" containerName="controller-manager" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.512692 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c9975c46f-t9qvt" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.514980 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.515702 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.515969 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.516097 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.516949 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.517713 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.525528 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.526180 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c9975c46f-t9qvt"] Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.609951 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a5a5d50-d749-48ca-a689-2798415a1d1d-serving-cert\") pod \"controller-manager-5c9975c46f-t9qvt\" (UID: \"2a5a5d50-d749-48ca-a689-2798415a1d1d\") " 
pod="openshift-controller-manager/controller-manager-5c9975c46f-t9qvt" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.610013 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrqzx\" (UniqueName: \"kubernetes.io/projected/2a5a5d50-d749-48ca-a689-2798415a1d1d-kube-api-access-nrqzx\") pod \"controller-manager-5c9975c46f-t9qvt\" (UID: \"2a5a5d50-d749-48ca-a689-2798415a1d1d\") " pod="openshift-controller-manager/controller-manager-5c9975c46f-t9qvt" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.610037 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a5a5d50-d749-48ca-a689-2798415a1d1d-config\") pod \"controller-manager-5c9975c46f-t9qvt\" (UID: \"2a5a5d50-d749-48ca-a689-2798415a1d1d\") " pod="openshift-controller-manager/controller-manager-5c9975c46f-t9qvt" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.610122 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a5a5d50-d749-48ca-a689-2798415a1d1d-client-ca\") pod \"controller-manager-5c9975c46f-t9qvt\" (UID: \"2a5a5d50-d749-48ca-a689-2798415a1d1d\") " pod="openshift-controller-manager/controller-manager-5c9975c46f-t9qvt" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.610257 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a5a5d50-d749-48ca-a689-2798415a1d1d-proxy-ca-bundles\") pod \"controller-manager-5c9975c46f-t9qvt\" (UID: \"2a5a5d50-d749-48ca-a689-2798415a1d1d\") " pod="openshift-controller-manager/controller-manager-5c9975c46f-t9qvt" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.712267 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrqzx\" 
(UniqueName: \"kubernetes.io/projected/2a5a5d50-d749-48ca-a689-2798415a1d1d-kube-api-access-nrqzx\") pod \"controller-manager-5c9975c46f-t9qvt\" (UID: \"2a5a5d50-d749-48ca-a689-2798415a1d1d\") " pod="openshift-controller-manager/controller-manager-5c9975c46f-t9qvt" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.712533 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a5a5d50-d749-48ca-a689-2798415a1d1d-config\") pod \"controller-manager-5c9975c46f-t9qvt\" (UID: \"2a5a5d50-d749-48ca-a689-2798415a1d1d\") " pod="openshift-controller-manager/controller-manager-5c9975c46f-t9qvt" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.712574 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a5a5d50-d749-48ca-a689-2798415a1d1d-client-ca\") pod \"controller-manager-5c9975c46f-t9qvt\" (UID: \"2a5a5d50-d749-48ca-a689-2798415a1d1d\") " pod="openshift-controller-manager/controller-manager-5c9975c46f-t9qvt" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.712598 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a5a5d50-d749-48ca-a689-2798415a1d1d-proxy-ca-bundles\") pod \"controller-manager-5c9975c46f-t9qvt\" (UID: \"2a5a5d50-d749-48ca-a689-2798415a1d1d\") " pod="openshift-controller-manager/controller-manager-5c9975c46f-t9qvt" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.712702 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a5a5d50-d749-48ca-a689-2798415a1d1d-serving-cert\") pod \"controller-manager-5c9975c46f-t9qvt\" (UID: \"2a5a5d50-d749-48ca-a689-2798415a1d1d\") " pod="openshift-controller-manager/controller-manager-5c9975c46f-t9qvt" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.713716 4910 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a5a5d50-d749-48ca-a689-2798415a1d1d-client-ca\") pod \"controller-manager-5c9975c46f-t9qvt\" (UID: \"2a5a5d50-d749-48ca-a689-2798415a1d1d\") " pod="openshift-controller-manager/controller-manager-5c9975c46f-t9qvt" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.713861 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a5a5d50-d749-48ca-a689-2798415a1d1d-proxy-ca-bundles\") pod \"controller-manager-5c9975c46f-t9qvt\" (UID: \"2a5a5d50-d749-48ca-a689-2798415a1d1d\") " pod="openshift-controller-manager/controller-manager-5c9975c46f-t9qvt" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.715580 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a5a5d50-d749-48ca-a689-2798415a1d1d-config\") pod \"controller-manager-5c9975c46f-t9qvt\" (UID: \"2a5a5d50-d749-48ca-a689-2798415a1d1d\") " pod="openshift-controller-manager/controller-manager-5c9975c46f-t9qvt" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.719416 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a5a5d50-d749-48ca-a689-2798415a1d1d-serving-cert\") pod \"controller-manager-5c9975c46f-t9qvt\" (UID: \"2a5a5d50-d749-48ca-a689-2798415a1d1d\") " pod="openshift-controller-manager/controller-manager-5c9975c46f-t9qvt" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.727166 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrqzx\" (UniqueName: \"kubernetes.io/projected/2a5a5d50-d749-48ca-a689-2798415a1d1d-kube-api-access-nrqzx\") pod \"controller-manager-5c9975c46f-t9qvt\" (UID: \"2a5a5d50-d749-48ca-a689-2798415a1d1d\") " pod="openshift-controller-manager/controller-manager-5c9975c46f-t9qvt" Feb 26 
22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.831196 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c9975c46f-t9qvt" Feb 26 22:03:07 crc kubenswrapper[4910]: I0226 22:03:07.915945 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35249f9b-2f93-4676-979c-5d0a19af3f98" path="/var/lib/kubelet/pods/35249f9b-2f93-4676-979c-5d0a19af3f98/volumes" Feb 26 22:03:08 crc kubenswrapper[4910]: W0226 22:03:08.265713 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a5a5d50_d749_48ca_a689_2798415a1d1d.slice/crio-52d4679a88d5ac01cc4d62034e45e6946f484013e6e27fd1b949c5dec2d19318 WatchSource:0}: Error finding container 52d4679a88d5ac01cc4d62034e45e6946f484013e6e27fd1b949c5dec2d19318: Status 404 returned error can't find the container with id 52d4679a88d5ac01cc4d62034e45e6946f484013e6e27fd1b949c5dec2d19318 Feb 26 22:03:08 crc kubenswrapper[4910]: I0226 22:03:08.265820 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c9975c46f-t9qvt"] Feb 26 22:03:08 crc kubenswrapper[4910]: I0226 22:03:08.365009 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c9975c46f-t9qvt" event={"ID":"2a5a5d50-d749-48ca-a689-2798415a1d1d","Type":"ContainerStarted","Data":"52d4679a88d5ac01cc4d62034e45e6946f484013e6e27fd1b949c5dec2d19318"} Feb 26 22:03:08 crc kubenswrapper[4910]: I0226 22:03:08.797283 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-cnl5j" Feb 26 22:03:08 crc kubenswrapper[4910]: I0226 22:03:08.897465 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-298fw"] Feb 26 22:03:09 crc kubenswrapper[4910]: I0226 22:03:09.375220 4910 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c9975c46f-t9qvt" event={"ID":"2a5a5d50-d749-48ca-a689-2798415a1d1d","Type":"ContainerStarted","Data":"38b622620fc674c1bc7743c27683894a988e04b601fa45d902305bb8238488c4"} Feb 26 22:03:09 crc kubenswrapper[4910]: I0226 22:03:09.375761 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5c9975c46f-t9qvt" Feb 26 22:03:09 crc kubenswrapper[4910]: I0226 22:03:09.383440 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c9975c46f-t9qvt" Feb 26 22:03:09 crc kubenswrapper[4910]: I0226 22:03:09.398707 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c9975c46f-t9qvt" podStartSLOduration=3.39868655 podStartE2EDuration="3.39868655s" podCreationTimestamp="2026-02-26 22:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:03:09.39649958 +0000 UTC m=+474.475990151" watchObservedRunningTime="2026-02-26 22:03:09.39868655 +0000 UTC m=+474.478177091" Feb 26 22:03:25 crc kubenswrapper[4910]: I0226 22:03:25.728283 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:03:25 crc kubenswrapper[4910]: I0226 22:03:25.728753 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 
26 22:03:28 crc kubenswrapper[4910]: I0226 22:03:28.720386 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6rxsg"] Feb 26 22:03:28 crc kubenswrapper[4910]: I0226 22:03:28.724601 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6rxsg" podUID="4333e88f-8502-46f4-9639-7af62ff1e63c" containerName="registry-server" containerID="cri-o://a903d517f9328235923b347d9ddc4e8266196e6d92274d4b352529dfd119fc6c" gracePeriod=30 Feb 26 22:03:28 crc kubenswrapper[4910]: I0226 22:03:28.727108 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ggtxj"] Feb 26 22:03:28 crc kubenswrapper[4910]: I0226 22:03:28.729429 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ggtxj" podUID="a8d202d1-b4f6-4bc1-b633-56ba90788979" containerName="registry-server" containerID="cri-o://12b28eb37b53ae2e54c57cefcdf1f16d3f628d80f23022494faea48f00050572" gracePeriod=30 Feb 26 22:03:28 crc kubenswrapper[4910]: I0226 22:03:28.745011 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q2jtw"] Feb 26 22:03:28 crc kubenswrapper[4910]: I0226 22:03:28.745422 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-q2jtw" podUID="dbd9e8a9-2637-4ef5-b24e-fd2d08788451" containerName="marketplace-operator" containerID="cri-o://05ed42c662abf017f810acece6c18dd30dd7d4df6d1f899ae804a7be774b8a01" gracePeriod=30 Feb 26 22:03:28 crc kubenswrapper[4910]: I0226 22:03:28.758677 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dp8sv"] Feb 26 22:03:28 crc kubenswrapper[4910]: I0226 22:03:28.758962 4910 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-dp8sv" podUID="d0c0a0be-62f6-4642-aeab-2f08a5cffedb" containerName="registry-server" containerID="cri-o://c0753f0ea9fdf31eeaa228511bd7cb92b4226e6a3f4379ac9a95c8aeff642790" gracePeriod=30 Feb 26 22:03:28 crc kubenswrapper[4910]: I0226 22:03:28.772953 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-djrbn"] Feb 26 22:03:28 crc kubenswrapper[4910]: I0226 22:03:28.773721 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-djrbn" Feb 26 22:03:28 crc kubenswrapper[4910]: I0226 22:03:28.778511 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nwths"] Feb 26 22:03:28 crc kubenswrapper[4910]: I0226 22:03:28.780133 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nwths" podUID="91f7141d-853e-4d6f-9b04-ad16b61d0dc7" containerName="registry-server" containerID="cri-o://600edb3a566f1b7c7073928f2eec5801475be41cdc76abe89c11bde4e8387bfe" gracePeriod=30 Feb 26 22:03:28 crc kubenswrapper[4910]: I0226 22:03:28.801331 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-djrbn"] Feb 26 22:03:28 crc kubenswrapper[4910]: I0226 22:03:28.929339 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c3e33226-da7e-4023-b932-36308bb5ccad-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-djrbn\" (UID: \"c3e33226-da7e-4023-b932-36308bb5ccad\") " pod="openshift-marketplace/marketplace-operator-79b997595-djrbn" Feb 26 22:03:28 crc kubenswrapper[4910]: I0226 22:03:28.929396 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z62r\" 
(UniqueName: \"kubernetes.io/projected/c3e33226-da7e-4023-b932-36308bb5ccad-kube-api-access-8z62r\") pod \"marketplace-operator-79b997595-djrbn\" (UID: \"c3e33226-da7e-4023-b932-36308bb5ccad\") " pod="openshift-marketplace/marketplace-operator-79b997595-djrbn" Feb 26 22:03:28 crc kubenswrapper[4910]: I0226 22:03:28.929478 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3e33226-da7e-4023-b932-36308bb5ccad-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-djrbn\" (UID: \"c3e33226-da7e-4023-b932-36308bb5ccad\") " pod="openshift-marketplace/marketplace-operator-79b997595-djrbn" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.030710 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3e33226-da7e-4023-b932-36308bb5ccad-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-djrbn\" (UID: \"c3e33226-da7e-4023-b932-36308bb5ccad\") " pod="openshift-marketplace/marketplace-operator-79b997595-djrbn" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.030801 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c3e33226-da7e-4023-b932-36308bb5ccad-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-djrbn\" (UID: \"c3e33226-da7e-4023-b932-36308bb5ccad\") " pod="openshift-marketplace/marketplace-operator-79b997595-djrbn" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.030825 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z62r\" (UniqueName: \"kubernetes.io/projected/c3e33226-da7e-4023-b932-36308bb5ccad-kube-api-access-8z62r\") pod \"marketplace-operator-79b997595-djrbn\" (UID: \"c3e33226-da7e-4023-b932-36308bb5ccad\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-djrbn" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.033659 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3e33226-da7e-4023-b932-36308bb5ccad-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-djrbn\" (UID: \"c3e33226-da7e-4023-b932-36308bb5ccad\") " pod="openshift-marketplace/marketplace-operator-79b997595-djrbn" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.037796 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c3e33226-da7e-4023-b932-36308bb5ccad-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-djrbn\" (UID: \"c3e33226-da7e-4023-b932-36308bb5ccad\") " pod="openshift-marketplace/marketplace-operator-79b997595-djrbn" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.061758 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z62r\" (UniqueName: \"kubernetes.io/projected/c3e33226-da7e-4023-b932-36308bb5ccad-kube-api-access-8z62r\") pod \"marketplace-operator-79b997595-djrbn\" (UID: \"c3e33226-da7e-4023-b932-36308bb5ccad\") " pod="openshift-marketplace/marketplace-operator-79b997595-djrbn" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.215948 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-djrbn" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.221135 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6rxsg" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.334531 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlvvj\" (UniqueName: \"kubernetes.io/projected/4333e88f-8502-46f4-9639-7af62ff1e63c-kube-api-access-tlvvj\") pod \"4333e88f-8502-46f4-9639-7af62ff1e63c\" (UID: \"4333e88f-8502-46f4-9639-7af62ff1e63c\") " Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.334653 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4333e88f-8502-46f4-9639-7af62ff1e63c-catalog-content\") pod \"4333e88f-8502-46f4-9639-7af62ff1e63c\" (UID: \"4333e88f-8502-46f4-9639-7af62ff1e63c\") " Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.334708 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4333e88f-8502-46f4-9639-7af62ff1e63c-utilities\") pod \"4333e88f-8502-46f4-9639-7af62ff1e63c\" (UID: \"4333e88f-8502-46f4-9639-7af62ff1e63c\") " Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.337488 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4333e88f-8502-46f4-9639-7af62ff1e63c-utilities" (OuterVolumeSpecName: "utilities") pod "4333e88f-8502-46f4-9639-7af62ff1e63c" (UID: "4333e88f-8502-46f4-9639-7af62ff1e63c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.338891 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4333e88f-8502-46f4-9639-7af62ff1e63c-kube-api-access-tlvvj" (OuterVolumeSpecName: "kube-api-access-tlvvj") pod "4333e88f-8502-46f4-9639-7af62ff1e63c" (UID: "4333e88f-8502-46f4-9639-7af62ff1e63c"). InnerVolumeSpecName "kube-api-access-tlvvj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:03:29 crc kubenswrapper[4910]: E0226 22:03:29.344738 4910 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c0753f0ea9fdf31eeaa228511bd7cb92b4226e6a3f4379ac9a95c8aeff642790 is running failed: container process not found" containerID="c0753f0ea9fdf31eeaa228511bd7cb92b4226e6a3f4379ac9a95c8aeff642790" cmd=["grpc_health_probe","-addr=:50051"] Feb 26 22:03:29 crc kubenswrapper[4910]: E0226 22:03:29.351505 4910 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c0753f0ea9fdf31eeaa228511bd7cb92b4226e6a3f4379ac9a95c8aeff642790 is running failed: container process not found" containerID="c0753f0ea9fdf31eeaa228511bd7cb92b4226e6a3f4379ac9a95c8aeff642790" cmd=["grpc_health_probe","-addr=:50051"] Feb 26 22:03:29 crc kubenswrapper[4910]: E0226 22:03:29.351847 4910 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c0753f0ea9fdf31eeaa228511bd7cb92b4226e6a3f4379ac9a95c8aeff642790 is running failed: container process not found" containerID="c0753f0ea9fdf31eeaa228511bd7cb92b4226e6a3f4379ac9a95c8aeff642790" cmd=["grpc_health_probe","-addr=:50051"] Feb 26 22:03:29 crc kubenswrapper[4910]: E0226 22:03:29.351890 4910 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c0753f0ea9fdf31eeaa228511bd7cb92b4226e6a3f4379ac9a95c8aeff642790 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-dp8sv" podUID="d0c0a0be-62f6-4642-aeab-2f08a5cffedb" containerName="registry-server" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.414514 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/4333e88f-8502-46f4-9639-7af62ff1e63c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4333e88f-8502-46f4-9639-7af62ff1e63c" (UID: "4333e88f-8502-46f4-9639-7af62ff1e63c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.435510 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlvvj\" (UniqueName: \"kubernetes.io/projected/4333e88f-8502-46f4-9639-7af62ff1e63c-kube-api-access-tlvvj\") on node \"crc\" DevicePath \"\"" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.435533 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4333e88f-8502-46f4-9639-7af62ff1e63c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.435542 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4333e88f-8502-46f4-9639-7af62ff1e63c-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.536650 4910 generic.go:334] "Generic (PLEG): container finished" podID="91f7141d-853e-4d6f-9b04-ad16b61d0dc7" containerID="600edb3a566f1b7c7073928f2eec5801475be41cdc76abe89c11bde4e8387bfe" exitCode=0 Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.536744 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nwths" event={"ID":"91f7141d-853e-4d6f-9b04-ad16b61d0dc7","Type":"ContainerDied","Data":"600edb3a566f1b7c7073928f2eec5801475be41cdc76abe89c11bde4e8387bfe"} Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.541998 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ggtxj" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.543307 4910 generic.go:334] "Generic (PLEG): container finished" podID="dbd9e8a9-2637-4ef5-b24e-fd2d08788451" containerID="05ed42c662abf017f810acece6c18dd30dd7d4df6d1f899ae804a7be774b8a01" exitCode=0 Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.543357 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-q2jtw" event={"ID":"dbd9e8a9-2637-4ef5-b24e-fd2d08788451","Type":"ContainerDied","Data":"05ed42c662abf017f810acece6c18dd30dd7d4df6d1f899ae804a7be774b8a01"} Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.543384 4910 scope.go:117] "RemoveContainer" containerID="fdb8b0a63263575a5c0facbd5e8ecd233b1e0fc8e32423534e4cbcc0f407e36e" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.543517 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nwths" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.547746 4910 generic.go:334] "Generic (PLEG): container finished" podID="d0c0a0be-62f6-4642-aeab-2f08a5cffedb" containerID="c0753f0ea9fdf31eeaa228511bd7cb92b4226e6a3f4379ac9a95c8aeff642790" exitCode=0 Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.547799 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dp8sv" event={"ID":"d0c0a0be-62f6-4642-aeab-2f08a5cffedb","Type":"ContainerDied","Data":"c0753f0ea9fdf31eeaa228511bd7cb92b4226e6a3f4379ac9a95c8aeff642790"} Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.551559 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dp8sv" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.551915 4910 generic.go:334] "Generic (PLEG): container finished" podID="a8d202d1-b4f6-4bc1-b633-56ba90788979" containerID="12b28eb37b53ae2e54c57cefcdf1f16d3f628d80f23022494faea48f00050572" exitCode=0 Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.551959 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ggtxj" event={"ID":"a8d202d1-b4f6-4bc1-b633-56ba90788979","Type":"ContainerDied","Data":"12b28eb37b53ae2e54c57cefcdf1f16d3f628d80f23022494faea48f00050572"} Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.551975 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ggtxj" event={"ID":"a8d202d1-b4f6-4bc1-b633-56ba90788979","Type":"ContainerDied","Data":"df4d75697105f38d5778cc96e33ec269c0e1101d239d45c1a3b012e1dd0d71ab"} Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.552018 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ggtxj" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.554134 4910 generic.go:334] "Generic (PLEG): container finished" podID="4333e88f-8502-46f4-9639-7af62ff1e63c" containerID="a903d517f9328235923b347d9ddc4e8266196e6d92274d4b352529dfd119fc6c" exitCode=0 Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.554193 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rxsg" event={"ID":"4333e88f-8502-46f4-9639-7af62ff1e63c","Type":"ContainerDied","Data":"a903d517f9328235923b347d9ddc4e8266196e6d92274d4b352529dfd119fc6c"} Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.554222 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6rxsg" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.554263 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rxsg" event={"ID":"4333e88f-8502-46f4-9639-7af62ff1e63c","Type":"ContainerDied","Data":"c5f2d869fc8976ebbe3a0b3ede784c971dc046527d9108e7b49129e7c237f407"} Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.564887 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-q2jtw" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.585869 4910 scope.go:117] "RemoveContainer" containerID="12b28eb37b53ae2e54c57cefcdf1f16d3f628d80f23022494faea48f00050572" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.620523 4910 scope.go:117] "RemoveContainer" containerID="240e03c90f0cbfaf99430276241d763a070f8a2b137b07e386c651e03324d526" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.631452 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6rxsg"] Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.634151 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6rxsg"] Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.638446 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlcjx\" (UniqueName: \"kubernetes.io/projected/d0c0a0be-62f6-4642-aeab-2f08a5cffedb-kube-api-access-tlcjx\") pod \"d0c0a0be-62f6-4642-aeab-2f08a5cffedb\" (UID: \"d0c0a0be-62f6-4642-aeab-2f08a5cffedb\") " Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.638489 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8d202d1-b4f6-4bc1-b633-56ba90788979-catalog-content\") pod \"a8d202d1-b4f6-4bc1-b633-56ba90788979\" (UID: 
\"a8d202d1-b4f6-4bc1-b633-56ba90788979\") " Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.638541 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91f7141d-853e-4d6f-9b04-ad16b61d0dc7-catalog-content\") pod \"91f7141d-853e-4d6f-9b04-ad16b61d0dc7\" (UID: \"91f7141d-853e-4d6f-9b04-ad16b61d0dc7\") " Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.638579 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvqdw\" (UniqueName: \"kubernetes.io/projected/a8d202d1-b4f6-4bc1-b633-56ba90788979-kube-api-access-bvqdw\") pod \"a8d202d1-b4f6-4bc1-b633-56ba90788979\" (UID: \"a8d202d1-b4f6-4bc1-b633-56ba90788979\") " Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.638615 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8d202d1-b4f6-4bc1-b633-56ba90788979-utilities\") pod \"a8d202d1-b4f6-4bc1-b633-56ba90788979\" (UID: \"a8d202d1-b4f6-4bc1-b633-56ba90788979\") " Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.638629 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91f7141d-853e-4d6f-9b04-ad16b61d0dc7-utilities\") pod \"91f7141d-853e-4d6f-9b04-ad16b61d0dc7\" (UID: \"91f7141d-853e-4d6f-9b04-ad16b61d0dc7\") " Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.638651 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssdb8\" (UniqueName: \"kubernetes.io/projected/dbd9e8a9-2637-4ef5-b24e-fd2d08788451-kube-api-access-ssdb8\") pod \"dbd9e8a9-2637-4ef5-b24e-fd2d08788451\" (UID: \"dbd9e8a9-2637-4ef5-b24e-fd2d08788451\") " Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.638671 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/d0c0a0be-62f6-4642-aeab-2f08a5cffedb-catalog-content\") pod \"d0c0a0be-62f6-4642-aeab-2f08a5cffedb\" (UID: \"d0c0a0be-62f6-4642-aeab-2f08a5cffedb\") " Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.638702 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dbd9e8a9-2637-4ef5-b24e-fd2d08788451-marketplace-operator-metrics\") pod \"dbd9e8a9-2637-4ef5-b24e-fd2d08788451\" (UID: \"dbd9e8a9-2637-4ef5-b24e-fd2d08788451\") " Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.638720 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0c0a0be-62f6-4642-aeab-2f08a5cffedb-utilities\") pod \"d0c0a0be-62f6-4642-aeab-2f08a5cffedb\" (UID: \"d0c0a0be-62f6-4642-aeab-2f08a5cffedb\") " Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.638751 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbd9e8a9-2637-4ef5-b24e-fd2d08788451-marketplace-trusted-ca\") pod \"dbd9e8a9-2637-4ef5-b24e-fd2d08788451\" (UID: \"dbd9e8a9-2637-4ef5-b24e-fd2d08788451\") " Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.638791 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b5fg\" (UniqueName: \"kubernetes.io/projected/91f7141d-853e-4d6f-9b04-ad16b61d0dc7-kube-api-access-8b5fg\") pod \"91f7141d-853e-4d6f-9b04-ad16b61d0dc7\" (UID: \"91f7141d-853e-4d6f-9b04-ad16b61d0dc7\") " Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.639849 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91f7141d-853e-4d6f-9b04-ad16b61d0dc7-utilities" (OuterVolumeSpecName: "utilities") pod "91f7141d-853e-4d6f-9b04-ad16b61d0dc7" (UID: "91f7141d-853e-4d6f-9b04-ad16b61d0dc7"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.641490 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbd9e8a9-2637-4ef5-b24e-fd2d08788451-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "dbd9e8a9-2637-4ef5-b24e-fd2d08788451" (UID: "dbd9e8a9-2637-4ef5-b24e-fd2d08788451"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.641862 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0c0a0be-62f6-4642-aeab-2f08a5cffedb-utilities" (OuterVolumeSpecName: "utilities") pod "d0c0a0be-62f6-4642-aeab-2f08a5cffedb" (UID: "d0c0a0be-62f6-4642-aeab-2f08a5cffedb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.642248 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91f7141d-853e-4d6f-9b04-ad16b61d0dc7-kube-api-access-8b5fg" (OuterVolumeSpecName: "kube-api-access-8b5fg") pod "91f7141d-853e-4d6f-9b04-ad16b61d0dc7" (UID: "91f7141d-853e-4d6f-9b04-ad16b61d0dc7"). InnerVolumeSpecName "kube-api-access-8b5fg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.643575 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8d202d1-b4f6-4bc1-b633-56ba90788979-utilities" (OuterVolumeSpecName: "utilities") pod "a8d202d1-b4f6-4bc1-b633-56ba90788979" (UID: "a8d202d1-b4f6-4bc1-b633-56ba90788979"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.644011 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbd9e8a9-2637-4ef5-b24e-fd2d08788451-kube-api-access-ssdb8" (OuterVolumeSpecName: "kube-api-access-ssdb8") pod "dbd9e8a9-2637-4ef5-b24e-fd2d08788451" (UID: "dbd9e8a9-2637-4ef5-b24e-fd2d08788451"). InnerVolumeSpecName "kube-api-access-ssdb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.646502 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8d202d1-b4f6-4bc1-b633-56ba90788979-kube-api-access-bvqdw" (OuterVolumeSpecName: "kube-api-access-bvqdw") pod "a8d202d1-b4f6-4bc1-b633-56ba90788979" (UID: "a8d202d1-b4f6-4bc1-b633-56ba90788979"). InnerVolumeSpecName "kube-api-access-bvqdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.649787 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0c0a0be-62f6-4642-aeab-2f08a5cffedb-kube-api-access-tlcjx" (OuterVolumeSpecName: "kube-api-access-tlcjx") pod "d0c0a0be-62f6-4642-aeab-2f08a5cffedb" (UID: "d0c0a0be-62f6-4642-aeab-2f08a5cffedb"). InnerVolumeSpecName "kube-api-access-tlcjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.652968 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbd9e8a9-2637-4ef5-b24e-fd2d08788451-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "dbd9e8a9-2637-4ef5-b24e-fd2d08788451" (UID: "dbd9e8a9-2637-4ef5-b24e-fd2d08788451"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.664047 4910 scope.go:117] "RemoveContainer" containerID="939ec594fc4cfe6c99a2f7e2e0b03a77cccd9955afbb12c1917fe95c7081d213" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.675809 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0c0a0be-62f6-4642-aeab-2f08a5cffedb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d0c0a0be-62f6-4642-aeab-2f08a5cffedb" (UID: "d0c0a0be-62f6-4642-aeab-2f08a5cffedb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.676822 4910 scope.go:117] "RemoveContainer" containerID="12b28eb37b53ae2e54c57cefcdf1f16d3f628d80f23022494faea48f00050572" Feb 26 22:03:29 crc kubenswrapper[4910]: E0226 22:03:29.677312 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12b28eb37b53ae2e54c57cefcdf1f16d3f628d80f23022494faea48f00050572\": container with ID starting with 12b28eb37b53ae2e54c57cefcdf1f16d3f628d80f23022494faea48f00050572 not found: ID does not exist" containerID="12b28eb37b53ae2e54c57cefcdf1f16d3f628d80f23022494faea48f00050572" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.677401 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12b28eb37b53ae2e54c57cefcdf1f16d3f628d80f23022494faea48f00050572"} err="failed to get container status \"12b28eb37b53ae2e54c57cefcdf1f16d3f628d80f23022494faea48f00050572\": rpc error: code = NotFound desc = could not find container \"12b28eb37b53ae2e54c57cefcdf1f16d3f628d80f23022494faea48f00050572\": container with ID starting with 12b28eb37b53ae2e54c57cefcdf1f16d3f628d80f23022494faea48f00050572 not found: ID does not exist" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.677492 4910 scope.go:117] 
"RemoveContainer" containerID="240e03c90f0cbfaf99430276241d763a070f8a2b137b07e386c651e03324d526" Feb 26 22:03:29 crc kubenswrapper[4910]: E0226 22:03:29.678205 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"240e03c90f0cbfaf99430276241d763a070f8a2b137b07e386c651e03324d526\": container with ID starting with 240e03c90f0cbfaf99430276241d763a070f8a2b137b07e386c651e03324d526 not found: ID does not exist" containerID="240e03c90f0cbfaf99430276241d763a070f8a2b137b07e386c651e03324d526" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.678247 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"240e03c90f0cbfaf99430276241d763a070f8a2b137b07e386c651e03324d526"} err="failed to get container status \"240e03c90f0cbfaf99430276241d763a070f8a2b137b07e386c651e03324d526\": rpc error: code = NotFound desc = could not find container \"240e03c90f0cbfaf99430276241d763a070f8a2b137b07e386c651e03324d526\": container with ID starting with 240e03c90f0cbfaf99430276241d763a070f8a2b137b07e386c651e03324d526 not found: ID does not exist" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.678272 4910 scope.go:117] "RemoveContainer" containerID="939ec594fc4cfe6c99a2f7e2e0b03a77cccd9955afbb12c1917fe95c7081d213" Feb 26 22:03:29 crc kubenswrapper[4910]: E0226 22:03:29.678555 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"939ec594fc4cfe6c99a2f7e2e0b03a77cccd9955afbb12c1917fe95c7081d213\": container with ID starting with 939ec594fc4cfe6c99a2f7e2e0b03a77cccd9955afbb12c1917fe95c7081d213 not found: ID does not exist" containerID="939ec594fc4cfe6c99a2f7e2e0b03a77cccd9955afbb12c1917fe95c7081d213" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.678570 4910 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"939ec594fc4cfe6c99a2f7e2e0b03a77cccd9955afbb12c1917fe95c7081d213"} err="failed to get container status \"939ec594fc4cfe6c99a2f7e2e0b03a77cccd9955afbb12c1917fe95c7081d213\": rpc error: code = NotFound desc = could not find container \"939ec594fc4cfe6c99a2f7e2e0b03a77cccd9955afbb12c1917fe95c7081d213\": container with ID starting with 939ec594fc4cfe6c99a2f7e2e0b03a77cccd9955afbb12c1917fe95c7081d213 not found: ID does not exist" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.678583 4910 scope.go:117] "RemoveContainer" containerID="a903d517f9328235923b347d9ddc4e8266196e6d92274d4b352529dfd119fc6c" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.692319 4910 scope.go:117] "RemoveContainer" containerID="0d0e671b5fe648df34d4d4bf38ff2862dc377831b6eca68342cc077979e289c7" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.704815 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8d202d1-b4f6-4bc1-b633-56ba90788979-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8d202d1-b4f6-4bc1-b633-56ba90788979" (UID: "a8d202d1-b4f6-4bc1-b633-56ba90788979"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.707637 4910 scope.go:117] "RemoveContainer" containerID="efc66f02a1e864fd81a5c43277f25aa309d62f4f363e15736525e4e8fdd88ad2" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.724833 4910 scope.go:117] "RemoveContainer" containerID="a903d517f9328235923b347d9ddc4e8266196e6d92274d4b352529dfd119fc6c" Feb 26 22:03:29 crc kubenswrapper[4910]: E0226 22:03:29.725693 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a903d517f9328235923b347d9ddc4e8266196e6d92274d4b352529dfd119fc6c\": container with ID starting with a903d517f9328235923b347d9ddc4e8266196e6d92274d4b352529dfd119fc6c not found: ID does not exist" containerID="a903d517f9328235923b347d9ddc4e8266196e6d92274d4b352529dfd119fc6c" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.725782 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a903d517f9328235923b347d9ddc4e8266196e6d92274d4b352529dfd119fc6c"} err="failed to get container status \"a903d517f9328235923b347d9ddc4e8266196e6d92274d4b352529dfd119fc6c\": rpc error: code = NotFound desc = could not find container \"a903d517f9328235923b347d9ddc4e8266196e6d92274d4b352529dfd119fc6c\": container with ID starting with a903d517f9328235923b347d9ddc4e8266196e6d92274d4b352529dfd119fc6c not found: ID does not exist" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.725857 4910 scope.go:117] "RemoveContainer" containerID="0d0e671b5fe648df34d4d4bf38ff2862dc377831b6eca68342cc077979e289c7" Feb 26 22:03:29 crc kubenswrapper[4910]: E0226 22:03:29.726248 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d0e671b5fe648df34d4d4bf38ff2862dc377831b6eca68342cc077979e289c7\": container with ID starting with 
0d0e671b5fe648df34d4d4bf38ff2862dc377831b6eca68342cc077979e289c7 not found: ID does not exist" containerID="0d0e671b5fe648df34d4d4bf38ff2862dc377831b6eca68342cc077979e289c7" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.726267 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d0e671b5fe648df34d4d4bf38ff2862dc377831b6eca68342cc077979e289c7"} err="failed to get container status \"0d0e671b5fe648df34d4d4bf38ff2862dc377831b6eca68342cc077979e289c7\": rpc error: code = NotFound desc = could not find container \"0d0e671b5fe648df34d4d4bf38ff2862dc377831b6eca68342cc077979e289c7\": container with ID starting with 0d0e671b5fe648df34d4d4bf38ff2862dc377831b6eca68342cc077979e289c7 not found: ID does not exist" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.726280 4910 scope.go:117] "RemoveContainer" containerID="efc66f02a1e864fd81a5c43277f25aa309d62f4f363e15736525e4e8fdd88ad2" Feb 26 22:03:29 crc kubenswrapper[4910]: E0226 22:03:29.726567 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efc66f02a1e864fd81a5c43277f25aa309d62f4f363e15736525e4e8fdd88ad2\": container with ID starting with efc66f02a1e864fd81a5c43277f25aa309d62f4f363e15736525e4e8fdd88ad2 not found: ID does not exist" containerID="efc66f02a1e864fd81a5c43277f25aa309d62f4f363e15736525e4e8fdd88ad2" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.726587 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efc66f02a1e864fd81a5c43277f25aa309d62f4f363e15736525e4e8fdd88ad2"} err="failed to get container status \"efc66f02a1e864fd81a5c43277f25aa309d62f4f363e15736525e4e8fdd88ad2\": rpc error: code = NotFound desc = could not find container \"efc66f02a1e864fd81a5c43277f25aa309d62f4f363e15736525e4e8fdd88ad2\": container with ID starting with efc66f02a1e864fd81a5c43277f25aa309d62f4f363e15736525e4e8fdd88ad2 not found: ID does not 
exist" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.740320 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b5fg\" (UniqueName: \"kubernetes.io/projected/91f7141d-853e-4d6f-9b04-ad16b61d0dc7-kube-api-access-8b5fg\") on node \"crc\" DevicePath \"\"" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.740349 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlcjx\" (UniqueName: \"kubernetes.io/projected/d0c0a0be-62f6-4642-aeab-2f08a5cffedb-kube-api-access-tlcjx\") on node \"crc\" DevicePath \"\"" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.740360 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8d202d1-b4f6-4bc1-b633-56ba90788979-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.740369 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvqdw\" (UniqueName: \"kubernetes.io/projected/a8d202d1-b4f6-4bc1-b633-56ba90788979-kube-api-access-bvqdw\") on node \"crc\" DevicePath \"\"" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.740378 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8d202d1-b4f6-4bc1-b633-56ba90788979-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.740387 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91f7141d-853e-4d6f-9b04-ad16b61d0dc7-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.740395 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssdb8\" (UniqueName: \"kubernetes.io/projected/dbd9e8a9-2637-4ef5-b24e-fd2d08788451-kube-api-access-ssdb8\") on node \"crc\" DevicePath \"\"" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.740402 4910 
reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0c0a0be-62f6-4642-aeab-2f08a5cffedb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.740411 4910 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dbd9e8a9-2637-4ef5-b24e-fd2d08788451-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.740421 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0c0a0be-62f6-4642-aeab-2f08a5cffedb-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.740429 4910 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbd9e8a9-2637-4ef5-b24e-fd2d08788451-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.770900 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91f7141d-853e-4d6f-9b04-ad16b61d0dc7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91f7141d-853e-4d6f-9b04-ad16b61d0dc7" (UID: "91f7141d-853e-4d6f-9b04-ad16b61d0dc7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.788151 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-djrbn"] Feb 26 22:03:29 crc kubenswrapper[4910]: W0226 22:03:29.797234 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3e33226_da7e_4023_b932_36308bb5ccad.slice/crio-60118819e30dd038211f6a5ddd3b014b6c042e2f5bb1b56c098ce9993467e067 WatchSource:0}: Error finding container 60118819e30dd038211f6a5ddd3b014b6c042e2f5bb1b56c098ce9993467e067: Status 404 returned error can't find the container with id 60118819e30dd038211f6a5ddd3b014b6c042e2f5bb1b56c098ce9993467e067 Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.841850 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91f7141d-853e-4d6f-9b04-ad16b61d0dc7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.913501 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4333e88f-8502-46f4-9639-7af62ff1e63c" path="/var/lib/kubelet/pods/4333e88f-8502-46f4-9639-7af62ff1e63c/volumes" Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.924323 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ggtxj"] Feb 26 22:03:29 crc kubenswrapper[4910]: I0226 22:03:29.928582 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ggtxj"] Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.564031 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nwths" event={"ID":"91f7141d-853e-4d6f-9b04-ad16b61d0dc7","Type":"ContainerDied","Data":"a9ee81343f63d57c616b9d819386d61ac5da9e436e751c59954f09f9358eaee7"} Feb 26 22:03:30 crc 
kubenswrapper[4910]: I0226 22:03:30.564113 4910 scope.go:117] "RemoveContainer" containerID="600edb3a566f1b7c7073928f2eec5801475be41cdc76abe89c11bde4e8387bfe" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.564047 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nwths" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.567721 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-q2jtw" event={"ID":"dbd9e8a9-2637-4ef5-b24e-fd2d08788451","Type":"ContainerDied","Data":"cf131c846dc58f0f81deaff59e96ecde7e8060b4d4032a93708d21486c97361f"} Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.567814 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-q2jtw" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.573362 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dp8sv" event={"ID":"d0c0a0be-62f6-4642-aeab-2f08a5cffedb","Type":"ContainerDied","Data":"190601bb9091349b4dc597d5df3505f0584b6e281634b1c05905e407d7b88037"} Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.573380 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dp8sv" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.576239 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-djrbn" event={"ID":"c3e33226-da7e-4023-b932-36308bb5ccad","Type":"ContainerStarted","Data":"8680669575f821806b2871e39b42ebede2733af39a9077a32f9daa0fcf64ae4c"} Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.576314 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-djrbn" event={"ID":"c3e33226-da7e-4023-b932-36308bb5ccad","Type":"ContainerStarted","Data":"60118819e30dd038211f6a5ddd3b014b6c042e2f5bb1b56c098ce9993467e067"} Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.576717 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-djrbn" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.580007 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-djrbn" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.608030 4910 scope.go:117] "RemoveContainer" containerID="9a32c0ab1b1efc9da44909eba12cfecebb982000269b345c64449b086b6f476d" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.657831 4910 scope.go:117] "RemoveContainer" containerID="5e88ae7cedf3cdb7a06a33d12a7c167d1929e9b7f321ef778d8741e1285510e3" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.657923 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q2jtw"] Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.663455 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q2jtw"] Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.681065 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-dp8sv"] Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.684830 4910 scope.go:117] "RemoveContainer" containerID="05ed42c662abf017f810acece6c18dd30dd7d4df6d1f899ae804a7be774b8a01" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.690266 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dp8sv"] Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.693247 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nwths"] Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.696486 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nwths"] Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.702004 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-djrbn" podStartSLOduration=2.701925093 podStartE2EDuration="2.701925093s" podCreationTimestamp="2026-02-26 22:03:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:03:30.677069554 +0000 UTC m=+495.756560115" watchObservedRunningTime="2026-02-26 22:03:30.701925093 +0000 UTC m=+495.781415664" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.704363 4910 scope.go:117] "RemoveContainer" containerID="c0753f0ea9fdf31eeaa228511bd7cb92b4226e6a3f4379ac9a95c8aeff642790" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.717275 4910 scope.go:117] "RemoveContainer" containerID="9f132c529649b765a8d395a4cbdc787898d64afec46b1396115007d792405943" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.745473 4910 scope.go:117] "RemoveContainer" containerID="a0fa7765286d128186ff288ec7fd743dc4314157f9c0153f35941ca70421304f" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.948430 4910 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-knsd9"] Feb 26 22:03:30 crc kubenswrapper[4910]: E0226 22:03:30.948713 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8d202d1-b4f6-4bc1-b633-56ba90788979" containerName="extract-content" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.948733 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8d202d1-b4f6-4bc1-b633-56ba90788979" containerName="extract-content" Feb 26 22:03:30 crc kubenswrapper[4910]: E0226 22:03:30.948750 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8d202d1-b4f6-4bc1-b633-56ba90788979" containerName="extract-utilities" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.948761 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8d202d1-b4f6-4bc1-b633-56ba90788979" containerName="extract-utilities" Feb 26 22:03:30 crc kubenswrapper[4910]: E0226 22:03:30.948775 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91f7141d-853e-4d6f-9b04-ad16b61d0dc7" containerName="registry-server" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.948787 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="91f7141d-853e-4d6f-9b04-ad16b61d0dc7" containerName="registry-server" Feb 26 22:03:30 crc kubenswrapper[4910]: E0226 22:03:30.948802 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0c0a0be-62f6-4642-aeab-2f08a5cffedb" containerName="extract-utilities" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.948815 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c0a0be-62f6-4642-aeab-2f08a5cffedb" containerName="extract-utilities" Feb 26 22:03:30 crc kubenswrapper[4910]: E0226 22:03:30.948831 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91f7141d-853e-4d6f-9b04-ad16b61d0dc7" containerName="extract-utilities" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.948842 4910 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="91f7141d-853e-4d6f-9b04-ad16b61d0dc7" containerName="extract-utilities" Feb 26 22:03:30 crc kubenswrapper[4910]: E0226 22:03:30.948857 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbd9e8a9-2637-4ef5-b24e-fd2d08788451" containerName="marketplace-operator" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.948870 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbd9e8a9-2637-4ef5-b24e-fd2d08788451" containerName="marketplace-operator" Feb 26 22:03:30 crc kubenswrapper[4910]: E0226 22:03:30.948884 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0c0a0be-62f6-4642-aeab-2f08a5cffedb" containerName="extract-content" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.948895 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c0a0be-62f6-4642-aeab-2f08a5cffedb" containerName="extract-content" Feb 26 22:03:30 crc kubenswrapper[4910]: E0226 22:03:30.948916 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91f7141d-853e-4d6f-9b04-ad16b61d0dc7" containerName="extract-content" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.948928 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="91f7141d-853e-4d6f-9b04-ad16b61d0dc7" containerName="extract-content" Feb 26 22:03:30 crc kubenswrapper[4910]: E0226 22:03:30.948946 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4333e88f-8502-46f4-9639-7af62ff1e63c" containerName="extract-content" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.948957 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="4333e88f-8502-46f4-9639-7af62ff1e63c" containerName="extract-content" Feb 26 22:03:30 crc kubenswrapper[4910]: E0226 22:03:30.948972 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbd9e8a9-2637-4ef5-b24e-fd2d08788451" containerName="marketplace-operator" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.948983 4910 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="dbd9e8a9-2637-4ef5-b24e-fd2d08788451" containerName="marketplace-operator" Feb 26 22:03:30 crc kubenswrapper[4910]: E0226 22:03:30.949000 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0c0a0be-62f6-4642-aeab-2f08a5cffedb" containerName="registry-server" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.949011 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c0a0be-62f6-4642-aeab-2f08a5cffedb" containerName="registry-server" Feb 26 22:03:30 crc kubenswrapper[4910]: E0226 22:03:30.949028 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4333e88f-8502-46f4-9639-7af62ff1e63c" containerName="registry-server" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.949038 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="4333e88f-8502-46f4-9639-7af62ff1e63c" containerName="registry-server" Feb 26 22:03:30 crc kubenswrapper[4910]: E0226 22:03:30.949053 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8d202d1-b4f6-4bc1-b633-56ba90788979" containerName="registry-server" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.949088 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8d202d1-b4f6-4bc1-b633-56ba90788979" containerName="registry-server" Feb 26 22:03:30 crc kubenswrapper[4910]: E0226 22:03:30.949105 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4333e88f-8502-46f4-9639-7af62ff1e63c" containerName="extract-utilities" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.949116 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="4333e88f-8502-46f4-9639-7af62ff1e63c" containerName="extract-utilities" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.949304 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0c0a0be-62f6-4642-aeab-2f08a5cffedb" containerName="registry-server" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.949326 4910 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="dbd9e8a9-2637-4ef5-b24e-fd2d08788451" containerName="marketplace-operator" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.949344 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8d202d1-b4f6-4bc1-b633-56ba90788979" containerName="registry-server" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.949364 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="91f7141d-853e-4d6f-9b04-ad16b61d0dc7" containerName="registry-server" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.949382 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="4333e88f-8502-46f4-9639-7af62ff1e63c" containerName="registry-server" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.949658 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbd9e8a9-2637-4ef5-b24e-fd2d08788451" containerName="marketplace-operator" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.951362 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-knsd9" Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.955017 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-knsd9"] Feb 26 22:03:30 crc kubenswrapper[4910]: I0226 22:03:30.956451 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 26 22:03:31 crc kubenswrapper[4910]: I0226 22:03:31.070719 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59affc45-31f5-446b-8843-950a714f4c7d-catalog-content\") pod \"certified-operators-knsd9\" (UID: \"59affc45-31f5-446b-8843-950a714f4c7d\") " pod="openshift-marketplace/certified-operators-knsd9" Feb 26 22:03:31 crc kubenswrapper[4910]: I0226 22:03:31.070821 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5q5v\" (UniqueName: \"kubernetes.io/projected/59affc45-31f5-446b-8843-950a714f4c7d-kube-api-access-t5q5v\") pod \"certified-operators-knsd9\" (UID: \"59affc45-31f5-446b-8843-950a714f4c7d\") " pod="openshift-marketplace/certified-operators-knsd9" Feb 26 22:03:31 crc kubenswrapper[4910]: I0226 22:03:31.070871 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59affc45-31f5-446b-8843-950a714f4c7d-utilities\") pod \"certified-operators-knsd9\" (UID: \"59affc45-31f5-446b-8843-950a714f4c7d\") " pod="openshift-marketplace/certified-operators-knsd9" Feb 26 22:03:31 crc kubenswrapper[4910]: I0226 22:03:31.138740 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jsmnt"] Feb 26 22:03:31 crc kubenswrapper[4910]: I0226 22:03:31.140299 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jsmnt" Feb 26 22:03:31 crc kubenswrapper[4910]: I0226 22:03:31.145118 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 26 22:03:31 crc kubenswrapper[4910]: I0226 22:03:31.146112 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jsmnt"] Feb 26 22:03:31 crc kubenswrapper[4910]: I0226 22:03:31.172341 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59affc45-31f5-446b-8843-950a714f4c7d-catalog-content\") pod \"certified-operators-knsd9\" (UID: \"59affc45-31f5-446b-8843-950a714f4c7d\") " pod="openshift-marketplace/certified-operators-knsd9" Feb 26 22:03:31 crc kubenswrapper[4910]: I0226 22:03:31.172387 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5q5v\" (UniqueName: \"kubernetes.io/projected/59affc45-31f5-446b-8843-950a714f4c7d-kube-api-access-t5q5v\") pod \"certified-operators-knsd9\" (UID: \"59affc45-31f5-446b-8843-950a714f4c7d\") " pod="openshift-marketplace/certified-operators-knsd9" Feb 26 22:03:31 crc kubenswrapper[4910]: I0226 22:03:31.172416 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59affc45-31f5-446b-8843-950a714f4c7d-utilities\") pod \"certified-operators-knsd9\" (UID: \"59affc45-31f5-446b-8843-950a714f4c7d\") " pod="openshift-marketplace/certified-operators-knsd9" Feb 26 22:03:31 crc kubenswrapper[4910]: I0226 22:03:31.172788 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59affc45-31f5-446b-8843-950a714f4c7d-catalog-content\") pod \"certified-operators-knsd9\" (UID: \"59affc45-31f5-446b-8843-950a714f4c7d\") " 
pod="openshift-marketplace/certified-operators-knsd9" Feb 26 22:03:31 crc kubenswrapper[4910]: I0226 22:03:31.172802 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59affc45-31f5-446b-8843-950a714f4c7d-utilities\") pod \"certified-operators-knsd9\" (UID: \"59affc45-31f5-446b-8843-950a714f4c7d\") " pod="openshift-marketplace/certified-operators-knsd9" Feb 26 22:03:31 crc kubenswrapper[4910]: I0226 22:03:31.192187 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5q5v\" (UniqueName: \"kubernetes.io/projected/59affc45-31f5-446b-8843-950a714f4c7d-kube-api-access-t5q5v\") pod \"certified-operators-knsd9\" (UID: \"59affc45-31f5-446b-8843-950a714f4c7d\") " pod="openshift-marketplace/certified-operators-knsd9" Feb 26 22:03:31 crc kubenswrapper[4910]: I0226 22:03:31.273503 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-knsd9" Feb 26 22:03:31 crc kubenswrapper[4910]: I0226 22:03:31.273787 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a507c42-00d6-442d-a433-4e9f89e6dedc-utilities\") pod \"community-operators-jsmnt\" (UID: \"1a507c42-00d6-442d-a433-4e9f89e6dedc\") " pod="openshift-marketplace/community-operators-jsmnt" Feb 26 22:03:31 crc kubenswrapper[4910]: I0226 22:03:31.273827 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlzjr\" (UniqueName: \"kubernetes.io/projected/1a507c42-00d6-442d-a433-4e9f89e6dedc-kube-api-access-mlzjr\") pod \"community-operators-jsmnt\" (UID: \"1a507c42-00d6-442d-a433-4e9f89e6dedc\") " pod="openshift-marketplace/community-operators-jsmnt" Feb 26 22:03:31 crc kubenswrapper[4910]: I0226 22:03:31.273864 4910 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a507c42-00d6-442d-a433-4e9f89e6dedc-catalog-content\") pod \"community-operators-jsmnt\" (UID: \"1a507c42-00d6-442d-a433-4e9f89e6dedc\") " pod="openshift-marketplace/community-operators-jsmnt" Feb 26 22:03:31 crc kubenswrapper[4910]: I0226 22:03:31.375039 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a507c42-00d6-442d-a433-4e9f89e6dedc-catalog-content\") pod \"community-operators-jsmnt\" (UID: \"1a507c42-00d6-442d-a433-4e9f89e6dedc\") " pod="openshift-marketplace/community-operators-jsmnt" Feb 26 22:03:31 crc kubenswrapper[4910]: I0226 22:03:31.375143 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a507c42-00d6-442d-a433-4e9f89e6dedc-utilities\") pod \"community-operators-jsmnt\" (UID: \"1a507c42-00d6-442d-a433-4e9f89e6dedc\") " pod="openshift-marketplace/community-operators-jsmnt" Feb 26 22:03:31 crc kubenswrapper[4910]: I0226 22:03:31.375208 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlzjr\" (UniqueName: \"kubernetes.io/projected/1a507c42-00d6-442d-a433-4e9f89e6dedc-kube-api-access-mlzjr\") pod \"community-operators-jsmnt\" (UID: \"1a507c42-00d6-442d-a433-4e9f89e6dedc\") " pod="openshift-marketplace/community-operators-jsmnt" Feb 26 22:03:31 crc kubenswrapper[4910]: I0226 22:03:31.375703 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a507c42-00d6-442d-a433-4e9f89e6dedc-catalog-content\") pod \"community-operators-jsmnt\" (UID: \"1a507c42-00d6-442d-a433-4e9f89e6dedc\") " pod="openshift-marketplace/community-operators-jsmnt" Feb 26 22:03:31 crc kubenswrapper[4910]: I0226 22:03:31.375714 4910 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a507c42-00d6-442d-a433-4e9f89e6dedc-utilities\") pod \"community-operators-jsmnt\" (UID: \"1a507c42-00d6-442d-a433-4e9f89e6dedc\") " pod="openshift-marketplace/community-operators-jsmnt" Feb 26 22:03:31 crc kubenswrapper[4910]: I0226 22:03:31.404146 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlzjr\" (UniqueName: \"kubernetes.io/projected/1a507c42-00d6-442d-a433-4e9f89e6dedc-kube-api-access-mlzjr\") pod \"community-operators-jsmnt\" (UID: \"1a507c42-00d6-442d-a433-4e9f89e6dedc\") " pod="openshift-marketplace/community-operators-jsmnt" Feb 26 22:03:31 crc kubenswrapper[4910]: I0226 22:03:31.462201 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jsmnt" Feb 26 22:03:32 crc kubenswrapper[4910]: I0226 22:03:31.705681 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-knsd9"] Feb 26 22:03:32 crc kubenswrapper[4910]: I0226 22:03:31.908342 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91f7141d-853e-4d6f-9b04-ad16b61d0dc7" path="/var/lib/kubelet/pods/91f7141d-853e-4d6f-9b04-ad16b61d0dc7/volumes" Feb 26 22:03:32 crc kubenswrapper[4910]: I0226 22:03:31.908915 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8d202d1-b4f6-4bc1-b633-56ba90788979" path="/var/lib/kubelet/pods/a8d202d1-b4f6-4bc1-b633-56ba90788979/volumes" Feb 26 22:03:32 crc kubenswrapper[4910]: I0226 22:03:31.909469 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0c0a0be-62f6-4642-aeab-2f08a5cffedb" path="/var/lib/kubelet/pods/d0c0a0be-62f6-4642-aeab-2f08a5cffedb/volumes" Feb 26 22:03:32 crc kubenswrapper[4910]: I0226 22:03:31.910018 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbd9e8a9-2637-4ef5-b24e-fd2d08788451" 
path="/var/lib/kubelet/pods/dbd9e8a9-2637-4ef5-b24e-fd2d08788451/volumes" Feb 26 22:03:32 crc kubenswrapper[4910]: I0226 22:03:32.520107 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jsmnt"] Feb 26 22:03:32 crc kubenswrapper[4910]: W0226 22:03:32.526203 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a507c42_00d6_442d_a433_4e9f89e6dedc.slice/crio-3621f10615e926c951c629595569a8f9ccc35efc8273c446738df7dd2f9f504a WatchSource:0}: Error finding container 3621f10615e926c951c629595569a8f9ccc35efc8273c446738df7dd2f9f504a: Status 404 returned error can't find the container with id 3621f10615e926c951c629595569a8f9ccc35efc8273c446738df7dd2f9f504a Feb 26 22:03:32 crc kubenswrapper[4910]: I0226 22:03:32.604477 4910 generic.go:334] "Generic (PLEG): container finished" podID="59affc45-31f5-446b-8843-950a714f4c7d" containerID="ec788b2c869bf46d0223a62477f819574d4751e365d5debd515d2c187417fcf5" exitCode=0 Feb 26 22:03:32 crc kubenswrapper[4910]: I0226 22:03:32.604587 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knsd9" event={"ID":"59affc45-31f5-446b-8843-950a714f4c7d","Type":"ContainerDied","Data":"ec788b2c869bf46d0223a62477f819574d4751e365d5debd515d2c187417fcf5"} Feb 26 22:03:32 crc kubenswrapper[4910]: I0226 22:03:32.604655 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knsd9" event={"ID":"59affc45-31f5-446b-8843-950a714f4c7d","Type":"ContainerStarted","Data":"c46061e388212695bbd8a7fdf0659b9e66d623c96c06ad1f07c2d82d315f3345"} Feb 26 22:03:32 crc kubenswrapper[4910]: I0226 22:03:32.605665 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jsmnt" 
event={"ID":"1a507c42-00d6-442d-a433-4e9f89e6dedc","Type":"ContainerStarted","Data":"3621f10615e926c951c629595569a8f9ccc35efc8273c446738df7dd2f9f504a"} Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.343955 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pm5ll"] Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.344862 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pm5ll" Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.348781 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.363523 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pm5ll"] Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.415784 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82-utilities\") pod \"redhat-marketplace-pm5ll\" (UID: \"9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82\") " pod="openshift-marketplace/redhat-marketplace-pm5ll" Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.416122 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82-catalog-content\") pod \"redhat-marketplace-pm5ll\" (UID: \"9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82\") " pod="openshift-marketplace/redhat-marketplace-pm5ll" Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.416319 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6bfj\" (UniqueName: \"kubernetes.io/projected/9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82-kube-api-access-r6bfj\") pod 
\"redhat-marketplace-pm5ll\" (UID: \"9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82\") " pod="openshift-marketplace/redhat-marketplace-pm5ll" Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.517899 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82-utilities\") pod \"redhat-marketplace-pm5ll\" (UID: \"9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82\") " pod="openshift-marketplace/redhat-marketplace-pm5ll" Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.518031 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82-catalog-content\") pod \"redhat-marketplace-pm5ll\" (UID: \"9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82\") " pod="openshift-marketplace/redhat-marketplace-pm5ll" Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.518102 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6bfj\" (UniqueName: \"kubernetes.io/projected/9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82-kube-api-access-r6bfj\") pod \"redhat-marketplace-pm5ll\" (UID: \"9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82\") " pod="openshift-marketplace/redhat-marketplace-pm5ll" Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.518798 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82-utilities\") pod \"redhat-marketplace-pm5ll\" (UID: \"9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82\") " pod="openshift-marketplace/redhat-marketplace-pm5ll" Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.519109 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82-catalog-content\") pod \"redhat-marketplace-pm5ll\" (UID: 
\"9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82\") " pod="openshift-marketplace/redhat-marketplace-pm5ll" Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.536553 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mptdj"] Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.538084 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mptdj" Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.541754 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.548251 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6bfj\" (UniqueName: \"kubernetes.io/projected/9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82-kube-api-access-r6bfj\") pod \"redhat-marketplace-pm5ll\" (UID: \"9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82\") " pod="openshift-marketplace/redhat-marketplace-pm5ll" Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.553336 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mptdj"] Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.613571 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knsd9" event={"ID":"59affc45-31f5-446b-8843-950a714f4c7d","Type":"ContainerStarted","Data":"33d98952479d992b68c27226a167113843e81e55218cc381e216353d0fa969cd"} Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.617429 4910 generic.go:334] "Generic (PLEG): container finished" podID="1a507c42-00d6-442d-a433-4e9f89e6dedc" containerID="178617afd85774b9c963189c80360873427ab260b0ab1908078e66b9fbc1da23" exitCode=0 Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.617459 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jsmnt" 
event={"ID":"1a507c42-00d6-442d-a433-4e9f89e6dedc","Type":"ContainerDied","Data":"178617afd85774b9c963189c80360873427ab260b0ab1908078e66b9fbc1da23"} Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.619493 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a823f571-9d20-4be8-b0e2-6c71d5437ddf-utilities\") pod \"redhat-operators-mptdj\" (UID: \"a823f571-9d20-4be8-b0e2-6c71d5437ddf\") " pod="openshift-marketplace/redhat-operators-mptdj" Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.619530 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnkm9\" (UniqueName: \"kubernetes.io/projected/a823f571-9d20-4be8-b0e2-6c71d5437ddf-kube-api-access-qnkm9\") pod \"redhat-operators-mptdj\" (UID: \"a823f571-9d20-4be8-b0e2-6c71d5437ddf\") " pod="openshift-marketplace/redhat-operators-mptdj" Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.619577 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a823f571-9d20-4be8-b0e2-6c71d5437ddf-catalog-content\") pod \"redhat-operators-mptdj\" (UID: \"a823f571-9d20-4be8-b0e2-6c71d5437ddf\") " pod="openshift-marketplace/redhat-operators-mptdj" Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.728572 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnkm9\" (UniqueName: \"kubernetes.io/projected/a823f571-9d20-4be8-b0e2-6c71d5437ddf-kube-api-access-qnkm9\") pod \"redhat-operators-mptdj\" (UID: \"a823f571-9d20-4be8-b0e2-6c71d5437ddf\") " pod="openshift-marketplace/redhat-operators-mptdj" Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.728655 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a823f571-9d20-4be8-b0e2-6c71d5437ddf-catalog-content\") pod \"redhat-operators-mptdj\" (UID: \"a823f571-9d20-4be8-b0e2-6c71d5437ddf\") " pod="openshift-marketplace/redhat-operators-mptdj" Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.728744 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a823f571-9d20-4be8-b0e2-6c71d5437ddf-utilities\") pod \"redhat-operators-mptdj\" (UID: \"a823f571-9d20-4be8-b0e2-6c71d5437ddf\") " pod="openshift-marketplace/redhat-operators-mptdj" Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.729330 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a823f571-9d20-4be8-b0e2-6c71d5437ddf-catalog-content\") pod \"redhat-operators-mptdj\" (UID: \"a823f571-9d20-4be8-b0e2-6c71d5437ddf\") " pod="openshift-marketplace/redhat-operators-mptdj" Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.729628 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a823f571-9d20-4be8-b0e2-6c71d5437ddf-utilities\") pod \"redhat-operators-mptdj\" (UID: \"a823f571-9d20-4be8-b0e2-6c71d5437ddf\") " pod="openshift-marketplace/redhat-operators-mptdj" Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.746622 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnkm9\" (UniqueName: \"kubernetes.io/projected/a823f571-9d20-4be8-b0e2-6c71d5437ddf-kube-api-access-qnkm9\") pod \"redhat-operators-mptdj\" (UID: \"a823f571-9d20-4be8-b0e2-6c71d5437ddf\") " pod="openshift-marketplace/redhat-operators-mptdj" Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.755611 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pm5ll" Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.885630 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mptdj" Feb 26 22:03:33 crc kubenswrapper[4910]: I0226 22:03:33.939890 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-298fw" podUID="b050f320-6f26-4c79-88cc-ceb481369169" containerName="registry" containerID="cri-o://b9bba1068aaa72ef589dfd5238ccf4e9eeb2f9c6684cd2ac3f58b87608ef01b2" gracePeriod=30 Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.145007 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pm5ll"] Feb 26 22:03:34 crc kubenswrapper[4910]: W0226 22:03:34.153315 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ee55d7c_7dac_4d67_9c5f_de60ebb6ad82.slice/crio-3f17c2c7f60ea562d63b79711cf6b5a16b729d012cb2865678d26cb896b42912 WatchSource:0}: Error finding container 3f17c2c7f60ea562d63b79711cf6b5a16b729d012cb2865678d26cb896b42912: Status 404 returned error can't find the container with id 3f17c2c7f60ea562d63b79711cf6b5a16b729d012cb2865678d26cb896b42912 Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.268214 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mptdj"] Feb 26 22:03:34 crc kubenswrapper[4910]: W0226 22:03:34.288319 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda823f571_9d20_4be8_b0e2_6c71d5437ddf.slice/crio-5706eb2527bf92020241d5e555d679bfa3f167bce59bdcf8cb58e9d4617aae66 WatchSource:0}: Error finding container 5706eb2527bf92020241d5e555d679bfa3f167bce59bdcf8cb58e9d4617aae66: Status 404 returned error can't find the container with id 
5706eb2527bf92020241d5e555d679bfa3f167bce59bdcf8cb58e9d4617aae66 Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.418602 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.537709 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b050f320-6f26-4c79-88cc-ceb481369169-registry-certificates\") pod \"b050f320-6f26-4c79-88cc-ceb481369169\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.537786 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8gqb\" (UniqueName: \"kubernetes.io/projected/b050f320-6f26-4c79-88cc-ceb481369169-kube-api-access-q8gqb\") pod \"b050f320-6f26-4c79-88cc-ceb481369169\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.537841 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b050f320-6f26-4c79-88cc-ceb481369169-installation-pull-secrets\") pod \"b050f320-6f26-4c79-88cc-ceb481369169\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.537900 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b050f320-6f26-4c79-88cc-ceb481369169-ca-trust-extracted\") pod \"b050f320-6f26-4c79-88cc-ceb481369169\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.537960 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/b050f320-6f26-4c79-88cc-ceb481369169-bound-sa-token\") pod \"b050f320-6f26-4c79-88cc-ceb481369169\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.538121 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"b050f320-6f26-4c79-88cc-ceb481369169\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.538225 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b050f320-6f26-4c79-88cc-ceb481369169-registry-tls\") pod \"b050f320-6f26-4c79-88cc-ceb481369169\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.538275 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b050f320-6f26-4c79-88cc-ceb481369169-trusted-ca\") pod \"b050f320-6f26-4c79-88cc-ceb481369169\" (UID: \"b050f320-6f26-4c79-88cc-ceb481369169\") " Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.540011 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b050f320-6f26-4c79-88cc-ceb481369169-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "b050f320-6f26-4c79-88cc-ceb481369169" (UID: "b050f320-6f26-4c79-88cc-ceb481369169"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.540304 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b050f320-6f26-4c79-88cc-ceb481369169-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "b050f320-6f26-4c79-88cc-ceb481369169" (UID: "b050f320-6f26-4c79-88cc-ceb481369169"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.546269 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b050f320-6f26-4c79-88cc-ceb481369169-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "b050f320-6f26-4c79-88cc-ceb481369169" (UID: "b050f320-6f26-4c79-88cc-ceb481369169"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.546355 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b050f320-6f26-4c79-88cc-ceb481369169-kube-api-access-q8gqb" (OuterVolumeSpecName: "kube-api-access-q8gqb") pod "b050f320-6f26-4c79-88cc-ceb481369169" (UID: "b050f320-6f26-4c79-88cc-ceb481369169"). InnerVolumeSpecName "kube-api-access-q8gqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.547133 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b050f320-6f26-4c79-88cc-ceb481369169-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "b050f320-6f26-4c79-88cc-ceb481369169" (UID: "b050f320-6f26-4c79-88cc-ceb481369169"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.547198 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b050f320-6f26-4c79-88cc-ceb481369169-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "b050f320-6f26-4c79-88cc-ceb481369169" (UID: "b050f320-6f26-4c79-88cc-ceb481369169"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.558628 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "b050f320-6f26-4c79-88cc-ceb481369169" (UID: "b050f320-6f26-4c79-88cc-ceb481369169"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.563601 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b050f320-6f26-4c79-88cc-ceb481369169-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "b050f320-6f26-4c79-88cc-ceb481369169" (UID: "b050f320-6f26-4c79-88cc-ceb481369169"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.624060 4910 generic.go:334] "Generic (PLEG): container finished" podID="59affc45-31f5-446b-8843-950a714f4c7d" containerID="33d98952479d992b68c27226a167113843e81e55218cc381e216353d0fa969cd" exitCode=0 Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.624146 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knsd9" event={"ID":"59affc45-31f5-446b-8843-950a714f4c7d","Type":"ContainerDied","Data":"33d98952479d992b68c27226a167113843e81e55218cc381e216353d0fa969cd"} Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.633206 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jsmnt" event={"ID":"1a507c42-00d6-442d-a433-4e9f89e6dedc","Type":"ContainerStarted","Data":"1b3e9fcbbed6c01c6db5032e8c32edc19a491561a3a18faebdbe1363a65dddfb"} Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.635292 4910 generic.go:334] "Generic (PLEG): container finished" podID="9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82" containerID="95bb3bc9033f67b91a81dcf1f1a164391e5e2e2c78ca8e3e02c1bb02727b82cd" exitCode=0 Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.635430 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pm5ll" event={"ID":"9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82","Type":"ContainerDied","Data":"95bb3bc9033f67b91a81dcf1f1a164391e5e2e2c78ca8e3e02c1bb02727b82cd"} Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.635495 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pm5ll" event={"ID":"9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82","Type":"ContainerStarted","Data":"3f17c2c7f60ea562d63b79711cf6b5a16b729d012cb2865678d26cb896b42912"} Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.636821 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-298fw" Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.636822 4910 generic.go:334] "Generic (PLEG): container finished" podID="b050f320-6f26-4c79-88cc-ceb481369169" containerID="b9bba1068aaa72ef589dfd5238ccf4e9eeb2f9c6684cd2ac3f58b87608ef01b2" exitCode=0 Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.636888 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-298fw" event={"ID":"b050f320-6f26-4c79-88cc-ceb481369169","Type":"ContainerDied","Data":"b9bba1068aaa72ef589dfd5238ccf4e9eeb2f9c6684cd2ac3f58b87608ef01b2"} Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.637210 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-298fw" event={"ID":"b050f320-6f26-4c79-88cc-ceb481369169","Type":"ContainerDied","Data":"8234023c0abae29e4e9fa47f6f2c2ceefa0fce54555f1c0cd57f5ac166c85d92"} Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.637238 4910 scope.go:117] "RemoveContainer" containerID="b9bba1068aaa72ef589dfd5238ccf4e9eeb2f9c6684cd2ac3f58b87608ef01b2" Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.639098 4910 generic.go:334] "Generic (PLEG): container finished" podID="a823f571-9d20-4be8-b0e2-6c71d5437ddf" containerID="ab396f4ba92faa13d0d9e13d01654c37570349648fac4171ade3eaa8e1681338" exitCode=0 Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.639144 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mptdj" event={"ID":"a823f571-9d20-4be8-b0e2-6c71d5437ddf","Type":"ContainerDied","Data":"ab396f4ba92faa13d0d9e13d01654c37570349648fac4171ade3eaa8e1681338"} Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.639339 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mptdj" 
event={"ID":"a823f571-9d20-4be8-b0e2-6c71d5437ddf","Type":"ContainerStarted","Data":"5706eb2527bf92020241d5e555d679bfa3f167bce59bdcf8cb58e9d4617aae66"} Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.639273 4910 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b050f320-6f26-4c79-88cc-ceb481369169-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.639396 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8gqb\" (UniqueName: \"kubernetes.io/projected/b050f320-6f26-4c79-88cc-ceb481369169-kube-api-access-q8gqb\") on node \"crc\" DevicePath \"\"" Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.639415 4910 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b050f320-6f26-4c79-88cc-ceb481369169-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.639433 4910 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b050f320-6f26-4c79-88cc-ceb481369169-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.639450 4910 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b050f320-6f26-4c79-88cc-ceb481369169-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.639466 4910 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b050f320-6f26-4c79-88cc-ceb481369169-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.639481 4910 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b050f320-6f26-4c79-88cc-ceb481369169-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.652569 4910 scope.go:117] "RemoveContainer" containerID="b9bba1068aaa72ef589dfd5238ccf4e9eeb2f9c6684cd2ac3f58b87608ef01b2" Feb 26 22:03:34 crc kubenswrapper[4910]: E0226 22:03:34.654508 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9bba1068aaa72ef589dfd5238ccf4e9eeb2f9c6684cd2ac3f58b87608ef01b2\": container with ID starting with b9bba1068aaa72ef589dfd5238ccf4e9eeb2f9c6684cd2ac3f58b87608ef01b2 not found: ID does not exist" containerID="b9bba1068aaa72ef589dfd5238ccf4e9eeb2f9c6684cd2ac3f58b87608ef01b2" Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.654536 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9bba1068aaa72ef589dfd5238ccf4e9eeb2f9c6684cd2ac3f58b87608ef01b2"} err="failed to get container status \"b9bba1068aaa72ef589dfd5238ccf4e9eeb2f9c6684cd2ac3f58b87608ef01b2\": rpc error: code = NotFound desc = could not find container \"b9bba1068aaa72ef589dfd5238ccf4e9eeb2f9c6684cd2ac3f58b87608ef01b2\": container with ID starting with b9bba1068aaa72ef589dfd5238ccf4e9eeb2f9c6684cd2ac3f58b87608ef01b2 not found: ID does not exist" Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.738512 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-298fw"] Feb 26 22:03:34 crc kubenswrapper[4910]: I0226 22:03:34.743232 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-298fw"] Feb 26 22:03:35 crc kubenswrapper[4910]: I0226 22:03:35.344246 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 22:03:35 crc kubenswrapper[4910]: I0226 22:03:35.648468 4910 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mptdj" event={"ID":"a823f571-9d20-4be8-b0e2-6c71d5437ddf","Type":"ContainerStarted","Data":"e27ca00aee9ba9a6b1da524e08666b08ad0a8ab7bc73328a4356e5b66188960b"} Feb 26 22:03:35 crc kubenswrapper[4910]: I0226 22:03:35.650936 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knsd9" event={"ID":"59affc45-31f5-446b-8843-950a714f4c7d","Type":"ContainerStarted","Data":"9c36092c2596f03aca4495a67673f5fc3571a7f56aa1cc62c35e0273b2da37ed"} Feb 26 22:03:35 crc kubenswrapper[4910]: I0226 22:03:35.652779 4910 generic.go:334] "Generic (PLEG): container finished" podID="1a507c42-00d6-442d-a433-4e9f89e6dedc" containerID="1b3e9fcbbed6c01c6db5032e8c32edc19a491561a3a18faebdbe1363a65dddfb" exitCode=0 Feb 26 22:03:35 crc kubenswrapper[4910]: I0226 22:03:35.652810 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jsmnt" event={"ID":"1a507c42-00d6-442d-a433-4e9f89e6dedc","Type":"ContainerDied","Data":"1b3e9fcbbed6c01c6db5032e8c32edc19a491561a3a18faebdbe1363a65dddfb"} Feb 26 22:03:35 crc kubenswrapper[4910]: I0226 22:03:35.654827 4910 generic.go:334] "Generic (PLEG): container finished" podID="9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82" containerID="7985fdda844d873f4093d4b2d5c1332edd92a7090ff2d973f82e0f6488dc2b07" exitCode=0 Feb 26 22:03:35 crc kubenswrapper[4910]: I0226 22:03:35.654883 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pm5ll" event={"ID":"9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82","Type":"ContainerDied","Data":"7985fdda844d873f4093d4b2d5c1332edd92a7090ff2d973f82e0f6488dc2b07"} Feb 26 22:03:35 crc kubenswrapper[4910]: I0226 22:03:35.766900 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-knsd9" podStartSLOduration=3.353265581 podStartE2EDuration="5.766877172s" 
podCreationTimestamp="2026-02-26 22:03:30 +0000 UTC" firstStartedPulling="2026-02-26 22:03:32.606931969 +0000 UTC m=+497.686422510" lastFinishedPulling="2026-02-26 22:03:35.02054354 +0000 UTC m=+500.100034101" observedRunningTime="2026-02-26 22:03:35.764700713 +0000 UTC m=+500.844191254" watchObservedRunningTime="2026-02-26 22:03:35.766877172 +0000 UTC m=+500.846367713" Feb 26 22:03:35 crc kubenswrapper[4910]: I0226 22:03:35.907940 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b050f320-6f26-4c79-88cc-ceb481369169" path="/var/lib/kubelet/pods/b050f320-6f26-4c79-88cc-ceb481369169/volumes" Feb 26 22:03:36 crc kubenswrapper[4910]: I0226 22:03:36.672341 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jsmnt" event={"ID":"1a507c42-00d6-442d-a433-4e9f89e6dedc","Type":"ContainerStarted","Data":"5556166b1bcf870274a0aed4e7e8da4a91fcc9ed1d32cb38a0a80638ac87dccd"} Feb 26 22:03:36 crc kubenswrapper[4910]: I0226 22:03:36.676838 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pm5ll" event={"ID":"9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82","Type":"ContainerStarted","Data":"068c01f8b3e954456fd44520181fdc571732c2b69528152b112018ff737e9974"} Feb 26 22:03:36 crc kubenswrapper[4910]: I0226 22:03:36.679295 4910 generic.go:334] "Generic (PLEG): container finished" podID="a823f571-9d20-4be8-b0e2-6c71d5437ddf" containerID="e27ca00aee9ba9a6b1da524e08666b08ad0a8ab7bc73328a4356e5b66188960b" exitCode=0 Feb 26 22:03:36 crc kubenswrapper[4910]: I0226 22:03:36.679971 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mptdj" event={"ID":"a823f571-9d20-4be8-b0e2-6c71d5437ddf","Type":"ContainerDied","Data":"e27ca00aee9ba9a6b1da524e08666b08ad0a8ab7bc73328a4356e5b66188960b"} Feb 26 22:03:36 crc kubenswrapper[4910]: I0226 22:03:36.713042 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-jsmnt" podStartSLOduration=3.198364334 podStartE2EDuration="5.713024117s" podCreationTimestamp="2026-02-26 22:03:31 +0000 UTC" firstStartedPulling="2026-02-26 22:03:33.629991857 +0000 UTC m=+498.709482408" lastFinishedPulling="2026-02-26 22:03:36.14465161 +0000 UTC m=+501.224142191" observedRunningTime="2026-02-26 22:03:36.693827862 +0000 UTC m=+501.773318433" watchObservedRunningTime="2026-02-26 22:03:36.713024117 +0000 UTC m=+501.792514658" Feb 26 22:03:36 crc kubenswrapper[4910]: I0226 22:03:36.736297 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pm5ll" podStartSLOduration=2.272122648 podStartE2EDuration="3.736279603s" podCreationTimestamp="2026-02-26 22:03:33 +0000 UTC" firstStartedPulling="2026-02-26 22:03:34.655428069 +0000 UTC m=+499.734918620" lastFinishedPulling="2026-02-26 22:03:36.119585034 +0000 UTC m=+501.199075575" observedRunningTime="2026-02-26 22:03:36.713605623 +0000 UTC m=+501.793096164" watchObservedRunningTime="2026-02-26 22:03:36.736279603 +0000 UTC m=+501.815770154" Feb 26 22:03:37 crc kubenswrapper[4910]: I0226 22:03:37.703142 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mptdj" event={"ID":"a823f571-9d20-4be8-b0e2-6c71d5437ddf","Type":"ContainerStarted","Data":"4b708e8c8cac467fc797106c3638a264aa95b322c794f50705c75660a1bcfd94"} Feb 26 22:03:41 crc kubenswrapper[4910]: I0226 22:03:41.274479 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-knsd9" Feb 26 22:03:41 crc kubenswrapper[4910]: I0226 22:03:41.274875 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-knsd9" Feb 26 22:03:41 crc kubenswrapper[4910]: I0226 22:03:41.343254 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-knsd9" Feb 26 22:03:41 crc kubenswrapper[4910]: I0226 22:03:41.361866 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mptdj" podStartSLOduration=5.95372508 podStartE2EDuration="8.36184859s" podCreationTimestamp="2026-02-26 22:03:33 +0000 UTC" firstStartedPulling="2026-02-26 22:03:34.641249071 +0000 UTC m=+499.720739652" lastFinishedPulling="2026-02-26 22:03:37.049372621 +0000 UTC m=+502.128863162" observedRunningTime="2026-02-26 22:03:37.721704251 +0000 UTC m=+502.801194792" watchObservedRunningTime="2026-02-26 22:03:41.36184859 +0000 UTC m=+506.441339131" Feb 26 22:03:41 crc kubenswrapper[4910]: I0226 22:03:41.463098 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jsmnt" Feb 26 22:03:41 crc kubenswrapper[4910]: I0226 22:03:41.463138 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jsmnt" Feb 26 22:03:41 crc kubenswrapper[4910]: I0226 22:03:41.497710 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jsmnt" Feb 26 22:03:41 crc kubenswrapper[4910]: I0226 22:03:41.794053 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jsmnt" Feb 26 22:03:41 crc kubenswrapper[4910]: I0226 22:03:41.805793 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-knsd9" Feb 26 22:03:43 crc kubenswrapper[4910]: I0226 22:03:43.756317 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pm5ll" Feb 26 22:03:43 crc kubenswrapper[4910]: I0226 22:03:43.756370 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-pm5ll" Feb 26 22:03:43 crc kubenswrapper[4910]: I0226 22:03:43.805444 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pm5ll" Feb 26 22:03:43 crc kubenswrapper[4910]: I0226 22:03:43.886771 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mptdj" Feb 26 22:03:43 crc kubenswrapper[4910]: I0226 22:03:43.887032 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mptdj" Feb 26 22:03:44 crc kubenswrapper[4910]: I0226 22:03:44.831744 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pm5ll" Feb 26 22:03:44 crc kubenswrapper[4910]: I0226 22:03:44.921805 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mptdj" podUID="a823f571-9d20-4be8-b0e2-6c71d5437ddf" containerName="registry-server" probeResult="failure" output=< Feb 26 22:03:44 crc kubenswrapper[4910]: timeout: failed to connect service ":50051" within 1s Feb 26 22:03:44 crc kubenswrapper[4910]: > Feb 26 22:03:53 crc kubenswrapper[4910]: I0226 22:03:53.945318 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mptdj" Feb 26 22:03:54 crc kubenswrapper[4910]: I0226 22:03:54.007384 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mptdj" Feb 26 22:03:55 crc kubenswrapper[4910]: I0226 22:03:55.727862 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:03:55 crc kubenswrapper[4910]: I0226 
22:03:55.727920 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:03:55 crc kubenswrapper[4910]: I0226 22:03:55.727963 4910 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" Feb 26 22:03:55 crc kubenswrapper[4910]: I0226 22:03:55.728453 4910 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"64124dfbd3fd0964011ae7c39d92177145f45ba34946932fdb21e2ba093e20f6"} pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 22:03:55 crc kubenswrapper[4910]: I0226 22:03:55.728501 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" containerID="cri-o://64124dfbd3fd0964011ae7c39d92177145f45ba34946932fdb21e2ba093e20f6" gracePeriod=600 Feb 26 22:03:56 crc kubenswrapper[4910]: I0226 22:03:56.829123 4910 generic.go:334] "Generic (PLEG): container finished" podID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerID="64124dfbd3fd0964011ae7c39d92177145f45ba34946932fdb21e2ba093e20f6" exitCode=0 Feb 26 22:03:56 crc kubenswrapper[4910]: I0226 22:03:56.829900 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerDied","Data":"64124dfbd3fd0964011ae7c39d92177145f45ba34946932fdb21e2ba093e20f6"} Feb 26 22:03:56 crc 
kubenswrapper[4910]: I0226 22:03:56.832015 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerStarted","Data":"3d1f6db3407c868dd446acc51c6527a528eccc7a840c7986ab6697c6b21634ed"} Feb 26 22:03:56 crc kubenswrapper[4910]: I0226 22:03:56.832123 4910 scope.go:117] "RemoveContainer" containerID="22d075543a397b11a63e25912605cb14bee4deda66939088572c64d019de782b" Feb 26 22:04:00 crc kubenswrapper[4910]: I0226 22:04:00.136094 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535724-9h82v"] Feb 26 22:04:00 crc kubenswrapper[4910]: E0226 22:04:00.137908 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b050f320-6f26-4c79-88cc-ceb481369169" containerName="registry" Feb 26 22:04:00 crc kubenswrapper[4910]: I0226 22:04:00.137929 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="b050f320-6f26-4c79-88cc-ceb481369169" containerName="registry" Feb 26 22:04:00 crc kubenswrapper[4910]: I0226 22:04:00.138046 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="b050f320-6f26-4c79-88cc-ceb481369169" containerName="registry" Feb 26 22:04:00 crc kubenswrapper[4910]: I0226 22:04:00.138513 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535724-9h82v" Feb 26 22:04:00 crc kubenswrapper[4910]: I0226 22:04:00.140373 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 22:04:00 crc kubenswrapper[4910]: I0226 22:04:00.140543 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 22:04:00 crc kubenswrapper[4910]: I0226 22:04:00.140774 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 22:04:00 crc kubenswrapper[4910]: I0226 22:04:00.143865 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535724-9h82v"] Feb 26 22:04:00 crc kubenswrapper[4910]: I0226 22:04:00.260777 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtk8j\" (UniqueName: \"kubernetes.io/projected/416e31f3-d776-49a3-87b4-93e1b5224277-kube-api-access-vtk8j\") pod \"auto-csr-approver-29535724-9h82v\" (UID: \"416e31f3-d776-49a3-87b4-93e1b5224277\") " pod="openshift-infra/auto-csr-approver-29535724-9h82v" Feb 26 22:04:00 crc kubenswrapper[4910]: I0226 22:04:00.361748 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtk8j\" (UniqueName: \"kubernetes.io/projected/416e31f3-d776-49a3-87b4-93e1b5224277-kube-api-access-vtk8j\") pod \"auto-csr-approver-29535724-9h82v\" (UID: \"416e31f3-d776-49a3-87b4-93e1b5224277\") " pod="openshift-infra/auto-csr-approver-29535724-9h82v" Feb 26 22:04:00 crc kubenswrapper[4910]: I0226 22:04:00.391798 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtk8j\" (UniqueName: \"kubernetes.io/projected/416e31f3-d776-49a3-87b4-93e1b5224277-kube-api-access-vtk8j\") pod \"auto-csr-approver-29535724-9h82v\" (UID: \"416e31f3-d776-49a3-87b4-93e1b5224277\") " 
pod="openshift-infra/auto-csr-approver-29535724-9h82v" Feb 26 22:04:00 crc kubenswrapper[4910]: I0226 22:04:00.461670 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535724-9h82v" Feb 26 22:04:00 crc kubenswrapper[4910]: I0226 22:04:00.893617 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535724-9h82v"] Feb 26 22:04:00 crc kubenswrapper[4910]: W0226 22:04:00.900842 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod416e31f3_d776_49a3_87b4_93e1b5224277.slice/crio-616e7f75cbad59dc8a4ca6e51a70786f371c1333533f4eb4cd9d246d528a78c4 WatchSource:0}: Error finding container 616e7f75cbad59dc8a4ca6e51a70786f371c1333533f4eb4cd9d246d528a78c4: Status 404 returned error can't find the container with id 616e7f75cbad59dc8a4ca6e51a70786f371c1333533f4eb4cd9d246d528a78c4 Feb 26 22:04:01 crc kubenswrapper[4910]: I0226 22:04:01.869612 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535724-9h82v" event={"ID":"416e31f3-d776-49a3-87b4-93e1b5224277","Type":"ContainerStarted","Data":"616e7f75cbad59dc8a4ca6e51a70786f371c1333533f4eb4cd9d246d528a78c4"} Feb 26 22:04:02 crc kubenswrapper[4910]: I0226 22:04:02.876117 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535724-9h82v" event={"ID":"416e31f3-d776-49a3-87b4-93e1b5224277","Type":"ContainerStarted","Data":"f061dfcf5c00d7aee7ab315a83af62774581b9f473a6ef1c82b63f92bc067538"} Feb 26 22:04:02 crc kubenswrapper[4910]: I0226 22:04:02.889422 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535724-9h82v" podStartSLOduration=1.874988651 podStartE2EDuration="2.889401004s" podCreationTimestamp="2026-02-26 22:04:00 +0000 UTC" firstStartedPulling="2026-02-26 22:04:00.904390009 +0000 UTC 
m=+525.983880550" lastFinishedPulling="2026-02-26 22:04:01.918802342 +0000 UTC m=+526.998292903" observedRunningTime="2026-02-26 22:04:02.889069796 +0000 UTC m=+527.968560337" watchObservedRunningTime="2026-02-26 22:04:02.889401004 +0000 UTC m=+527.968891545" Feb 26 22:04:03 crc kubenswrapper[4910]: I0226 22:04:03.882307 4910 generic.go:334] "Generic (PLEG): container finished" podID="416e31f3-d776-49a3-87b4-93e1b5224277" containerID="f061dfcf5c00d7aee7ab315a83af62774581b9f473a6ef1c82b63f92bc067538" exitCode=0 Feb 26 22:04:03 crc kubenswrapper[4910]: I0226 22:04:03.883208 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535724-9h82v" event={"ID":"416e31f3-d776-49a3-87b4-93e1b5224277","Type":"ContainerDied","Data":"f061dfcf5c00d7aee7ab315a83af62774581b9f473a6ef1c82b63f92bc067538"} Feb 26 22:04:05 crc kubenswrapper[4910]: I0226 22:04:05.188145 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535724-9h82v" Feb 26 22:04:05 crc kubenswrapper[4910]: I0226 22:04:05.321495 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtk8j\" (UniqueName: \"kubernetes.io/projected/416e31f3-d776-49a3-87b4-93e1b5224277-kube-api-access-vtk8j\") pod \"416e31f3-d776-49a3-87b4-93e1b5224277\" (UID: \"416e31f3-d776-49a3-87b4-93e1b5224277\") " Feb 26 22:04:05 crc kubenswrapper[4910]: I0226 22:04:05.340312 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/416e31f3-d776-49a3-87b4-93e1b5224277-kube-api-access-vtk8j" (OuterVolumeSpecName: "kube-api-access-vtk8j") pod "416e31f3-d776-49a3-87b4-93e1b5224277" (UID: "416e31f3-d776-49a3-87b4-93e1b5224277"). InnerVolumeSpecName "kube-api-access-vtk8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:04:05 crc kubenswrapper[4910]: I0226 22:04:05.423061 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtk8j\" (UniqueName: \"kubernetes.io/projected/416e31f3-d776-49a3-87b4-93e1b5224277-kube-api-access-vtk8j\") on node \"crc\" DevicePath \"\"" Feb 26 22:04:05 crc kubenswrapper[4910]: I0226 22:04:05.899615 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535724-9h82v" event={"ID":"416e31f3-d776-49a3-87b4-93e1b5224277","Type":"ContainerDied","Data":"616e7f75cbad59dc8a4ca6e51a70786f371c1333533f4eb4cd9d246d528a78c4"} Feb 26 22:04:05 crc kubenswrapper[4910]: I0226 22:04:05.899664 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="616e7f75cbad59dc8a4ca6e51a70786f371c1333533f4eb4cd9d246d528a78c4" Feb 26 22:04:05 crc kubenswrapper[4910]: I0226 22:04:05.899715 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535724-9h82v" Feb 26 22:04:05 crc kubenswrapper[4910]: I0226 22:04:05.967765 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535718-4rxms"] Feb 26 22:04:05 crc kubenswrapper[4910]: I0226 22:04:05.971449 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535718-4rxms"] Feb 26 22:04:07 crc kubenswrapper[4910]: I0226 22:04:07.913351 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8fe4d9f-ec8c-4d29-a7e6-1534270d5d05" path="/var/lib/kubelet/pods/e8fe4d9f-ec8c-4d29-a7e6-1534270d5d05/volumes" Feb 26 22:06:00 crc kubenswrapper[4910]: I0226 22:06:00.149443 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535726-2qlhw"] Feb 26 22:06:00 crc kubenswrapper[4910]: E0226 22:06:00.150597 4910 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="416e31f3-d776-49a3-87b4-93e1b5224277" containerName="oc" Feb 26 22:06:00 crc kubenswrapper[4910]: I0226 22:06:00.150626 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="416e31f3-d776-49a3-87b4-93e1b5224277" containerName="oc" Feb 26 22:06:00 crc kubenswrapper[4910]: I0226 22:06:00.150854 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="416e31f3-d776-49a3-87b4-93e1b5224277" containerName="oc" Feb 26 22:06:00 crc kubenswrapper[4910]: I0226 22:06:00.151743 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535726-2qlhw" Feb 26 22:06:00 crc kubenswrapper[4910]: I0226 22:06:00.155885 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 22:06:00 crc kubenswrapper[4910]: I0226 22:06:00.156199 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 22:06:00 crc kubenswrapper[4910]: I0226 22:06:00.156394 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 22:06:00 crc kubenswrapper[4910]: I0226 22:06:00.165123 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535726-2qlhw"] Feb 26 22:06:00 crc kubenswrapper[4910]: I0226 22:06:00.232212 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wtd5\" (UniqueName: \"kubernetes.io/projected/39bacf0b-95ed-4a36-a354-1cf17b887fcd-kube-api-access-9wtd5\") pod \"auto-csr-approver-29535726-2qlhw\" (UID: \"39bacf0b-95ed-4a36-a354-1cf17b887fcd\") " pod="openshift-infra/auto-csr-approver-29535726-2qlhw" Feb 26 22:06:00 crc kubenswrapper[4910]: I0226 22:06:00.333718 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wtd5\" (UniqueName: 
\"kubernetes.io/projected/39bacf0b-95ed-4a36-a354-1cf17b887fcd-kube-api-access-9wtd5\") pod \"auto-csr-approver-29535726-2qlhw\" (UID: \"39bacf0b-95ed-4a36-a354-1cf17b887fcd\") " pod="openshift-infra/auto-csr-approver-29535726-2qlhw" Feb 26 22:06:00 crc kubenswrapper[4910]: I0226 22:06:00.371956 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wtd5\" (UniqueName: \"kubernetes.io/projected/39bacf0b-95ed-4a36-a354-1cf17b887fcd-kube-api-access-9wtd5\") pod \"auto-csr-approver-29535726-2qlhw\" (UID: \"39bacf0b-95ed-4a36-a354-1cf17b887fcd\") " pod="openshift-infra/auto-csr-approver-29535726-2qlhw" Feb 26 22:06:00 crc kubenswrapper[4910]: I0226 22:06:00.488853 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535726-2qlhw" Feb 26 22:06:00 crc kubenswrapper[4910]: I0226 22:06:00.794471 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535726-2qlhw"] Feb 26 22:06:00 crc kubenswrapper[4910]: I0226 22:06:00.798722 4910 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 22:06:00 crc kubenswrapper[4910]: I0226 22:06:00.916502 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535726-2qlhw" event={"ID":"39bacf0b-95ed-4a36-a354-1cf17b887fcd","Type":"ContainerStarted","Data":"5de1fd25c6e407797e82289241fba249d95efbd582ca7af26509349cff47a9d5"} Feb 26 22:06:02 crc kubenswrapper[4910]: I0226 22:06:02.931623 4910 generic.go:334] "Generic (PLEG): container finished" podID="39bacf0b-95ed-4a36-a354-1cf17b887fcd" containerID="c4d9c64e1036b69d288e49e55594d2a3c40a1a80fbec88c170d2b72ba621bc2f" exitCode=0 Feb 26 22:06:02 crc kubenswrapper[4910]: I0226 22:06:02.931712 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535726-2qlhw" 
event={"ID":"39bacf0b-95ed-4a36-a354-1cf17b887fcd","Type":"ContainerDied","Data":"c4d9c64e1036b69d288e49e55594d2a3c40a1a80fbec88c170d2b72ba621bc2f"} Feb 26 22:06:04 crc kubenswrapper[4910]: I0226 22:06:04.225012 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535726-2qlhw" Feb 26 22:06:04 crc kubenswrapper[4910]: I0226 22:06:04.379815 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wtd5\" (UniqueName: \"kubernetes.io/projected/39bacf0b-95ed-4a36-a354-1cf17b887fcd-kube-api-access-9wtd5\") pod \"39bacf0b-95ed-4a36-a354-1cf17b887fcd\" (UID: \"39bacf0b-95ed-4a36-a354-1cf17b887fcd\") " Feb 26 22:06:04 crc kubenswrapper[4910]: I0226 22:06:04.390576 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39bacf0b-95ed-4a36-a354-1cf17b887fcd-kube-api-access-9wtd5" (OuterVolumeSpecName: "kube-api-access-9wtd5") pod "39bacf0b-95ed-4a36-a354-1cf17b887fcd" (UID: "39bacf0b-95ed-4a36-a354-1cf17b887fcd"). InnerVolumeSpecName "kube-api-access-9wtd5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:06:04 crc kubenswrapper[4910]: I0226 22:06:04.481426 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wtd5\" (UniqueName: \"kubernetes.io/projected/39bacf0b-95ed-4a36-a354-1cf17b887fcd-kube-api-access-9wtd5\") on node \"crc\" DevicePath \"\"" Feb 26 22:06:04 crc kubenswrapper[4910]: I0226 22:06:04.949070 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535726-2qlhw" event={"ID":"39bacf0b-95ed-4a36-a354-1cf17b887fcd","Type":"ContainerDied","Data":"5de1fd25c6e407797e82289241fba249d95efbd582ca7af26509349cff47a9d5"} Feb 26 22:06:04 crc kubenswrapper[4910]: I0226 22:06:04.949141 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5de1fd25c6e407797e82289241fba249d95efbd582ca7af26509349cff47a9d5" Feb 26 22:06:04 crc kubenswrapper[4910]: I0226 22:06:04.949185 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535726-2qlhw" Feb 26 22:06:05 crc kubenswrapper[4910]: I0226 22:06:05.305853 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535720-2nnrc"] Feb 26 22:06:05 crc kubenswrapper[4910]: I0226 22:06:05.308955 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535720-2nnrc"] Feb 26 22:06:05 crc kubenswrapper[4910]: I0226 22:06:05.907306 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02081600-34a4-4d71-ab04-6214092a36f1" path="/var/lib/kubelet/pods/02081600-34a4-4d71-ab04-6214092a36f1/volumes" Feb 26 22:06:16 crc kubenswrapper[4910]: I0226 22:06:16.327791 4910 scope.go:117] "RemoveContainer" containerID="6b7341c3b2fc08aec3a4cbd8c76970c4c96bf68bf35afcd947d4459e75bb407c" Feb 26 22:06:16 crc kubenswrapper[4910]: I0226 22:06:16.372222 4910 scope.go:117] "RemoveContainer" 
containerID="1ef6b7e58b3ecf8b8a3fb213e7b305e0e3ac49b86f4b924ba9474d31a09ebe25" Feb 26 22:06:25 crc kubenswrapper[4910]: I0226 22:06:25.727777 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:06:25 crc kubenswrapper[4910]: I0226 22:06:25.728392 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:06:55 crc kubenswrapper[4910]: I0226 22:06:55.727228 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:06:55 crc kubenswrapper[4910]: I0226 22:06:55.727874 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:07:16 crc kubenswrapper[4910]: I0226 22:07:16.452493 4910 scope.go:117] "RemoveContainer" containerID="b6bc19b116902a7f9f3a2149637da68a961e4f49fe980796c3aea82835548a48" Feb 26 22:07:25 crc kubenswrapper[4910]: I0226 22:07:25.727269 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:07:25 crc kubenswrapper[4910]: I0226 22:07:25.728378 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:07:25 crc kubenswrapper[4910]: I0226 22:07:25.728466 4910 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" Feb 26 22:07:25 crc kubenswrapper[4910]: I0226 22:07:25.729574 4910 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d1f6db3407c868dd446acc51c6527a528eccc7a840c7986ab6697c6b21634ed"} pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 22:07:25 crc kubenswrapper[4910]: I0226 22:07:25.729673 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" containerID="cri-o://3d1f6db3407c868dd446acc51c6527a528eccc7a840c7986ab6697c6b21634ed" gracePeriod=600 Feb 26 22:07:26 crc kubenswrapper[4910]: I0226 22:07:26.517062 4910 generic.go:334] "Generic (PLEG): container finished" podID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerID="3d1f6db3407c868dd446acc51c6527a528eccc7a840c7986ab6697c6b21634ed" exitCode=0 Feb 26 22:07:26 crc kubenswrapper[4910]: I0226 22:07:26.517129 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" 
event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerDied","Data":"3d1f6db3407c868dd446acc51c6527a528eccc7a840c7986ab6697c6b21634ed"} Feb 26 22:07:26 crc kubenswrapper[4910]: I0226 22:07:26.517815 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerStarted","Data":"b8aa69230a8076fd0ec023976cd59eeefa38746d90bf2e2c7d3f40e007a0afc9"} Feb 26 22:07:26 crc kubenswrapper[4910]: I0226 22:07:26.517840 4910 scope.go:117] "RemoveContainer" containerID="64124dfbd3fd0964011ae7c39d92177145f45ba34946932fdb21e2ba093e20f6" Feb 26 22:08:00 crc kubenswrapper[4910]: I0226 22:08:00.141022 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535728-5rcf6"] Feb 26 22:08:00 crc kubenswrapper[4910]: E0226 22:08:00.141889 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39bacf0b-95ed-4a36-a354-1cf17b887fcd" containerName="oc" Feb 26 22:08:00 crc kubenswrapper[4910]: I0226 22:08:00.141911 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="39bacf0b-95ed-4a36-a354-1cf17b887fcd" containerName="oc" Feb 26 22:08:00 crc kubenswrapper[4910]: I0226 22:08:00.142096 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="39bacf0b-95ed-4a36-a354-1cf17b887fcd" containerName="oc" Feb 26 22:08:00 crc kubenswrapper[4910]: I0226 22:08:00.142640 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535728-5rcf6" Feb 26 22:08:00 crc kubenswrapper[4910]: I0226 22:08:00.144946 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 22:08:00 crc kubenswrapper[4910]: I0226 22:08:00.146069 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 22:08:00 crc kubenswrapper[4910]: I0226 22:08:00.147242 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535728-5rcf6"] Feb 26 22:08:00 crc kubenswrapper[4910]: I0226 22:08:00.151424 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 22:08:00 crc kubenswrapper[4910]: I0226 22:08:00.252779 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwclh\" (UniqueName: \"kubernetes.io/projected/1de787e8-4b1f-457c-baa4-fa07b7b56e73-kube-api-access-rwclh\") pod \"auto-csr-approver-29535728-5rcf6\" (UID: \"1de787e8-4b1f-457c-baa4-fa07b7b56e73\") " pod="openshift-infra/auto-csr-approver-29535728-5rcf6" Feb 26 22:08:00 crc kubenswrapper[4910]: I0226 22:08:00.356024 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwclh\" (UniqueName: \"kubernetes.io/projected/1de787e8-4b1f-457c-baa4-fa07b7b56e73-kube-api-access-rwclh\") pod \"auto-csr-approver-29535728-5rcf6\" (UID: \"1de787e8-4b1f-457c-baa4-fa07b7b56e73\") " pod="openshift-infra/auto-csr-approver-29535728-5rcf6" Feb 26 22:08:00 crc kubenswrapper[4910]: I0226 22:08:00.395708 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwclh\" (UniqueName: \"kubernetes.io/projected/1de787e8-4b1f-457c-baa4-fa07b7b56e73-kube-api-access-rwclh\") pod \"auto-csr-approver-29535728-5rcf6\" (UID: \"1de787e8-4b1f-457c-baa4-fa07b7b56e73\") " 
pod="openshift-infra/auto-csr-approver-29535728-5rcf6" Feb 26 22:08:00 crc kubenswrapper[4910]: I0226 22:08:00.491286 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535728-5rcf6" Feb 26 22:08:00 crc kubenswrapper[4910]: I0226 22:08:00.910277 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535728-5rcf6"] Feb 26 22:08:00 crc kubenswrapper[4910]: W0226 22:08:00.916781 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1de787e8_4b1f_457c_baa4_fa07b7b56e73.slice/crio-c1f364a49125872af628059a434395da10f03430e054b4d9fd5f59c234314900 WatchSource:0}: Error finding container c1f364a49125872af628059a434395da10f03430e054b4d9fd5f59c234314900: Status 404 returned error can't find the container with id c1f364a49125872af628059a434395da10f03430e054b4d9fd5f59c234314900 Feb 26 22:08:01 crc kubenswrapper[4910]: I0226 22:08:01.786842 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535728-5rcf6" event={"ID":"1de787e8-4b1f-457c-baa4-fa07b7b56e73","Type":"ContainerStarted","Data":"c1f364a49125872af628059a434395da10f03430e054b4d9fd5f59c234314900"} Feb 26 22:08:02 crc kubenswrapper[4910]: I0226 22:08:02.796439 4910 generic.go:334] "Generic (PLEG): container finished" podID="1de787e8-4b1f-457c-baa4-fa07b7b56e73" containerID="2d75876cea56b32846486fadf0e58dce016c09c7703aaa9f82d1e34b6fe08f5a" exitCode=0 Feb 26 22:08:02 crc kubenswrapper[4910]: I0226 22:08:02.796870 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535728-5rcf6" event={"ID":"1de787e8-4b1f-457c-baa4-fa07b7b56e73","Type":"ContainerDied","Data":"2d75876cea56b32846486fadf0e58dce016c09c7703aaa9f82d1e34b6fe08f5a"} Feb 26 22:08:04 crc kubenswrapper[4910]: I0226 22:08:04.116779 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535728-5rcf6" Feb 26 22:08:04 crc kubenswrapper[4910]: I0226 22:08:04.205993 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwclh\" (UniqueName: \"kubernetes.io/projected/1de787e8-4b1f-457c-baa4-fa07b7b56e73-kube-api-access-rwclh\") pod \"1de787e8-4b1f-457c-baa4-fa07b7b56e73\" (UID: \"1de787e8-4b1f-457c-baa4-fa07b7b56e73\") " Feb 26 22:08:04 crc kubenswrapper[4910]: I0226 22:08:04.222509 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1de787e8-4b1f-457c-baa4-fa07b7b56e73-kube-api-access-rwclh" (OuterVolumeSpecName: "kube-api-access-rwclh") pod "1de787e8-4b1f-457c-baa4-fa07b7b56e73" (UID: "1de787e8-4b1f-457c-baa4-fa07b7b56e73"). InnerVolumeSpecName "kube-api-access-rwclh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:08:04 crc kubenswrapper[4910]: I0226 22:08:04.308116 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwclh\" (UniqueName: \"kubernetes.io/projected/1de787e8-4b1f-457c-baa4-fa07b7b56e73-kube-api-access-rwclh\") on node \"crc\" DevicePath \"\"" Feb 26 22:08:04 crc kubenswrapper[4910]: I0226 22:08:04.813522 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535728-5rcf6" event={"ID":"1de787e8-4b1f-457c-baa4-fa07b7b56e73","Type":"ContainerDied","Data":"c1f364a49125872af628059a434395da10f03430e054b4d9fd5f59c234314900"} Feb 26 22:08:04 crc kubenswrapper[4910]: I0226 22:08:04.813573 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535728-5rcf6" Feb 26 22:08:04 crc kubenswrapper[4910]: I0226 22:08:04.813580 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1f364a49125872af628059a434395da10f03430e054b4d9fd5f59c234314900" Feb 26 22:08:05 crc kubenswrapper[4910]: I0226 22:08:05.199111 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535722-svlnd"] Feb 26 22:08:05 crc kubenswrapper[4910]: I0226 22:08:05.208062 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535722-svlnd"] Feb 26 22:08:05 crc kubenswrapper[4910]: I0226 22:08:05.915134 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3baaa77-7e09-4a8f-916d-eb0050324f90" path="/var/lib/kubelet/pods/d3baaa77-7e09-4a8f-916d-eb0050324f90/volumes" Feb 26 22:08:16 crc kubenswrapper[4910]: I0226 22:08:16.503125 4910 scope.go:117] "RemoveContainer" containerID="08d0726daa5a9f9f60fee46195a703703e49b83d5ee2989aab8d4f31a2e7be69" Feb 26 22:08:35 crc kubenswrapper[4910]: I0226 22:08:35.375197 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq"] Feb 26 22:08:35 crc kubenswrapper[4910]: E0226 22:08:35.376001 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de787e8-4b1f-457c-baa4-fa07b7b56e73" containerName="oc" Feb 26 22:08:35 crc kubenswrapper[4910]: I0226 22:08:35.376015 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de787e8-4b1f-457c-baa4-fa07b7b56e73" containerName="oc" Feb 26 22:08:35 crc kubenswrapper[4910]: I0226 22:08:35.376144 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="1de787e8-4b1f-457c-baa4-fa07b7b56e73" containerName="oc" Feb 26 22:08:35 crc kubenswrapper[4910]: I0226 22:08:35.376975 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq" Feb 26 22:08:35 crc kubenswrapper[4910]: I0226 22:08:35.379670 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 26 22:08:35 crc kubenswrapper[4910]: I0226 22:08:35.393807 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq"] Feb 26 22:08:35 crc kubenswrapper[4910]: I0226 22:08:35.521620 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3e61dd1c-2bee-4f26-bb96-aa07cce78d28-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq\" (UID: \"3e61dd1c-2bee-4f26-bb96-aa07cce78d28\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq" Feb 26 22:08:35 crc kubenswrapper[4910]: I0226 22:08:35.521810 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3e61dd1c-2bee-4f26-bb96-aa07cce78d28-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq\" (UID: \"3e61dd1c-2bee-4f26-bb96-aa07cce78d28\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq" Feb 26 22:08:35 crc kubenswrapper[4910]: I0226 22:08:35.521926 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxgw8\" (UniqueName: \"kubernetes.io/projected/3e61dd1c-2bee-4f26-bb96-aa07cce78d28-kube-api-access-vxgw8\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq\" (UID: \"3e61dd1c-2bee-4f26-bb96-aa07cce78d28\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq" Feb 26 22:08:35 crc kubenswrapper[4910]: 
I0226 22:08:35.623675 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3e61dd1c-2bee-4f26-bb96-aa07cce78d28-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq\" (UID: \"3e61dd1c-2bee-4f26-bb96-aa07cce78d28\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq" Feb 26 22:08:35 crc kubenswrapper[4910]: I0226 22:08:35.623721 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxgw8\" (UniqueName: \"kubernetes.io/projected/3e61dd1c-2bee-4f26-bb96-aa07cce78d28-kube-api-access-vxgw8\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq\" (UID: \"3e61dd1c-2bee-4f26-bb96-aa07cce78d28\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq" Feb 26 22:08:35 crc kubenswrapper[4910]: I0226 22:08:35.623777 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3e61dd1c-2bee-4f26-bb96-aa07cce78d28-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq\" (UID: \"3e61dd1c-2bee-4f26-bb96-aa07cce78d28\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq" Feb 26 22:08:35 crc kubenswrapper[4910]: I0226 22:08:35.624142 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3e61dd1c-2bee-4f26-bb96-aa07cce78d28-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq\" (UID: \"3e61dd1c-2bee-4f26-bb96-aa07cce78d28\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq" Feb 26 22:08:35 crc kubenswrapper[4910]: I0226 22:08:35.624153 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/3e61dd1c-2bee-4f26-bb96-aa07cce78d28-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq\" (UID: \"3e61dd1c-2bee-4f26-bb96-aa07cce78d28\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq"
Feb 26 22:08:35 crc kubenswrapper[4910]: I0226 22:08:35.651650 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxgw8\" (UniqueName: \"kubernetes.io/projected/3e61dd1c-2bee-4f26-bb96-aa07cce78d28-kube-api-access-vxgw8\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq\" (UID: \"3e61dd1c-2bee-4f26-bb96-aa07cce78d28\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq"
Feb 26 22:08:35 crc kubenswrapper[4910]: I0226 22:08:35.705456 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq"
Feb 26 22:08:35 crc kubenswrapper[4910]: I0226 22:08:35.924751 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq"]
Feb 26 22:08:36 crc kubenswrapper[4910]: I0226 22:08:36.055325 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq" event={"ID":"3e61dd1c-2bee-4f26-bb96-aa07cce78d28","Type":"ContainerStarted","Data":"5a236a3955c9c64eaae655889405c2937422adbe6310cd3376a42754b41c5619"}
Feb 26 22:08:37 crc kubenswrapper[4910]: I0226 22:08:37.062996 4910 generic.go:334] "Generic (PLEG): container finished" podID="3e61dd1c-2bee-4f26-bb96-aa07cce78d28" containerID="d499647f182c455221202684167831f6f8126d34776e516969b45532aa795af6" exitCode=0
Feb 26 22:08:37 crc kubenswrapper[4910]: I0226 22:08:37.063124 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq" event={"ID":"3e61dd1c-2bee-4f26-bb96-aa07cce78d28","Type":"ContainerDied","Data":"d499647f182c455221202684167831f6f8126d34776e516969b45532aa795af6"}
Feb 26 22:08:39 crc kubenswrapper[4910]: I0226 22:08:39.078686 4910 generic.go:334] "Generic (PLEG): container finished" podID="3e61dd1c-2bee-4f26-bb96-aa07cce78d28" containerID="67d8ba31e5908788f1c7a679e3ed6d0ce17477493e3ee9760eb5895350934386" exitCode=0
Feb 26 22:08:39 crc kubenswrapper[4910]: I0226 22:08:39.078745 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq" event={"ID":"3e61dd1c-2bee-4f26-bb96-aa07cce78d28","Type":"ContainerDied","Data":"67d8ba31e5908788f1c7a679e3ed6d0ce17477493e3ee9760eb5895350934386"}
Feb 26 22:08:40 crc kubenswrapper[4910]: I0226 22:08:40.099095 4910 generic.go:334] "Generic (PLEG): container finished" podID="3e61dd1c-2bee-4f26-bb96-aa07cce78d28" containerID="d07317acf2f44c223e7e67434229d6294bb08becd88122f351801b20e15ba59e" exitCode=0
Feb 26 22:08:40 crc kubenswrapper[4910]: I0226 22:08:40.099244 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq" event={"ID":"3e61dd1c-2bee-4f26-bb96-aa07cce78d28","Type":"ContainerDied","Data":"d07317acf2f44c223e7e67434229d6294bb08becd88122f351801b20e15ba59e"}
Feb 26 22:08:41 crc kubenswrapper[4910]: I0226 22:08:41.374541 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq"
Feb 26 22:08:41 crc kubenswrapper[4910]: I0226 22:08:41.505285 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3e61dd1c-2bee-4f26-bb96-aa07cce78d28-bundle\") pod \"3e61dd1c-2bee-4f26-bb96-aa07cce78d28\" (UID: \"3e61dd1c-2bee-4f26-bb96-aa07cce78d28\") "
Feb 26 22:08:41 crc kubenswrapper[4910]: I0226 22:08:41.505416 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3e61dd1c-2bee-4f26-bb96-aa07cce78d28-util\") pod \"3e61dd1c-2bee-4f26-bb96-aa07cce78d28\" (UID: \"3e61dd1c-2bee-4f26-bb96-aa07cce78d28\") "
Feb 26 22:08:41 crc kubenswrapper[4910]: I0226 22:08:41.505459 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxgw8\" (UniqueName: \"kubernetes.io/projected/3e61dd1c-2bee-4f26-bb96-aa07cce78d28-kube-api-access-vxgw8\") pod \"3e61dd1c-2bee-4f26-bb96-aa07cce78d28\" (UID: \"3e61dd1c-2bee-4f26-bb96-aa07cce78d28\") "
Feb 26 22:08:41 crc kubenswrapper[4910]: I0226 22:08:41.509075 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e61dd1c-2bee-4f26-bb96-aa07cce78d28-bundle" (OuterVolumeSpecName: "bundle") pod "3e61dd1c-2bee-4f26-bb96-aa07cce78d28" (UID: "3e61dd1c-2bee-4f26-bb96-aa07cce78d28"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 22:08:41 crc kubenswrapper[4910]: I0226 22:08:41.514036 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e61dd1c-2bee-4f26-bb96-aa07cce78d28-kube-api-access-vxgw8" (OuterVolumeSpecName: "kube-api-access-vxgw8") pod "3e61dd1c-2bee-4f26-bb96-aa07cce78d28" (UID: "3e61dd1c-2bee-4f26-bb96-aa07cce78d28"). InnerVolumeSpecName "kube-api-access-vxgw8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 22:08:41 crc kubenswrapper[4910]: I0226 22:08:41.520345 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e61dd1c-2bee-4f26-bb96-aa07cce78d28-util" (OuterVolumeSpecName: "util") pod "3e61dd1c-2bee-4f26-bb96-aa07cce78d28" (UID: "3e61dd1c-2bee-4f26-bb96-aa07cce78d28"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 22:08:41 crc kubenswrapper[4910]: I0226 22:08:41.606680 4910 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3e61dd1c-2bee-4f26-bb96-aa07cce78d28-util\") on node \"crc\" DevicePath \"\""
Feb 26 22:08:41 crc kubenswrapper[4910]: I0226 22:08:41.606711 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxgw8\" (UniqueName: \"kubernetes.io/projected/3e61dd1c-2bee-4f26-bb96-aa07cce78d28-kube-api-access-vxgw8\") on node \"crc\" DevicePath \"\""
Feb 26 22:08:41 crc kubenswrapper[4910]: I0226 22:08:41.606721 4910 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3e61dd1c-2bee-4f26-bb96-aa07cce78d28-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 22:08:42 crc kubenswrapper[4910]: I0226 22:08:42.114067 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq" event={"ID":"3e61dd1c-2bee-4f26-bb96-aa07cce78d28","Type":"ContainerDied","Data":"5a236a3955c9c64eaae655889405c2937422adbe6310cd3376a42754b41c5619"}
Feb 26 22:08:42 crc kubenswrapper[4910]: I0226 22:08:42.114134 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a236a3955c9c64eaae655889405c2937422adbe6310cd3376a42754b41c5619"
Feb 26 22:08:42 crc kubenswrapper[4910]: I0226 22:08:42.114149 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq"
Feb 26 22:08:46 crc kubenswrapper[4910]: I0226 22:08:46.877205 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xrq4q"]
Feb 26 22:08:46 crc kubenswrapper[4910]: I0226 22:08:46.878392 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="ovn-controller" containerID="cri-o://b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca" gracePeriod=30
Feb 26 22:08:46 crc kubenswrapper[4910]: I0226 22:08:46.878879 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="sbdb" containerID="cri-o://3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19" gracePeriod=30
Feb 26 22:08:46 crc kubenswrapper[4910]: I0226 22:08:46.878952 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="nbdb" containerID="cri-o://492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a" gracePeriod=30
Feb 26 22:08:46 crc kubenswrapper[4910]: I0226 22:08:46.879008 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="northd" containerID="cri-o://c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401" gracePeriod=30
Feb 26 22:08:46 crc kubenswrapper[4910]: I0226 22:08:46.879061 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c" gracePeriod=30
Feb 26 22:08:46 crc kubenswrapper[4910]: I0226 22:08:46.879117 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="kube-rbac-proxy-node" containerID="cri-o://e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2" gracePeriod=30
Feb 26 22:08:46 crc kubenswrapper[4910]: I0226 22:08:46.879209 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="ovn-acl-logging" containerID="cri-o://c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad" gracePeriod=30
Feb 26 22:08:46 crc kubenswrapper[4910]: I0226 22:08:46.934293 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="ovnkube-controller" containerID="cri-o://19a3ebdc18c75b48e597b50681570c7243c24e8fccebcc02dba6868f95c4b579" gracePeriod=30
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.153520 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrq4q_41cb54c7-260b-42d4-8ae9-cf2a195721be/ovnkube-controller/3.log"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.155896 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrq4q_41cb54c7-260b-42d4-8ae9-cf2a195721be/ovn-acl-logging/0.log"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.156406 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrq4q_41cb54c7-260b-42d4-8ae9-cf2a195721be/ovn-controller/0.log"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.156972 4910 generic.go:334] "Generic (PLEG): container finished" podID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerID="19a3ebdc18c75b48e597b50681570c7243c24e8fccebcc02dba6868f95c4b579" exitCode=0
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.157013 4910 generic.go:334] "Generic (PLEG): container finished" podID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerID="454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c" exitCode=0
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.157028 4910 generic.go:334] "Generic (PLEG): container finished" podID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerID="e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2" exitCode=0
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.157044 4910 generic.go:334] "Generic (PLEG): container finished" podID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerID="c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad" exitCode=143
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.157059 4910 generic.go:334] "Generic (PLEG): container finished" podID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerID="b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca" exitCode=143
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.157066 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" event={"ID":"41cb54c7-260b-42d4-8ae9-cf2a195721be","Type":"ContainerDied","Data":"19a3ebdc18c75b48e597b50681570c7243c24e8fccebcc02dba6868f95c4b579"}
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.157120 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" event={"ID":"41cb54c7-260b-42d4-8ae9-cf2a195721be","Type":"ContainerDied","Data":"454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c"}
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.157150 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" event={"ID":"41cb54c7-260b-42d4-8ae9-cf2a195721be","Type":"ContainerDied","Data":"e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2"}
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.157180 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" event={"ID":"41cb54c7-260b-42d4-8ae9-cf2a195721be","Type":"ContainerDied","Data":"c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad"}
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.157197 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" event={"ID":"41cb54c7-260b-42d4-8ae9-cf2a195721be","Type":"ContainerDied","Data":"b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca"}
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.157218 4910 scope.go:117] "RemoveContainer" containerID="c102af0022666f948e5923ebd19de21279aaf7635387dd3036f2f7cde045de43"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.159246 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-795gt_d78660ec-f27f-43be-add6-8fab38329537/kube-multus/2.log"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.160074 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-795gt_d78660ec-f27f-43be-add6-8fab38329537/kube-multus/1.log"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.160114 4910 generic.go:334] "Generic (PLEG): container finished" podID="d78660ec-f27f-43be-add6-8fab38329537" containerID="0206f2babef31f4c9359fa5e49447fa3c2c463f5dfd690dac95da1a45bea19e3" exitCode=2
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.160143 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-795gt" event={"ID":"d78660ec-f27f-43be-add6-8fab38329537","Type":"ContainerDied","Data":"0206f2babef31f4c9359fa5e49447fa3c2c463f5dfd690dac95da1a45bea19e3"}
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.160622 4910 scope.go:117] "RemoveContainer" containerID="0206f2babef31f4c9359fa5e49447fa3c2c463f5dfd690dac95da1a45bea19e3"
Feb 26 22:08:47 crc kubenswrapper[4910]: E0226 22:08:47.160844 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-795gt_openshift-multus(d78660ec-f27f-43be-add6-8fab38329537)\"" pod="openshift-multus/multus-795gt" podUID="d78660ec-f27f-43be-add6-8fab38329537"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.252923 4910 scope.go:117] "RemoveContainer" containerID="3f88b7ea31f447ea3a2728e5c1543d2c60f64d949b0a4f14fbb8a9253a768faf"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.274821 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrq4q_41cb54c7-260b-42d4-8ae9-cf2a195721be/ovn-acl-logging/0.log"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.275625 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrq4q_41cb54c7-260b-42d4-8ae9-cf2a195721be/ovn-controller/0.log"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.276039 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.358496 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5mvh9"]
Feb 26 22:08:47 crc kubenswrapper[4910]: E0226 22:08:47.358714 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="kube-rbac-proxy-ovn-metrics"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.358731 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="kube-rbac-proxy-ovn-metrics"
Feb 26 22:08:47 crc kubenswrapper[4910]: E0226 22:08:47.358740 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="kube-rbac-proxy-node"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.358746 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="kube-rbac-proxy-node"
Feb 26 22:08:47 crc kubenswrapper[4910]: E0226 22:08:47.358755 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e61dd1c-2bee-4f26-bb96-aa07cce78d28" containerName="util"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.358761 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e61dd1c-2bee-4f26-bb96-aa07cce78d28" containerName="util"
Feb 26 22:08:47 crc kubenswrapper[4910]: E0226 22:08:47.358767 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="kubecfg-setup"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.358773 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="kubecfg-setup"
Feb 26 22:08:47 crc kubenswrapper[4910]: E0226 22:08:47.358783 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="sbdb"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.358789 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="sbdb"
Feb 26 22:08:47 crc kubenswrapper[4910]: E0226 22:08:47.358799 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="ovnkube-controller"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.358805 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="ovnkube-controller"
Feb 26 22:08:47 crc kubenswrapper[4910]: E0226 22:08:47.358815 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="nbdb"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.358823 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="nbdb"
Feb 26 22:08:47 crc kubenswrapper[4910]: E0226 22:08:47.358832 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e61dd1c-2bee-4f26-bb96-aa07cce78d28" containerName="pull"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.358838 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e61dd1c-2bee-4f26-bb96-aa07cce78d28" containerName="pull"
Feb 26 22:08:47 crc kubenswrapper[4910]: E0226 22:08:47.358848 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="ovnkube-controller"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.358857 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="ovnkube-controller"
Feb 26 22:08:47 crc kubenswrapper[4910]: E0226 22:08:47.358865 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="ovnkube-controller"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.358871 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="ovnkube-controller"
Feb 26 22:08:47 crc kubenswrapper[4910]: E0226 22:08:47.358881 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="ovnkube-controller"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.358888 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="ovnkube-controller"
Feb 26 22:08:47 crc kubenswrapper[4910]: E0226 22:08:47.358897 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="northd"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.358903 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="northd"
Feb 26 22:08:47 crc kubenswrapper[4910]: E0226 22:08:47.358912 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="ovn-acl-logging"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.358919 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="ovn-acl-logging"
Feb 26 22:08:47 crc kubenswrapper[4910]: E0226 22:08:47.358930 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e61dd1c-2bee-4f26-bb96-aa07cce78d28" containerName="extract"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.358937 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e61dd1c-2bee-4f26-bb96-aa07cce78d28" containerName="extract"
Feb 26 22:08:47 crc kubenswrapper[4910]: E0226 22:08:47.358948 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="ovn-controller"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.358956 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="ovn-controller"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.359053 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="ovnkube-controller"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.359066 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="ovnkube-controller"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.359076 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="ovn-acl-logging"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.359084 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e61dd1c-2bee-4f26-bb96-aa07cce78d28" containerName="extract"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.359095 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="northd"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.359103 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="ovnkube-controller"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.359113 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="nbdb"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.359120 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="ovnkube-controller"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.359131 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="kube-rbac-proxy-ovn-metrics"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.359140 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="sbdb"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.359149 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="kube-rbac-proxy-node"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.359175 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="ovn-controller"
Feb 26 22:08:47 crc kubenswrapper[4910]: E0226 22:08:47.359282 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="ovnkube-controller"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.359292 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="ovnkube-controller"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.359388 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerName="ovnkube-controller"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.361239 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9"
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.386807 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-cni-bin\") pod \"41cb54c7-260b-42d4-8ae9-cf2a195721be\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") "
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.386864 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/41cb54c7-260b-42d4-8ae9-cf2a195721be-ovnkube-config\") pod \"41cb54c7-260b-42d4-8ae9-cf2a195721be\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") "
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.386918 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-run-netns\") pod \"41cb54c7-260b-42d4-8ae9-cf2a195721be\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") "
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.386917 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "41cb54c7-260b-42d4-8ae9-cf2a195721be" (UID: "41cb54c7-260b-42d4-8ae9-cf2a195721be"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.386946 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-run-systemd\") pod \"41cb54c7-260b-42d4-8ae9-cf2a195721be\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") "
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.386972 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "41cb54c7-260b-42d4-8ae9-cf2a195721be" (UID: "41cb54c7-260b-42d4-8ae9-cf2a195721be"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.386993 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-run-openvswitch\") pod \"41cb54c7-260b-42d4-8ae9-cf2a195721be\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") "
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387021 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/41cb54c7-260b-42d4-8ae9-cf2a195721be-ovn-node-metrics-cert\") pod \"41cb54c7-260b-42d4-8ae9-cf2a195721be\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") "
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387075 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-var-lib-cni-networks-ovn-kubernetes\") pod \"41cb54c7-260b-42d4-8ae9-cf2a195721be\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") "
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387098 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-slash\") pod \"41cb54c7-260b-42d4-8ae9-cf2a195721be\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") "
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387145 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txf8k\" (UniqueName: \"kubernetes.io/projected/41cb54c7-260b-42d4-8ae9-cf2a195721be-kube-api-access-txf8k\") pod \"41cb54c7-260b-42d4-8ae9-cf2a195721be\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") "
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387209 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-node-log\") pod \"41cb54c7-260b-42d4-8ae9-cf2a195721be\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") "
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387225 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-kubelet\") pod \"41cb54c7-260b-42d4-8ae9-cf2a195721be\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") "
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387243 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/41cb54c7-260b-42d4-8ae9-cf2a195721be-env-overrides\") pod \"41cb54c7-260b-42d4-8ae9-cf2a195721be\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") "
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387313 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-etc-openvswitch\") pod \"41cb54c7-260b-42d4-8ae9-cf2a195721be\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") "
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387336 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-cni-netd\") pod \"41cb54c7-260b-42d4-8ae9-cf2a195721be\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") "
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387333 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41cb54c7-260b-42d4-8ae9-cf2a195721be-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "41cb54c7-260b-42d4-8ae9-cf2a195721be" (UID: "41cb54c7-260b-42d4-8ae9-cf2a195721be"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387365 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-run-ovn-kubernetes\") pod \"41cb54c7-260b-42d4-8ae9-cf2a195721be\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") "
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387402 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-systemd-units\") pod \"41cb54c7-260b-42d4-8ae9-cf2a195721be\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") "
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387417 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-log-socket\") pod \"41cb54c7-260b-42d4-8ae9-cf2a195721be\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") "
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387442 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-run-ovn\") pod \"41cb54c7-260b-42d4-8ae9-cf2a195721be\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") "
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387482 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/41cb54c7-260b-42d4-8ae9-cf2a195721be-ovnkube-script-lib\") pod \"41cb54c7-260b-42d4-8ae9-cf2a195721be\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") "
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387508 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-var-lib-openvswitch\") pod \"41cb54c7-260b-42d4-8ae9-cf2a195721be\" (UID: \"41cb54c7-260b-42d4-8ae9-cf2a195721be\") "
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387509 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "41cb54c7-260b-42d4-8ae9-cf2a195721be" (UID: "41cb54c7-260b-42d4-8ae9-cf2a195721be"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387689 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "41cb54c7-260b-42d4-8ae9-cf2a195721be" (UID: "41cb54c7-260b-42d4-8ae9-cf2a195721be"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387760 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "41cb54c7-260b-42d4-8ae9-cf2a195721be" (UID: "41cb54c7-260b-42d4-8ae9-cf2a195721be"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387762 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "41cb54c7-260b-42d4-8ae9-cf2a195721be" (UID: "41cb54c7-260b-42d4-8ae9-cf2a195721be"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387791 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-node-log" (OuterVolumeSpecName: "node-log") pod "41cb54c7-260b-42d4-8ae9-cf2a195721be" (UID: "41cb54c7-260b-42d4-8ae9-cf2a195721be"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387808 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "41cb54c7-260b-42d4-8ae9-cf2a195721be" (UID: "41cb54c7-260b-42d4-8ae9-cf2a195721be"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387824 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "41cb54c7-260b-42d4-8ae9-cf2a195721be" (UID: "41cb54c7-260b-42d4-8ae9-cf2a195721be"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387828 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-log-socket" (OuterVolumeSpecName: "log-socket") pod "41cb54c7-260b-42d4-8ae9-cf2a195721be" (UID: "41cb54c7-260b-42d4-8ae9-cf2a195721be"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387851 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "41cb54c7-260b-42d4-8ae9-cf2a195721be" (UID: "41cb54c7-260b-42d4-8ae9-cf2a195721be"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387860 4910 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-cni-bin\") on node \"crc\" DevicePath \"\""
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387878 4910 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/41cb54c7-260b-42d4-8ae9-cf2a195721be-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387827 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "41cb54c7-260b-42d4-8ae9-cf2a195721be" (UID: "41cb54c7-260b-42d4-8ae9-cf2a195721be"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387875 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "41cb54c7-260b-42d4-8ae9-cf2a195721be" (UID: "41cb54c7-260b-42d4-8ae9-cf2a195721be"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387874 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-slash" (OuterVolumeSpecName: "host-slash") pod "41cb54c7-260b-42d4-8ae9-cf2a195721be" (UID: "41cb54c7-260b-42d4-8ae9-cf2a195721be"). InnerVolumeSpecName "host-slash".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387934 4910 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387958 4910 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.387971 4910 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.388245 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41cb54c7-260b-42d4-8ae9-cf2a195721be-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "41cb54c7-260b-42d4-8ae9-cf2a195721be" (UID: "41cb54c7-260b-42d4-8ae9-cf2a195721be"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.388302 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41cb54c7-260b-42d4-8ae9-cf2a195721be-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "41cb54c7-260b-42d4-8ae9-cf2a195721be" (UID: "41cb54c7-260b-42d4-8ae9-cf2a195721be"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.399495 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41cb54c7-260b-42d4-8ae9-cf2a195721be-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "41cb54c7-260b-42d4-8ae9-cf2a195721be" (UID: "41cb54c7-260b-42d4-8ae9-cf2a195721be"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.399601 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41cb54c7-260b-42d4-8ae9-cf2a195721be-kube-api-access-txf8k" (OuterVolumeSpecName: "kube-api-access-txf8k") pod "41cb54c7-260b-42d4-8ae9-cf2a195721be" (UID: "41cb54c7-260b-42d4-8ae9-cf2a195721be"). InnerVolumeSpecName "kube-api-access-txf8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.411617 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "41cb54c7-260b-42d4-8ae9-cf2a195721be" (UID: "41cb54c7-260b-42d4-8ae9-cf2a195721be"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.489468 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-host-cni-netd\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.489515 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-host-run-ovn-kubernetes\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.489532 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-log-socket\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.489549 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-var-lib-openvswitch\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.489568 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-ovnkube-config\") pod \"ovnkube-node-5mvh9\" (UID: 
\"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.489585 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-host-cni-bin\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.489708 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzrxj\" (UniqueName: \"kubernetes.io/projected/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-kube-api-access-vzrxj\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.489778 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-host-slash\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.489867 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-run-ovn\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.489909 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-ovn-node-metrics-cert\") pod 
\"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.489940 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-run-openvswitch\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.489986 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-host-run-netns\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.490005 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-systemd-units\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.490025 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-ovnkube-script-lib\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.490094 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-host-kubelet\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.490144 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.490240 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-run-systemd\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.490262 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-env-overrides\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.490330 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-node-log\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.490362 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-etc-openvswitch\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.490518 4910 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.490557 4910 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-log-socket\") on node \"crc\" DevicePath \"\"" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.490571 4910 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.490583 4910 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/41cb54c7-260b-42d4-8ae9-cf2a195721be-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.490596 4910 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.490610 4910 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.490640 4910 reconciler_common.go:293] "Volume detached for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/41cb54c7-260b-42d4-8ae9-cf2a195721be-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.490654 4910 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-slash\") on node \"crc\" DevicePath \"\"" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.490665 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txf8k\" (UniqueName: \"kubernetes.io/projected/41cb54c7-260b-42d4-8ae9-cf2a195721be-kube-api-access-txf8k\") on node \"crc\" DevicePath \"\"" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.490676 4910 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-node-log\") on node \"crc\" DevicePath \"\"" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.490686 4910 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.490717 4910 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/41cb54c7-260b-42d4-8ae9-cf2a195721be-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.490730 4910 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.490740 4910 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-cni-netd\") on node 
\"crc\" DevicePath \"\"" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.490752 4910 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41cb54c7-260b-42d4-8ae9-cf2a195721be-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.592561 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-host-kubelet\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.592618 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.592644 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-run-systemd\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.592670 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-env-overrides\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.592678 4910 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-host-kubelet\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.592689 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-node-log\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.592717 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-node-log\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.592801 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.592740 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-etc-openvswitch\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.592856 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-host-cni-netd\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.592885 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-host-run-ovn-kubernetes\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.592907 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-log-socket\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.592926 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-var-lib-openvswitch\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.592945 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-ovnkube-config\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.592966 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-host-cni-bin\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.592986 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzrxj\" (UniqueName: \"kubernetes.io/projected/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-kube-api-access-vzrxj\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.593006 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-host-slash\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.593420 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-log-socket\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.593482 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-var-lib-openvswitch\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.593437 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-host-cni-netd\") pod 
\"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.592780 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-run-systemd\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.592760 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-etc-openvswitch\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.593550 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-env-overrides\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.593579 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-host-run-ovn-kubernetes\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.593462 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-host-cni-bin\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.593637 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-host-slash\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.593791 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-run-ovn\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.593974 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-ovnkube-config\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.593031 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-run-ovn\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.594020 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-ovn-node-metrics-cert\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.594040 4910 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-run-openvswitch\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.594073 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-host-run-netns\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.594089 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-systemd-units\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.594107 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-ovnkube-script-lib\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.594412 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-run-openvswitch\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.594456 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-host-run-netns\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.594467 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-systemd-units\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.595038 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-ovnkube-script-lib\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.596841 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-ovn-node-metrics-cert\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.625629 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzrxj\" (UniqueName: \"kubernetes.io/projected/9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8-kube-api-access-vzrxj\") pod \"ovnkube-node-5mvh9\" (UID: \"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:47 crc kubenswrapper[4910]: I0226 22:08:47.676464 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.171615 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-795gt_d78660ec-f27f-43be-add6-8fab38329537/kube-multus/2.log" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.183911 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrq4q_41cb54c7-260b-42d4-8ae9-cf2a195721be/ovn-acl-logging/0.log" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.184869 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrq4q_41cb54c7-260b-42d4-8ae9-cf2a195721be/ovn-controller/0.log" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.185307 4910 generic.go:334] "Generic (PLEG): container finished" podID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerID="3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19" exitCode=0 Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.185339 4910 generic.go:334] "Generic (PLEG): container finished" podID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerID="492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a" exitCode=0 Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.185349 4910 generic.go:334] "Generic (PLEG): container finished" podID="41cb54c7-260b-42d4-8ae9-cf2a195721be" containerID="c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401" exitCode=0 Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.185409 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" event={"ID":"41cb54c7-260b-42d4-8ae9-cf2a195721be","Type":"ContainerDied","Data":"3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19"} Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.185432 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" event={"ID":"41cb54c7-260b-42d4-8ae9-cf2a195721be","Type":"ContainerDied","Data":"492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a"} Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.185443 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" event={"ID":"41cb54c7-260b-42d4-8ae9-cf2a195721be","Type":"ContainerDied","Data":"c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401"} Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.185452 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" event={"ID":"41cb54c7-260b-42d4-8ae9-cf2a195721be","Type":"ContainerDied","Data":"ffc2ffb6487c21bc6ccd96c92051381ae04b9deb2812c7183ae4b33cb5c81e05"} Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.185468 4910 scope.go:117] "RemoveContainer" containerID="19a3ebdc18c75b48e597b50681570c7243c24e8fccebcc02dba6868f95c4b579" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.185647 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xrq4q" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.194598 4910 generic.go:334] "Generic (PLEG): container finished" podID="9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8" containerID="cc83e795d2ba991bd44dafb436935e3ab4b01505972de1ecadc9b94ac571c5f6" exitCode=0 Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.194634 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" event={"ID":"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8","Type":"ContainerDied","Data":"cc83e795d2ba991bd44dafb436935e3ab4b01505972de1ecadc9b94ac571c5f6"} Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.194656 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" event={"ID":"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8","Type":"ContainerStarted","Data":"3b77e946479b48b0189785486b21ec4ce7d4efe5470cad43fde678898d3dcbb3"} Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.204559 4910 scope.go:117] "RemoveContainer" containerID="3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.214888 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xrq4q"] Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.221815 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xrq4q"] Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.229417 4910 scope.go:117] "RemoveContainer" containerID="492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.257619 4910 scope.go:117] "RemoveContainer" containerID="c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.290544 4910 scope.go:117] "RemoveContainer" 
containerID="454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.305074 4910 scope.go:117] "RemoveContainer" containerID="e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.322426 4910 scope.go:117] "RemoveContainer" containerID="c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.336411 4910 scope.go:117] "RemoveContainer" containerID="b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.355734 4910 scope.go:117] "RemoveContainer" containerID="4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.374455 4910 scope.go:117] "RemoveContainer" containerID="19a3ebdc18c75b48e597b50681570c7243c24e8fccebcc02dba6868f95c4b579" Feb 26 22:08:48 crc kubenswrapper[4910]: E0226 22:08:48.377656 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19a3ebdc18c75b48e597b50681570c7243c24e8fccebcc02dba6868f95c4b579\": container with ID starting with 19a3ebdc18c75b48e597b50681570c7243c24e8fccebcc02dba6868f95c4b579 not found: ID does not exist" containerID="19a3ebdc18c75b48e597b50681570c7243c24e8fccebcc02dba6868f95c4b579" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.377688 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19a3ebdc18c75b48e597b50681570c7243c24e8fccebcc02dba6868f95c4b579"} err="failed to get container status \"19a3ebdc18c75b48e597b50681570c7243c24e8fccebcc02dba6868f95c4b579\": rpc error: code = NotFound desc = could not find container \"19a3ebdc18c75b48e597b50681570c7243c24e8fccebcc02dba6868f95c4b579\": container with ID starting with 
19a3ebdc18c75b48e597b50681570c7243c24e8fccebcc02dba6868f95c4b579 not found: ID does not exist" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.377726 4910 scope.go:117] "RemoveContainer" containerID="3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19" Feb 26 22:08:48 crc kubenswrapper[4910]: E0226 22:08:48.383043 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19\": container with ID starting with 3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19 not found: ID does not exist" containerID="3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.383094 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19"} err="failed to get container status \"3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19\": rpc error: code = NotFound desc = could not find container \"3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19\": container with ID starting with 3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19 not found: ID does not exist" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.383121 4910 scope.go:117] "RemoveContainer" containerID="492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a" Feb 26 22:08:48 crc kubenswrapper[4910]: E0226 22:08:48.384059 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a\": container with ID starting with 492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a not found: ID does not exist" containerID="492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a" Feb 26 22:08:48 crc 
kubenswrapper[4910]: I0226 22:08:48.384326 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a"} err="failed to get container status \"492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a\": rpc error: code = NotFound desc = could not find container \"492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a\": container with ID starting with 492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a not found: ID does not exist" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.384338 4910 scope.go:117] "RemoveContainer" containerID="c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401" Feb 26 22:08:48 crc kubenswrapper[4910]: E0226 22:08:48.384586 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401\": container with ID starting with c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401 not found: ID does not exist" containerID="c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.384606 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401"} err="failed to get container status \"c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401\": rpc error: code = NotFound desc = could not find container \"c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401\": container with ID starting with c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401 not found: ID does not exist" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.384627 4910 scope.go:117] "RemoveContainer" containerID="454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c" Feb 26 
22:08:48 crc kubenswrapper[4910]: E0226 22:08:48.385289 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c\": container with ID starting with 454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c not found: ID does not exist" containerID="454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.385312 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c"} err="failed to get container status \"454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c\": rpc error: code = NotFound desc = could not find container \"454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c\": container with ID starting with 454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c not found: ID does not exist" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.385325 4910 scope.go:117] "RemoveContainer" containerID="e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2" Feb 26 22:08:48 crc kubenswrapper[4910]: E0226 22:08:48.386135 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2\": container with ID starting with e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2 not found: ID does not exist" containerID="e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.386178 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2"} err="failed to get container status 
\"e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2\": rpc error: code = NotFound desc = could not find container \"e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2\": container with ID starting with e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2 not found: ID does not exist" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.386191 4910 scope.go:117] "RemoveContainer" containerID="c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad" Feb 26 22:08:48 crc kubenswrapper[4910]: E0226 22:08:48.386415 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad\": container with ID starting with c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad not found: ID does not exist" containerID="c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.386438 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad"} err="failed to get container status \"c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad\": rpc error: code = NotFound desc = could not find container \"c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad\": container with ID starting with c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad not found: ID does not exist" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.386450 4910 scope.go:117] "RemoveContainer" containerID="b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca" Feb 26 22:08:48 crc kubenswrapper[4910]: E0226 22:08:48.386659 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca\": container with ID starting with b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca not found: ID does not exist" containerID="b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.386689 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca"} err="failed to get container status \"b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca\": rpc error: code = NotFound desc = could not find container \"b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca\": container with ID starting with b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca not found: ID does not exist" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.386705 4910 scope.go:117] "RemoveContainer" containerID="4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50" Feb 26 22:08:48 crc kubenswrapper[4910]: E0226 22:08:48.386989 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\": container with ID starting with 4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50 not found: ID does not exist" containerID="4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.387029 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50"} err="failed to get container status \"4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\": rpc error: code = NotFound desc = could not find container \"4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\": container with ID 
starting with 4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50 not found: ID does not exist" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.387057 4910 scope.go:117] "RemoveContainer" containerID="19a3ebdc18c75b48e597b50681570c7243c24e8fccebcc02dba6868f95c4b579" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.387368 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19a3ebdc18c75b48e597b50681570c7243c24e8fccebcc02dba6868f95c4b579"} err="failed to get container status \"19a3ebdc18c75b48e597b50681570c7243c24e8fccebcc02dba6868f95c4b579\": rpc error: code = NotFound desc = could not find container \"19a3ebdc18c75b48e597b50681570c7243c24e8fccebcc02dba6868f95c4b579\": container with ID starting with 19a3ebdc18c75b48e597b50681570c7243c24e8fccebcc02dba6868f95c4b579 not found: ID does not exist" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.387388 4910 scope.go:117] "RemoveContainer" containerID="3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.387958 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19"} err="failed to get container status \"3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19\": rpc error: code = NotFound desc = could not find container \"3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19\": container with ID starting with 3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19 not found: ID does not exist" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.387996 4910 scope.go:117] "RemoveContainer" containerID="492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.388279 4910 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a"} err="failed to get container status \"492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a\": rpc error: code = NotFound desc = could not find container \"492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a\": container with ID starting with 492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a not found: ID does not exist" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.388302 4910 scope.go:117] "RemoveContainer" containerID="c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.388616 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401"} err="failed to get container status \"c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401\": rpc error: code = NotFound desc = could not find container \"c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401\": container with ID starting with c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401 not found: ID does not exist" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.388635 4910 scope.go:117] "RemoveContainer" containerID="454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.388853 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c"} err="failed to get container status \"454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c\": rpc error: code = NotFound desc = could not find container \"454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c\": container with ID starting with 454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c not found: ID does not 
exist" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.388872 4910 scope.go:117] "RemoveContainer" containerID="e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.389215 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2"} err="failed to get container status \"e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2\": rpc error: code = NotFound desc = could not find container \"e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2\": container with ID starting with e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2 not found: ID does not exist" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.389236 4910 scope.go:117] "RemoveContainer" containerID="c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.389614 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad"} err="failed to get container status \"c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad\": rpc error: code = NotFound desc = could not find container \"c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad\": container with ID starting with c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad not found: ID does not exist" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.389637 4910 scope.go:117] "RemoveContainer" containerID="b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.389848 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca"} err="failed to get container status 
\"b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca\": rpc error: code = NotFound desc = could not find container \"b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca\": container with ID starting with b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca not found: ID does not exist" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.389867 4910 scope.go:117] "RemoveContainer" containerID="4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.392463 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50"} err="failed to get container status \"4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\": rpc error: code = NotFound desc = could not find container \"4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\": container with ID starting with 4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50 not found: ID does not exist" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.392489 4910 scope.go:117] "RemoveContainer" containerID="19a3ebdc18c75b48e597b50681570c7243c24e8fccebcc02dba6868f95c4b579" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.392742 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19a3ebdc18c75b48e597b50681570c7243c24e8fccebcc02dba6868f95c4b579"} err="failed to get container status \"19a3ebdc18c75b48e597b50681570c7243c24e8fccebcc02dba6868f95c4b579\": rpc error: code = NotFound desc = could not find container \"19a3ebdc18c75b48e597b50681570c7243c24e8fccebcc02dba6868f95c4b579\": container with ID starting with 19a3ebdc18c75b48e597b50681570c7243c24e8fccebcc02dba6868f95c4b579 not found: ID does not exist" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.392769 4910 scope.go:117] "RemoveContainer" 
containerID="3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.392964 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19"} err="failed to get container status \"3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19\": rpc error: code = NotFound desc = could not find container \"3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19\": container with ID starting with 3aacb36dbe6be1bbf4c7b8e620be923a9167413c86b4cb01c31b677018010b19 not found: ID does not exist" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.392986 4910 scope.go:117] "RemoveContainer" containerID="492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.393229 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a"} err="failed to get container status \"492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a\": rpc error: code = NotFound desc = could not find container \"492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a\": container with ID starting with 492955b632c9273c73cf13452a0d6288f6892cbef5c18b5cc52296500f5ec11a not found: ID does not exist" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.393248 4910 scope.go:117] "RemoveContainer" containerID="c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.393505 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401"} err="failed to get container status \"c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401\": rpc error: code = NotFound desc = could 
not find container \"c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401\": container with ID starting with c0faa604d63f2892accd12ec834e53cd0ed43c01e405a357a271aa8239e7e401 not found: ID does not exist" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.393524 4910 scope.go:117] "RemoveContainer" containerID="454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.395265 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c"} err="failed to get container status \"454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c\": rpc error: code = NotFound desc = could not find container \"454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c\": container with ID starting with 454bbd96277651569ac1d77789f566971a6ca16c890fe11b7252ac6795f4f71c not found: ID does not exist" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.395306 4910 scope.go:117] "RemoveContainer" containerID="e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.395629 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2"} err="failed to get container status \"e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2\": rpc error: code = NotFound desc = could not find container \"e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2\": container with ID starting with e0a1d4e7984512f56b2194f1978e77037997fa71b96c4dba96ef0459cc450df2 not found: ID does not exist" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.395651 4910 scope.go:117] "RemoveContainer" containerID="c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 
22:08:48.395904 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad"} err="failed to get container status \"c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad\": rpc error: code = NotFound desc = could not find container \"c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad\": container with ID starting with c95140794842e09344b98c1053a4447be36a545212c085afb4f25c78789b3aad not found: ID does not exist" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.395928 4910 scope.go:117] "RemoveContainer" containerID="b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.396138 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca"} err="failed to get container status \"b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca\": rpc error: code = NotFound desc = could not find container \"b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca\": container with ID starting with b39b9920081edafa5e6534d74fc9b6753584753404d000227ccf4f6a6309c0ca not found: ID does not exist" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.396195 4910 scope.go:117] "RemoveContainer" containerID="4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50" Feb 26 22:08:48 crc kubenswrapper[4910]: I0226 22:08:48.396424 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50"} err="failed to get container status \"4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\": rpc error: code = NotFound desc = could not find container \"4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50\": container with ID starting with 
4e613b422e1b2814a0b02a082b9f21b195b866be940fcc450cdf276243537d50 not found: ID does not exist" Feb 26 22:08:49 crc kubenswrapper[4910]: I0226 22:08:49.202848 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" event={"ID":"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8","Type":"ContainerStarted","Data":"69d8a54fab77a0ea2858518f2bcb2d1d78c2468f7e211163d2a6dc389868388a"} Feb 26 22:08:49 crc kubenswrapper[4910]: I0226 22:08:49.203241 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" event={"ID":"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8","Type":"ContainerStarted","Data":"0576d1a17a500f65c3a0d674872e4571e8ea2bb50647a40c63179b2763341c76"} Feb 26 22:08:49 crc kubenswrapper[4910]: I0226 22:08:49.203280 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" event={"ID":"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8","Type":"ContainerStarted","Data":"6e9f13053fcdf5dccc35928e21b029b9c859bb4f87271f5e8e273bdd5e3c481f"} Feb 26 22:08:49 crc kubenswrapper[4910]: I0226 22:08:49.203290 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" event={"ID":"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8","Type":"ContainerStarted","Data":"b8b73858ef9a74e4c90fbafcfa80db4d4423f2e27f28aba9a0395cb96dfdb18c"} Feb 26 22:08:49 crc kubenswrapper[4910]: I0226 22:08:49.203298 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" event={"ID":"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8","Type":"ContainerStarted","Data":"e1563d2817e2661168ff6210e9ff0a9f60e3a980c9ad8f72218f4b5fdf25892b"} Feb 26 22:08:49 crc kubenswrapper[4910]: I0226 22:08:49.203306 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" 
event={"ID":"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8","Type":"ContainerStarted","Data":"05c422c0060be518e4d1bc7f668ac23568a4aa3e4df5aefc253f7f756ae17efd"} Feb 26 22:08:49 crc kubenswrapper[4910]: I0226 22:08:49.907294 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41cb54c7-260b-42d4-8ae9-cf2a195721be" path="/var/lib/kubelet/pods/41cb54c7-260b-42d4-8ae9-cf2a195721be/volumes" Feb 26 22:08:52 crc kubenswrapper[4910]: I0226 22:08:52.227122 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" event={"ID":"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8","Type":"ContainerStarted","Data":"7087a6e232c0005c9b41b9a841570792157ca8d6b6757b66771bd89f5f52aaae"} Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.118141 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-qxq9v"] Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.118971 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qxq9v" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.121076 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-d8pps" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.121178 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.121561 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.246804 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf"] Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.247915 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.250542 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.250769 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-xcs2m" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.269470 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx"] Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.270470 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.278495 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e29777de-7955-4e02-88fd-51c42f732421-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf\" (UID: \"e29777de-7955-4e02-88fd-51c42f732421\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.278860 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e29777de-7955-4e02-88fd-51c42f732421-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf\" (UID: \"e29777de-7955-4e02-88fd-51c42f732421\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.279021 4910 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/451b4f8d-8570-479f-bb0e-ddbb695bf345-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx\" (UID: \"451b4f8d-8570-479f-bb0e-ddbb695bf345\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.279184 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj4dt\" (UniqueName: \"kubernetes.io/projected/0bbb4449-bb9e-4d59-9b01-10b3180055c0-kube-api-access-zj4dt\") pod \"obo-prometheus-operator-68bc856cb9-qxq9v\" (UID: \"0bbb4449-bb9e-4d59-9b01-10b3180055c0\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qxq9v" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.279311 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/451b4f8d-8570-479f-bb0e-ddbb695bf345-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx\" (UID: \"451b4f8d-8570-479f-bb0e-ddbb695bf345\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.380177 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj4dt\" (UniqueName: \"kubernetes.io/projected/0bbb4449-bb9e-4d59-9b01-10b3180055c0-kube-api-access-zj4dt\") pod \"obo-prometheus-operator-68bc856cb9-qxq9v\" (UID: \"0bbb4449-bb9e-4d59-9b01-10b3180055c0\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qxq9v" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.380240 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/451b4f8d-8570-479f-bb0e-ddbb695bf345-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx\" (UID: \"451b4f8d-8570-479f-bb0e-ddbb695bf345\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.380298 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e29777de-7955-4e02-88fd-51c42f732421-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf\" (UID: \"e29777de-7955-4e02-88fd-51c42f732421\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.380337 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e29777de-7955-4e02-88fd-51c42f732421-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf\" (UID: \"e29777de-7955-4e02-88fd-51c42f732421\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.380363 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/451b4f8d-8570-479f-bb0e-ddbb695bf345-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx\" (UID: \"451b4f8d-8570-479f-bb0e-ddbb695bf345\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.384657 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/451b4f8d-8570-479f-bb0e-ddbb695bf345-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx\" (UID: \"451b4f8d-8570-479f-bb0e-ddbb695bf345\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.385665 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e29777de-7955-4e02-88fd-51c42f732421-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf\" (UID: \"e29777de-7955-4e02-88fd-51c42f732421\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.386445 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/451b4f8d-8570-479f-bb0e-ddbb695bf345-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx\" (UID: \"451b4f8d-8570-479f-bb0e-ddbb695bf345\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.387486 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e29777de-7955-4e02-88fd-51c42f732421-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf\" (UID: \"e29777de-7955-4e02-88fd-51c42f732421\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.398652 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj4dt\" (UniqueName: \"kubernetes.io/projected/0bbb4449-bb9e-4d59-9b01-10b3180055c0-kube-api-access-zj4dt\") pod \"obo-prometheus-operator-68bc856cb9-qxq9v\" (UID: \"0bbb4449-bb9e-4d59-9b01-10b3180055c0\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qxq9v" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.435102 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qxq9v" Feb 26 22:08:53 crc kubenswrapper[4910]: E0226 22:08:53.457370 4910 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qxq9v_openshift-operators_0bbb4449-bb9e-4d59-9b01-10b3180055c0_0(15a7eaf42f9d75efbcf62c832c7708e3ce2a5f6bff64bac413c31f7ef83fdc05): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 22:08:53 crc kubenswrapper[4910]: E0226 22:08:53.457437 4910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qxq9v_openshift-operators_0bbb4449-bb9e-4d59-9b01-10b3180055c0_0(15a7eaf42f9d75efbcf62c832c7708e3ce2a5f6bff64bac413c31f7ef83fdc05): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qxq9v" Feb 26 22:08:53 crc kubenswrapper[4910]: E0226 22:08:53.457456 4910 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qxq9v_openshift-operators_0bbb4449-bb9e-4d59-9b01-10b3180055c0_0(15a7eaf42f9d75efbcf62c832c7708e3ce2a5f6bff64bac413c31f7ef83fdc05): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qxq9v" Feb 26 22:08:53 crc kubenswrapper[4910]: E0226 22:08:53.457495 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-qxq9v_openshift-operators(0bbb4449-bb9e-4d59-9b01-10b3180055c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-qxq9v_openshift-operators(0bbb4449-bb9e-4d59-9b01-10b3180055c0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qxq9v_openshift-operators_0bbb4449-bb9e-4d59-9b01-10b3180055c0_0(15a7eaf42f9d75efbcf62c832c7708e3ce2a5f6bff64bac413c31f7ef83fdc05): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qxq9v" podUID="0bbb4449-bb9e-4d59-9b01-10b3180055c0" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.459703 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-z5vhp"] Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.460550 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-z5vhp" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.462329 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-5bbhg" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.462618 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.481423 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/99ab363a-bae9-4e7d-9b11-668cbde4a8d3-observability-operator-tls\") pod \"observability-operator-59bdc8b94-z5vhp\" (UID: \"99ab363a-bae9-4e7d-9b11-668cbde4a8d3\") " pod="openshift-operators/observability-operator-59bdc8b94-z5vhp" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.481487 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp8sf\" (UniqueName: \"kubernetes.io/projected/99ab363a-bae9-4e7d-9b11-668cbde4a8d3-kube-api-access-tp8sf\") pod \"observability-operator-59bdc8b94-z5vhp\" (UID: \"99ab363a-bae9-4e7d-9b11-668cbde4a8d3\") " pod="openshift-operators/observability-operator-59bdc8b94-z5vhp" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.555313 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-gv5qh"] Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.556393 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-gv5qh" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.558445 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-4pc64" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.563503 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.582268 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvqmt\" (UniqueName: \"kubernetes.io/projected/c4e1f736-965a-4540-b006-4138cb8f08ad-kube-api-access-cvqmt\") pod \"perses-operator-5bf474d74f-gv5qh\" (UID: \"c4e1f736-965a-4540-b006-4138cb8f08ad\") " pod="openshift-operators/perses-operator-5bf474d74f-gv5qh" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.582346 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/99ab363a-bae9-4e7d-9b11-668cbde4a8d3-observability-operator-tls\") pod \"observability-operator-59bdc8b94-z5vhp\" (UID: \"99ab363a-bae9-4e7d-9b11-668cbde4a8d3\") " pod="openshift-operators/observability-operator-59bdc8b94-z5vhp" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.582372 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c4e1f736-965a-4540-b006-4138cb8f08ad-openshift-service-ca\") pod \"perses-operator-5bf474d74f-gv5qh\" (UID: \"c4e1f736-965a-4540-b006-4138cb8f08ad\") " pod="openshift-operators/perses-operator-5bf474d74f-gv5qh" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.582405 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp8sf\" (UniqueName: 
\"kubernetes.io/projected/99ab363a-bae9-4e7d-9b11-668cbde4a8d3-kube-api-access-tp8sf\") pod \"observability-operator-59bdc8b94-z5vhp\" (UID: \"99ab363a-bae9-4e7d-9b11-668cbde4a8d3\") " pod="openshift-operators/observability-operator-59bdc8b94-z5vhp" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.588326 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.589824 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/99ab363a-bae9-4e7d-9b11-668cbde4a8d3-observability-operator-tls\") pod \"observability-operator-59bdc8b94-z5vhp\" (UID: \"99ab363a-bae9-4e7d-9b11-668cbde4a8d3\") " pod="openshift-operators/observability-operator-59bdc8b94-z5vhp" Feb 26 22:08:53 crc kubenswrapper[4910]: E0226 22:08:53.600297 4910 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf_openshift-operators_e29777de-7955-4e02-88fd-51c42f732421_0(bf4044f561c77f0250c6367d029e282a79d382043ffcaf313a12838eca34bd7f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 22:08:53 crc kubenswrapper[4910]: E0226 22:08:53.600383 4910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf_openshift-operators_e29777de-7955-4e02-88fd-51c42f732421_0(bf4044f561c77f0250c6367d029e282a79d382043ffcaf313a12838eca34bd7f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf" Feb 26 22:08:53 crc kubenswrapper[4910]: E0226 22:08:53.600410 4910 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf_openshift-operators_e29777de-7955-4e02-88fd-51c42f732421_0(bf4044f561c77f0250c6367d029e282a79d382043ffcaf313a12838eca34bd7f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf" Feb 26 22:08:53 crc kubenswrapper[4910]: E0226 22:08:53.600468 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf_openshift-operators(e29777de-7955-4e02-88fd-51c42f732421)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf_openshift-operators(e29777de-7955-4e02-88fd-51c42f732421)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf_openshift-operators_e29777de-7955-4e02-88fd-51c42f732421_0(bf4044f561c77f0250c6367d029e282a79d382043ffcaf313a12838eca34bd7f): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf" podUID="e29777de-7955-4e02-88fd-51c42f732421" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.600790 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp8sf\" (UniqueName: \"kubernetes.io/projected/99ab363a-bae9-4e7d-9b11-668cbde4a8d3-kube-api-access-tp8sf\") pod \"observability-operator-59bdc8b94-z5vhp\" (UID: \"99ab363a-bae9-4e7d-9b11-668cbde4a8d3\") " pod="openshift-operators/observability-operator-59bdc8b94-z5vhp" Feb 26 22:08:53 crc kubenswrapper[4910]: E0226 22:08:53.618414 4910 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx_openshift-operators_451b4f8d-8570-479f-bb0e-ddbb695bf345_0(ed3f6088bce7de8adf9739b2d2ad18b26a505c6312ef4a4bb16e698875229c2e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 22:08:53 crc kubenswrapper[4910]: E0226 22:08:53.618536 4910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx_openshift-operators_451b4f8d-8570-479f-bb0e-ddbb695bf345_0(ed3f6088bce7de8adf9739b2d2ad18b26a505c6312ef4a4bb16e698875229c2e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx" Feb 26 22:08:53 crc kubenswrapper[4910]: E0226 22:08:53.618609 4910 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx_openshift-operators_451b4f8d-8570-479f-bb0e-ddbb695bf345_0(ed3f6088bce7de8adf9739b2d2ad18b26a505c6312ef4a4bb16e698875229c2e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx" Feb 26 22:08:53 crc kubenswrapper[4910]: E0226 22:08:53.618696 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx_openshift-operators(451b4f8d-8570-479f-bb0e-ddbb695bf345)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx_openshift-operators(451b4f8d-8570-479f-bb0e-ddbb695bf345)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx_openshift-operators_451b4f8d-8570-479f-bb0e-ddbb695bf345_0(ed3f6088bce7de8adf9739b2d2ad18b26a505c6312ef4a4bb16e698875229c2e): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx" podUID="451b4f8d-8570-479f-bb0e-ddbb695bf345" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.683232 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvqmt\" (UniqueName: \"kubernetes.io/projected/c4e1f736-965a-4540-b006-4138cb8f08ad-kube-api-access-cvqmt\") pod \"perses-operator-5bf474d74f-gv5qh\" (UID: \"c4e1f736-965a-4540-b006-4138cb8f08ad\") " pod="openshift-operators/perses-operator-5bf474d74f-gv5qh" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.683482 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c4e1f736-965a-4540-b006-4138cb8f08ad-openshift-service-ca\") pod \"perses-operator-5bf474d74f-gv5qh\" (UID: \"c4e1f736-965a-4540-b006-4138cb8f08ad\") " pod="openshift-operators/perses-operator-5bf474d74f-gv5qh" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.684481 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c4e1f736-965a-4540-b006-4138cb8f08ad-openshift-service-ca\") pod \"perses-operator-5bf474d74f-gv5qh\" (UID: \"c4e1f736-965a-4540-b006-4138cb8f08ad\") " pod="openshift-operators/perses-operator-5bf474d74f-gv5qh" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.703112 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvqmt\" (UniqueName: \"kubernetes.io/projected/c4e1f736-965a-4540-b006-4138cb8f08ad-kube-api-access-cvqmt\") pod \"perses-operator-5bf474d74f-gv5qh\" (UID: \"c4e1f736-965a-4540-b006-4138cb8f08ad\") " pod="openshift-operators/perses-operator-5bf474d74f-gv5qh" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.778699 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-z5vhp" Feb 26 22:08:53 crc kubenswrapper[4910]: E0226 22:08:53.820040 4910 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-z5vhp_openshift-operators_99ab363a-bae9-4e7d-9b11-668cbde4a8d3_0(3d61eff08a99431ccd1c266097d637708e428d41aa712c83e3dd65b56c8d7440): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 22:08:53 crc kubenswrapper[4910]: E0226 22:08:53.820104 4910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-z5vhp_openshift-operators_99ab363a-bae9-4e7d-9b11-668cbde4a8d3_0(3d61eff08a99431ccd1c266097d637708e428d41aa712c83e3dd65b56c8d7440): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-z5vhp" Feb 26 22:08:53 crc kubenswrapper[4910]: E0226 22:08:53.820126 4910 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-z5vhp_openshift-operators_99ab363a-bae9-4e7d-9b11-668cbde4a8d3_0(3d61eff08a99431ccd1c266097d637708e428d41aa712c83e3dd65b56c8d7440): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-z5vhp" Feb 26 22:08:53 crc kubenswrapper[4910]: E0226 22:08:53.820183 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-z5vhp_openshift-operators(99ab363a-bae9-4e7d-9b11-668cbde4a8d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-z5vhp_openshift-operators(99ab363a-bae9-4e7d-9b11-668cbde4a8d3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-z5vhp_openshift-operators_99ab363a-bae9-4e7d-9b11-668cbde4a8d3_0(3d61eff08a99431ccd1c266097d637708e428d41aa712c83e3dd65b56c8d7440): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-z5vhp" podUID="99ab363a-bae9-4e7d-9b11-668cbde4a8d3" Feb 26 22:08:53 crc kubenswrapper[4910]: I0226 22:08:53.868891 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-gv5qh" Feb 26 22:08:53 crc kubenswrapper[4910]: E0226 22:08:53.894311 4910 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-gv5qh_openshift-operators_c4e1f736-965a-4540-b006-4138cb8f08ad_0(c758f377b6686d519097784edb1f8d6f9842274992550a6d358ed5508926bec5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 22:08:53 crc kubenswrapper[4910]: E0226 22:08:53.894370 4910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-gv5qh_openshift-operators_c4e1f736-965a-4540-b006-4138cb8f08ad_0(c758f377b6686d519097784edb1f8d6f9842274992550a6d358ed5508926bec5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-gv5qh" Feb 26 22:08:53 crc kubenswrapper[4910]: E0226 22:08:53.894390 4910 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-gv5qh_openshift-operators_c4e1f736-965a-4540-b006-4138cb8f08ad_0(c758f377b6686d519097784edb1f8d6f9842274992550a6d358ed5508926bec5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-gv5qh" Feb 26 22:08:53 crc kubenswrapper[4910]: E0226 22:08:53.894433 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-gv5qh_openshift-operators(c4e1f736-965a-4540-b006-4138cb8f08ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-gv5qh_openshift-operators(c4e1f736-965a-4540-b006-4138cb8f08ad)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-gv5qh_openshift-operators_c4e1f736-965a-4540-b006-4138cb8f08ad_0(c758f377b6686d519097784edb1f8d6f9842274992550a6d358ed5508926bec5): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-gv5qh" podUID="c4e1f736-965a-4540-b006-4138cb8f08ad" Feb 26 22:08:54 crc kubenswrapper[4910]: I0226 22:08:54.246120 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" event={"ID":"9fa77f99-ce5c-435e-aa8b-c872d5ddcdd8","Type":"ContainerStarted","Data":"0fb41dd57aa7ff84efa049688225ab0da255f4a6e0164bc31f405a9b97d29be1"} Feb 26 22:08:54 crc kubenswrapper[4910]: I0226 22:08:54.246684 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:54 crc kubenswrapper[4910]: I0226 22:08:54.246860 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:54 crc kubenswrapper[4910]: I0226 22:08:54.246882 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:54 crc kubenswrapper[4910]: I0226 22:08:54.278032 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:54 crc kubenswrapper[4910]: I0226 22:08:54.290895 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" podStartSLOduration=7.290876451 podStartE2EDuration="7.290876451s" podCreationTimestamp="2026-02-26 22:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:08:54.285585637 +0000 UTC m=+819.365076188" watchObservedRunningTime="2026-02-26 22:08:54.290876451 +0000 UTC m=+819.370367012" Feb 26 22:08:54 crc kubenswrapper[4910]: I0226 22:08:54.293186 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:08:54 crc kubenswrapper[4910]: 
I0226 22:08:54.412909 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx"] Feb 26 22:08:54 crc kubenswrapper[4910]: I0226 22:08:54.413035 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx" Feb 26 22:08:54 crc kubenswrapper[4910]: I0226 22:08:54.413511 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx" Feb 26 22:08:54 crc kubenswrapper[4910]: I0226 22:08:54.428384 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-z5vhp"] Feb 26 22:08:54 crc kubenswrapper[4910]: I0226 22:08:54.428488 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-z5vhp" Feb 26 22:08:54 crc kubenswrapper[4910]: I0226 22:08:54.428931 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-z5vhp" Feb 26 22:08:54 crc kubenswrapper[4910]: I0226 22:08:54.437493 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-qxq9v"] Feb 26 22:08:54 crc kubenswrapper[4910]: I0226 22:08:54.437606 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qxq9v" Feb 26 22:08:54 crc kubenswrapper[4910]: I0226 22:08:54.438028 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qxq9v" Feb 26 22:08:54 crc kubenswrapper[4910]: E0226 22:08:54.460180 4910 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx_openshift-operators_451b4f8d-8570-479f-bb0e-ddbb695bf345_0(1df0819a3261b273318c78a660eea88182ca6ca778097a602667c962158d1f59): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 22:08:54 crc kubenswrapper[4910]: E0226 22:08:54.460738 4910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx_openshift-operators_451b4f8d-8570-479f-bb0e-ddbb695bf345_0(1df0819a3261b273318c78a660eea88182ca6ca778097a602667c962158d1f59): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx" Feb 26 22:08:54 crc kubenswrapper[4910]: E0226 22:08:54.460910 4910 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx_openshift-operators_451b4f8d-8570-479f-bb0e-ddbb695bf345_0(1df0819a3261b273318c78a660eea88182ca6ca778097a602667c962158d1f59): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx" Feb 26 22:08:54 crc kubenswrapper[4910]: E0226 22:08:54.461009 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx_openshift-operators(451b4f8d-8570-479f-bb0e-ddbb695bf345)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx_openshift-operators(451b4f8d-8570-479f-bb0e-ddbb695bf345)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx_openshift-operators_451b4f8d-8570-479f-bb0e-ddbb695bf345_0(1df0819a3261b273318c78a660eea88182ca6ca778097a602667c962158d1f59): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx" podUID="451b4f8d-8570-479f-bb0e-ddbb695bf345" Feb 26 22:08:54 crc kubenswrapper[4910]: E0226 22:08:54.475275 4910 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-z5vhp_openshift-operators_99ab363a-bae9-4e7d-9b11-668cbde4a8d3_0(60b06988fb901d1f5ac6e429aa846c16c4df33ac379f441369fb1548cd41b3b7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 22:08:54 crc kubenswrapper[4910]: E0226 22:08:54.476247 4910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-z5vhp_openshift-operators_99ab363a-bae9-4e7d-9b11-668cbde4a8d3_0(60b06988fb901d1f5ac6e429aa846c16c4df33ac379f441369fb1548cd41b3b7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-z5vhp" Feb 26 22:08:54 crc kubenswrapper[4910]: E0226 22:08:54.476330 4910 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-z5vhp_openshift-operators_99ab363a-bae9-4e7d-9b11-668cbde4a8d3_0(60b06988fb901d1f5ac6e429aa846c16c4df33ac379f441369fb1548cd41b3b7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-z5vhp" Feb 26 22:08:54 crc kubenswrapper[4910]: E0226 22:08:54.476437 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-z5vhp_openshift-operators(99ab363a-bae9-4e7d-9b11-668cbde4a8d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-z5vhp_openshift-operators(99ab363a-bae9-4e7d-9b11-668cbde4a8d3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-z5vhp_openshift-operators_99ab363a-bae9-4e7d-9b11-668cbde4a8d3_0(60b06988fb901d1f5ac6e429aa846c16c4df33ac379f441369fb1548cd41b3b7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-z5vhp" podUID="99ab363a-bae9-4e7d-9b11-668cbde4a8d3" Feb 26 22:08:54 crc kubenswrapper[4910]: I0226 22:08:54.478497 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-gv5qh"] Feb 26 22:08:54 crc kubenswrapper[4910]: I0226 22:08:54.478624 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-gv5qh" Feb 26 22:08:54 crc kubenswrapper[4910]: I0226 22:08:54.479047 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-gv5qh" Feb 26 22:08:54 crc kubenswrapper[4910]: I0226 22:08:54.484237 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf"] Feb 26 22:08:54 crc kubenswrapper[4910]: I0226 22:08:54.484455 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf" Feb 26 22:08:54 crc kubenswrapper[4910]: I0226 22:08:54.484907 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf" Feb 26 22:08:54 crc kubenswrapper[4910]: E0226 22:08:54.490736 4910 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qxq9v_openshift-operators_0bbb4449-bb9e-4d59-9b01-10b3180055c0_0(1b9e19a71e2c112867d65d9079c1731193f400d03e195adb26151418b236d510): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 22:08:54 crc kubenswrapper[4910]: E0226 22:08:54.491046 4910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qxq9v_openshift-operators_0bbb4449-bb9e-4d59-9b01-10b3180055c0_0(1b9e19a71e2c112867d65d9079c1731193f400d03e195adb26151418b236d510): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qxq9v" Feb 26 22:08:54 crc kubenswrapper[4910]: E0226 22:08:54.491070 4910 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qxq9v_openshift-operators_0bbb4449-bb9e-4d59-9b01-10b3180055c0_0(1b9e19a71e2c112867d65d9079c1731193f400d03e195adb26151418b236d510): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qxq9v" Feb 26 22:08:54 crc kubenswrapper[4910]: E0226 22:08:54.491117 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-qxq9v_openshift-operators(0bbb4449-bb9e-4d59-9b01-10b3180055c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-qxq9v_openshift-operators(0bbb4449-bb9e-4d59-9b01-10b3180055c0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qxq9v_openshift-operators_0bbb4449-bb9e-4d59-9b01-10b3180055c0_0(1b9e19a71e2c112867d65d9079c1731193f400d03e195adb26151418b236d510): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qxq9v" podUID="0bbb4449-bb9e-4d59-9b01-10b3180055c0" Feb 26 22:08:54 crc kubenswrapper[4910]: E0226 22:08:54.513370 4910 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-gv5qh_openshift-operators_c4e1f736-965a-4540-b006-4138cb8f08ad_0(cbbefa4d5f8bf6e32a43498ba80f6202154cfce29604baa2edcc689a1ac78f12): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 22:08:54 crc kubenswrapper[4910]: E0226 22:08:54.513422 4910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-gv5qh_openshift-operators_c4e1f736-965a-4540-b006-4138cb8f08ad_0(cbbefa4d5f8bf6e32a43498ba80f6202154cfce29604baa2edcc689a1ac78f12): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-gv5qh" Feb 26 22:08:54 crc kubenswrapper[4910]: E0226 22:08:54.513442 4910 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-gv5qh_openshift-operators_c4e1f736-965a-4540-b006-4138cb8f08ad_0(cbbefa4d5f8bf6e32a43498ba80f6202154cfce29604baa2edcc689a1ac78f12): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-gv5qh" Feb 26 22:08:54 crc kubenswrapper[4910]: E0226 22:08:54.513479 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-gv5qh_openshift-operators(c4e1f736-965a-4540-b006-4138cb8f08ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-gv5qh_openshift-operators(c4e1f736-965a-4540-b006-4138cb8f08ad)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-gv5qh_openshift-operators_c4e1f736-965a-4540-b006-4138cb8f08ad_0(cbbefa4d5f8bf6e32a43498ba80f6202154cfce29604baa2edcc689a1ac78f12): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-gv5qh" podUID="c4e1f736-965a-4540-b006-4138cb8f08ad" Feb 26 22:08:54 crc kubenswrapper[4910]: E0226 22:08:54.519663 4910 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf_openshift-operators_e29777de-7955-4e02-88fd-51c42f732421_0(434d7054f640726bdc8e9728ddcff4ad11a885033b5406bda0eb76b8bbc973b3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 22:08:54 crc kubenswrapper[4910]: E0226 22:08:54.519703 4910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf_openshift-operators_e29777de-7955-4e02-88fd-51c42f732421_0(434d7054f640726bdc8e9728ddcff4ad11a885033b5406bda0eb76b8bbc973b3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf" Feb 26 22:08:54 crc kubenswrapper[4910]: E0226 22:08:54.519720 4910 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf_openshift-operators_e29777de-7955-4e02-88fd-51c42f732421_0(434d7054f640726bdc8e9728ddcff4ad11a885033b5406bda0eb76b8bbc973b3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf" Feb 26 22:08:54 crc kubenswrapper[4910]: E0226 22:08:54.519752 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf_openshift-operators(e29777de-7955-4e02-88fd-51c42f732421)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf_openshift-operators(e29777de-7955-4e02-88fd-51c42f732421)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf_openshift-operators_e29777de-7955-4e02-88fd-51c42f732421_0(434d7054f640726bdc8e9728ddcff4ad11a885033b5406bda0eb76b8bbc973b3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf" podUID="e29777de-7955-4e02-88fd-51c42f732421" Feb 26 22:08:57 crc kubenswrapper[4910]: I0226 22:08:57.901404 4910 scope.go:117] "RemoveContainer" containerID="0206f2babef31f4c9359fa5e49447fa3c2c463f5dfd690dac95da1a45bea19e3" Feb 26 22:08:57 crc kubenswrapper[4910]: E0226 22:08:57.901868 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-795gt_openshift-multus(d78660ec-f27f-43be-add6-8fab38329537)\"" pod="openshift-multus/multus-795gt" podUID="d78660ec-f27f-43be-add6-8fab38329537" Feb 26 22:09:06 crc kubenswrapper[4910]: I0226 22:09:06.900461 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx" Feb 26 22:09:06 crc kubenswrapper[4910]: I0226 22:09:06.901436 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx" Feb 26 22:09:06 crc kubenswrapper[4910]: E0226 22:09:06.924734 4910 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx_openshift-operators_451b4f8d-8570-479f-bb0e-ddbb695bf345_0(6acd3699e583935de0dc63b93279879a1bbc663b442a172eec466fab19adf923): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 22:09:06 crc kubenswrapper[4910]: E0226 22:09:06.924801 4910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx_openshift-operators_451b4f8d-8570-479f-bb0e-ddbb695bf345_0(6acd3699e583935de0dc63b93279879a1bbc663b442a172eec466fab19adf923): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx" Feb 26 22:09:06 crc kubenswrapper[4910]: E0226 22:09:06.924822 4910 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx_openshift-operators_451b4f8d-8570-479f-bb0e-ddbb695bf345_0(6acd3699e583935de0dc63b93279879a1bbc663b442a172eec466fab19adf923): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx" Feb 26 22:09:06 crc kubenswrapper[4910]: E0226 22:09:06.924875 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx_openshift-operators(451b4f8d-8570-479f-bb0e-ddbb695bf345)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx_openshift-operators(451b4f8d-8570-479f-bb0e-ddbb695bf345)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx_openshift-operators_451b4f8d-8570-479f-bb0e-ddbb695bf345_0(6acd3699e583935de0dc63b93279879a1bbc663b442a172eec466fab19adf923): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx" podUID="451b4f8d-8570-479f-bb0e-ddbb695bf345" Feb 26 22:09:08 crc kubenswrapper[4910]: I0226 22:09:08.900807 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qxq9v" Feb 26 22:09:08 crc kubenswrapper[4910]: I0226 22:09:08.900835 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-z5vhp" Feb 26 22:09:08 crc kubenswrapper[4910]: I0226 22:09:08.902967 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qxq9v" Feb 26 22:09:08 crc kubenswrapper[4910]: I0226 22:09:08.903126 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-z5vhp" Feb 26 22:09:08 crc kubenswrapper[4910]: E0226 22:09:08.955397 4910 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qxq9v_openshift-operators_0bbb4449-bb9e-4d59-9b01-10b3180055c0_0(5a50381bc61686e1461148d264f3cbc1ce66518118306d7335d45df62a7e11da): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 22:09:08 crc kubenswrapper[4910]: E0226 22:09:08.955482 4910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qxq9v_openshift-operators_0bbb4449-bb9e-4d59-9b01-10b3180055c0_0(5a50381bc61686e1461148d264f3cbc1ce66518118306d7335d45df62a7e11da): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qxq9v" Feb 26 22:09:08 crc kubenswrapper[4910]: E0226 22:09:08.955515 4910 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qxq9v_openshift-operators_0bbb4449-bb9e-4d59-9b01-10b3180055c0_0(5a50381bc61686e1461148d264f3cbc1ce66518118306d7335d45df62a7e11da): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qxq9v" Feb 26 22:09:08 crc kubenswrapper[4910]: E0226 22:09:08.955590 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-qxq9v_openshift-operators(0bbb4449-bb9e-4d59-9b01-10b3180055c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-qxq9v_openshift-operators(0bbb4449-bb9e-4d59-9b01-10b3180055c0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qxq9v_openshift-operators_0bbb4449-bb9e-4d59-9b01-10b3180055c0_0(5a50381bc61686e1461148d264f3cbc1ce66518118306d7335d45df62a7e11da): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qxq9v" podUID="0bbb4449-bb9e-4d59-9b01-10b3180055c0" Feb 26 22:09:08 crc kubenswrapper[4910]: E0226 22:09:08.965590 4910 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-z5vhp_openshift-operators_99ab363a-bae9-4e7d-9b11-668cbde4a8d3_0(7667218a53179cf273ced553096f2018a41efa8a59807f8749e8fadf6b440ab7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 22:09:08 crc kubenswrapper[4910]: E0226 22:09:08.965676 4910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-z5vhp_openshift-operators_99ab363a-bae9-4e7d-9b11-668cbde4a8d3_0(7667218a53179cf273ced553096f2018a41efa8a59807f8749e8fadf6b440ab7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-z5vhp" Feb 26 22:09:08 crc kubenswrapper[4910]: E0226 22:09:08.965720 4910 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-z5vhp_openshift-operators_99ab363a-bae9-4e7d-9b11-668cbde4a8d3_0(7667218a53179cf273ced553096f2018a41efa8a59807f8749e8fadf6b440ab7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-z5vhp" Feb 26 22:09:08 crc kubenswrapper[4910]: E0226 22:09:08.965798 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-z5vhp_openshift-operators(99ab363a-bae9-4e7d-9b11-668cbde4a8d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-z5vhp_openshift-operators(99ab363a-bae9-4e7d-9b11-668cbde4a8d3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-z5vhp_openshift-operators_99ab363a-bae9-4e7d-9b11-668cbde4a8d3_0(7667218a53179cf273ced553096f2018a41efa8a59807f8749e8fadf6b440ab7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-z5vhp" podUID="99ab363a-bae9-4e7d-9b11-668cbde4a8d3" Feb 26 22:09:09 crc kubenswrapper[4910]: I0226 22:09:09.901545 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-gv5qh" Feb 26 22:09:09 crc kubenswrapper[4910]: I0226 22:09:09.901565 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf" Feb 26 22:09:09 crc kubenswrapper[4910]: I0226 22:09:09.901903 4910 scope.go:117] "RemoveContainer" containerID="0206f2babef31f4c9359fa5e49447fa3c2c463f5dfd690dac95da1a45bea19e3" Feb 26 22:09:09 crc kubenswrapper[4910]: I0226 22:09:09.902437 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf" Feb 26 22:09:09 crc kubenswrapper[4910]: I0226 22:09:09.902607 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-gv5qh" Feb 26 22:09:09 crc kubenswrapper[4910]: E0226 22:09:09.941137 4910 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf_openshift-operators_e29777de-7955-4e02-88fd-51c42f732421_0(04da9e9b8fa0dffc45e5c1a9eed0a02d65e994ad753d99754daf10b5ea4c219b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 22:09:09 crc kubenswrapper[4910]: E0226 22:09:09.941223 4910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf_openshift-operators_e29777de-7955-4e02-88fd-51c42f732421_0(04da9e9b8fa0dffc45e5c1a9eed0a02d65e994ad753d99754daf10b5ea4c219b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf" Feb 26 22:09:09 crc kubenswrapper[4910]: E0226 22:09:09.941253 4910 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf_openshift-operators_e29777de-7955-4e02-88fd-51c42f732421_0(04da9e9b8fa0dffc45e5c1a9eed0a02d65e994ad753d99754daf10b5ea4c219b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf" Feb 26 22:09:09 crc kubenswrapper[4910]: E0226 22:09:09.941313 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf_openshift-operators(e29777de-7955-4e02-88fd-51c42f732421)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf_openshift-operators(e29777de-7955-4e02-88fd-51c42f732421)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf_openshift-operators_e29777de-7955-4e02-88fd-51c42f732421_0(04da9e9b8fa0dffc45e5c1a9eed0a02d65e994ad753d99754daf10b5ea4c219b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf" podUID="e29777de-7955-4e02-88fd-51c42f732421" Feb 26 22:09:09 crc kubenswrapper[4910]: E0226 22:09:09.947049 4910 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-gv5qh_openshift-operators_c4e1f736-965a-4540-b006-4138cb8f08ad_0(77140121775f18d9c2854d038fee5803d61d39324c59cbd617b73ac2629aabcb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 22:09:09 crc kubenswrapper[4910]: E0226 22:09:09.947096 4910 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-gv5qh_openshift-operators_c4e1f736-965a-4540-b006-4138cb8f08ad_0(77140121775f18d9c2854d038fee5803d61d39324c59cbd617b73ac2629aabcb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-gv5qh" Feb 26 22:09:09 crc kubenswrapper[4910]: E0226 22:09:09.947119 4910 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-gv5qh_openshift-operators_c4e1f736-965a-4540-b006-4138cb8f08ad_0(77140121775f18d9c2854d038fee5803d61d39324c59cbd617b73ac2629aabcb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-gv5qh" Feb 26 22:09:09 crc kubenswrapper[4910]: E0226 22:09:09.947180 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-gv5qh_openshift-operators(c4e1f736-965a-4540-b006-4138cb8f08ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-gv5qh_openshift-operators(c4e1f736-965a-4540-b006-4138cb8f08ad)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-gv5qh_openshift-operators_c4e1f736-965a-4540-b006-4138cb8f08ad_0(77140121775f18d9c2854d038fee5803d61d39324c59cbd617b73ac2629aabcb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-gv5qh" podUID="c4e1f736-965a-4540-b006-4138cb8f08ad" Feb 26 22:09:10 crc kubenswrapper[4910]: I0226 22:09:10.345990 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-795gt_d78660ec-f27f-43be-add6-8fab38329537/kube-multus/2.log" Feb 26 22:09:10 crc kubenswrapper[4910]: I0226 22:09:10.346415 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-795gt" event={"ID":"d78660ec-f27f-43be-add6-8fab38329537","Type":"ContainerStarted","Data":"55d66d420b645e43fa03cbbebe6b4db4da5240e64f2219c46c762aee025211a6"} Feb 26 22:09:17 crc kubenswrapper[4910]: I0226 22:09:17.708428 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5mvh9" Feb 26 22:09:20 crc kubenswrapper[4910]: I0226 22:09:20.901373 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-z5vhp" Feb 26 22:09:20 crc kubenswrapper[4910]: I0226 22:09:20.901469 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx" Feb 26 22:09:20 crc kubenswrapper[4910]: I0226 22:09:20.902084 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-z5vhp" Feb 26 22:09:20 crc kubenswrapper[4910]: I0226 22:09:20.902234 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx" Feb 26 22:09:21 crc kubenswrapper[4910]: I0226 22:09:21.335341 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-z5vhp"] Feb 26 22:09:21 crc kubenswrapper[4910]: W0226 22:09:21.340223 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99ab363a_bae9_4e7d_9b11_668cbde4a8d3.slice/crio-18027d1ee2e9846379a85ea694bf3f322440bead69cbec2047437be56f03872c WatchSource:0}: Error finding container 18027d1ee2e9846379a85ea694bf3f322440bead69cbec2047437be56f03872c: Status 404 returned error can't find the container with id 18027d1ee2e9846379a85ea694bf3f322440bead69cbec2047437be56f03872c Feb 26 22:09:21 crc kubenswrapper[4910]: I0226 22:09:21.405536 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-z5vhp" event={"ID":"99ab363a-bae9-4e7d-9b11-668cbde4a8d3","Type":"ContainerStarted","Data":"18027d1ee2e9846379a85ea694bf3f322440bead69cbec2047437be56f03872c"} Feb 26 22:09:21 crc kubenswrapper[4910]: I0226 22:09:21.465738 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx"] Feb 26 22:09:21 crc kubenswrapper[4910]: W0226 22:09:21.468619 4910 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod451b4f8d_8570_479f_bb0e_ddbb695bf345.slice/crio-6540dda0402f6acf3a210c63ed7a06a7fcfe89b311cc221ce63532e45e706cb5 WatchSource:0}: Error finding container 6540dda0402f6acf3a210c63ed7a06a7fcfe89b311cc221ce63532e45e706cb5: Status 404 returned error can't find the container with id 6540dda0402f6acf3a210c63ed7a06a7fcfe89b311cc221ce63532e45e706cb5 Feb 26 22:09:21 crc kubenswrapper[4910]: I0226 22:09:21.900880 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf" Feb 26 22:09:21 crc kubenswrapper[4910]: I0226 22:09:21.900896 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qxq9v" Feb 26 22:09:21 crc kubenswrapper[4910]: I0226 22:09:21.901480 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qxq9v" Feb 26 22:09:21 crc kubenswrapper[4910]: I0226 22:09:21.901649 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf" Feb 26 22:09:22 crc kubenswrapper[4910]: I0226 22:09:22.362313 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-qxq9v"] Feb 26 22:09:22 crc kubenswrapper[4910]: I0226 22:09:22.411791 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx" event={"ID":"451b4f8d-8570-479f-bb0e-ddbb695bf345","Type":"ContainerStarted","Data":"6540dda0402f6acf3a210c63ed7a06a7fcfe89b311cc221ce63532e45e706cb5"} Feb 26 22:09:22 crc kubenswrapper[4910]: I0226 22:09:22.413051 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qxq9v" event={"ID":"0bbb4449-bb9e-4d59-9b01-10b3180055c0","Type":"ContainerStarted","Data":"8e11af6aba748bb36b5ab27f72757e81749a78b38e2cbad0ec56b19be3aa55d1"} Feb 26 22:09:22 crc kubenswrapper[4910]: I0226 22:09:22.464150 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf"] Feb 26 22:09:22 crc kubenswrapper[4910]: W0226 22:09:22.469585 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode29777de_7955_4e02_88fd_51c42f732421.slice/crio-7544744a670b31777a92737df07199810978e34adf6e187c692bb33b7f9b36dd WatchSource:0}: Error finding container 7544744a670b31777a92737df07199810978e34adf6e187c692bb33b7f9b36dd: Status 404 returned error can't find the container with id 7544744a670b31777a92737df07199810978e34adf6e187c692bb33b7f9b36dd Feb 26 22:09:23 crc kubenswrapper[4910]: I0226 22:09:23.419774 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf" 
event={"ID":"e29777de-7955-4e02-88fd-51c42f732421","Type":"ContainerStarted","Data":"7544744a670b31777a92737df07199810978e34adf6e187c692bb33b7f9b36dd"} Feb 26 22:09:24 crc kubenswrapper[4910]: I0226 22:09:24.901031 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-gv5qh" Feb 26 22:09:24 crc kubenswrapper[4910]: I0226 22:09:24.901922 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-gv5qh" Feb 26 22:09:28 crc kubenswrapper[4910]: I0226 22:09:28.451181 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-z5vhp" event={"ID":"99ab363a-bae9-4e7d-9b11-668cbde4a8d3","Type":"ContainerStarted","Data":"38e839b43ac4f61e471a3c2948e33c24f7407bf6d08279bef6336684684c3e0f"} Feb 26 22:09:28 crc kubenswrapper[4910]: I0226 22:09:28.451877 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-z5vhp" Feb 26 22:09:28 crc kubenswrapper[4910]: I0226 22:09:28.452636 4910 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-z5vhp container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.13:8081/healthz\": dial tcp 10.217.0.13:8081: connect: connection refused" start-of-body= Feb 26 22:09:28 crc kubenswrapper[4910]: I0226 22:09:28.452686 4910 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-z5vhp" podUID="99ab363a-bae9-4e7d-9b11-668cbde4a8d3" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.13:8081/healthz\": dial tcp 10.217.0.13:8081: connect: connection refused" Feb 26 22:09:28 crc kubenswrapper[4910]: I0226 22:09:28.453849 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf" event={"ID":"e29777de-7955-4e02-88fd-51c42f732421","Type":"ContainerStarted","Data":"3f6761340a8d06c9e129878f7b4cba772e2f34bab6666e8a6842834181aeffad"} Feb 26 22:09:28 crc kubenswrapper[4910]: I0226 22:09:28.456358 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qxq9v" event={"ID":"0bbb4449-bb9e-4d59-9b01-10b3180055c0","Type":"ContainerStarted","Data":"4af1dab84f3c1ac7c2d8d77b0b7eb85cd342575c4e12abaeaa9ac98e228db878"} Feb 26 22:09:28 crc kubenswrapper[4910]: I0226 22:09:28.458360 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx" event={"ID":"451b4f8d-8570-479f-bb0e-ddbb695bf345","Type":"ContainerStarted","Data":"d81296e4b9020ce0c4204a5b1d38ee58d03dac9e1a8de195c1350ab7463a7c54"} Feb 26 22:09:28 crc kubenswrapper[4910]: I0226 22:09:28.477706 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-z5vhp" podStartSLOduration=28.625807848 podStartE2EDuration="35.477690768s" podCreationTimestamp="2026-02-26 22:08:53 +0000 UTC" firstStartedPulling="2026-02-26 22:09:21.343036173 +0000 UTC m=+846.422526704" lastFinishedPulling="2026-02-26 22:09:28.194919083 +0000 UTC m=+853.274409624" observedRunningTime="2026-02-26 22:09:28.476064344 +0000 UTC m=+853.555554895" watchObservedRunningTime="2026-02-26 22:09:28.477690768 +0000 UTC m=+853.557181319" Feb 26 22:09:28 crc kubenswrapper[4910]: I0226 22:09:28.545070 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf" podStartSLOduration=29.862896185 podStartE2EDuration="35.545053451s" podCreationTimestamp="2026-02-26 22:08:53 +0000 UTC" firstStartedPulling="2026-02-26 22:09:22.472869417 +0000 UTC m=+847.552359958" 
lastFinishedPulling="2026-02-26 22:09:28.155026683 +0000 UTC m=+853.234517224" observedRunningTime="2026-02-26 22:09:28.513899188 +0000 UTC m=+853.593389739" watchObservedRunningTime="2026-02-26 22:09:28.545053451 +0000 UTC m=+853.624543992" Feb 26 22:09:28 crc kubenswrapper[4910]: W0226 22:09:28.545693 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4e1f736_965a_4540_b006_4138cb8f08ad.slice/crio-b72b6875d60f82005367d8e9c1d3e8b9320d34177d33cab9e7719e54bc40febb WatchSource:0}: Error finding container b72b6875d60f82005367d8e9c1d3e8b9320d34177d33cab9e7719e54bc40febb: Status 404 returned error can't find the container with id b72b6875d60f82005367d8e9c1d3e8b9320d34177d33cab9e7719e54bc40febb Feb 26 22:09:28 crc kubenswrapper[4910]: I0226 22:09:28.548331 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-gv5qh"] Feb 26 22:09:28 crc kubenswrapper[4910]: I0226 22:09:28.550074 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qxq9v" podStartSLOduration=29.767726058 podStartE2EDuration="35.550058736s" podCreationTimestamp="2026-02-26 22:08:53 +0000 UTC" firstStartedPulling="2026-02-26 22:09:22.373802665 +0000 UTC m=+847.453293226" lastFinishedPulling="2026-02-26 22:09:28.156135363 +0000 UTC m=+853.235625904" observedRunningTime="2026-02-26 22:09:28.540381455 +0000 UTC m=+853.619872016" watchObservedRunningTime="2026-02-26 22:09:28.550058736 +0000 UTC m=+853.629549277" Feb 26 22:09:29 crc kubenswrapper[4910]: I0226 22:09:29.466116 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-gv5qh" event={"ID":"c4e1f736-965a-4540-b006-4138cb8f08ad","Type":"ContainerStarted","Data":"b72b6875d60f82005367d8e9c1d3e8b9320d34177d33cab9e7719e54bc40febb"} Feb 26 22:09:29 crc kubenswrapper[4910]: I0226 22:09:29.468513 
4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-z5vhp" Feb 26 22:09:29 crc kubenswrapper[4910]: I0226 22:09:29.492088 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx" podStartSLOduration=29.83319001 podStartE2EDuration="36.492067356s" podCreationTimestamp="2026-02-26 22:08:53 +0000 UTC" firstStartedPulling="2026-02-26 22:09:21.470804381 +0000 UTC m=+846.550294922" lastFinishedPulling="2026-02-26 22:09:28.129681707 +0000 UTC m=+853.209172268" observedRunningTime="2026-02-26 22:09:28.568740362 +0000 UTC m=+853.648230923" watchObservedRunningTime="2026-02-26 22:09:29.492067356 +0000 UTC m=+854.571557907" Feb 26 22:09:31 crc kubenswrapper[4910]: I0226 22:09:31.480662 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-gv5qh" event={"ID":"c4e1f736-965a-4540-b006-4138cb8f08ad","Type":"ContainerStarted","Data":"8b03660ec0d4d5fc4441767311f22dc4578177868bc7ddfb122e7b369423bf9e"} Feb 26 22:09:31 crc kubenswrapper[4910]: I0226 22:09:31.481063 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-gv5qh" Feb 26 22:09:34 crc kubenswrapper[4910]: I0226 22:09:34.746380 4910 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 26 22:09:39 crc kubenswrapper[4910]: I0226 22:09:39.281440 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-gv5qh" podStartSLOduration=43.677787033 podStartE2EDuration="46.281422213s" podCreationTimestamp="2026-02-26 22:08:53 +0000 UTC" firstStartedPulling="2026-02-26 22:09:28.550485888 +0000 UTC m=+853.629976439" lastFinishedPulling="2026-02-26 22:09:31.154121068 +0000 UTC m=+856.233611619" 
observedRunningTime="2026-02-26 22:09:31.519645763 +0000 UTC m=+856.599136304" watchObservedRunningTime="2026-02-26 22:09:39.281422213 +0000 UTC m=+864.360912764" Feb 26 22:09:39 crc kubenswrapper[4910]: I0226 22:09:39.285402 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xnwqc"] Feb 26 22:09:39 crc kubenswrapper[4910]: I0226 22:09:39.286153 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xnwqc" Feb 26 22:09:39 crc kubenswrapper[4910]: I0226 22:09:39.293911 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 26 22:09:39 crc kubenswrapper[4910]: I0226 22:09:39.294345 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-p8rdf"] Feb 26 22:09:39 crc kubenswrapper[4910]: I0226 22:09:39.294086 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 26 22:09:39 crc kubenswrapper[4910]: I0226 22:09:39.294190 4910 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-gbfhj" Feb 26 22:09:39 crc kubenswrapper[4910]: I0226 22:09:39.295097 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-p8rdf" Feb 26 22:09:39 crc kubenswrapper[4910]: I0226 22:09:39.303202 4910 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-6r8d4" Feb 26 22:09:39 crc kubenswrapper[4910]: I0226 22:09:39.311949 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xnwqc"] Feb 26 22:09:39 crc kubenswrapper[4910]: I0226 22:09:39.326613 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-p8rdf"] Feb 26 22:09:39 crc kubenswrapper[4910]: I0226 22:09:39.335378 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-s5lhf"] Feb 26 22:09:39 crc kubenswrapper[4910]: I0226 22:09:39.336248 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-s5lhf" Feb 26 22:09:39 crc kubenswrapper[4910]: I0226 22:09:39.338338 4910 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-mv6h9" Feb 26 22:09:39 crc kubenswrapper[4910]: I0226 22:09:39.375528 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-s5lhf"] Feb 26 22:09:39 crc kubenswrapper[4910]: I0226 22:09:39.375560 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g46lk\" (UniqueName: \"kubernetes.io/projected/d9ce7039-0688-4b29-9484-5066034d4d02-kube-api-access-g46lk\") pod \"cert-manager-cainjector-cf98fcc89-xnwqc\" (UID: \"d9ce7039-0688-4b29-9484-5066034d4d02\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xnwqc" Feb 26 22:09:39 crc kubenswrapper[4910]: I0226 22:09:39.476781 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fbzw\" (UniqueName: 
\"kubernetes.io/projected/be6cb3a9-88a5-49cc-9843-782d9641b5fe-kube-api-access-9fbzw\") pod \"cert-manager-858654f9db-p8rdf\" (UID: \"be6cb3a9-88a5-49cc-9843-782d9641b5fe\") " pod="cert-manager/cert-manager-858654f9db-p8rdf" Feb 26 22:09:39 crc kubenswrapper[4910]: I0226 22:09:39.476854 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g46lk\" (UniqueName: \"kubernetes.io/projected/d9ce7039-0688-4b29-9484-5066034d4d02-kube-api-access-g46lk\") pod \"cert-manager-cainjector-cf98fcc89-xnwqc\" (UID: \"d9ce7039-0688-4b29-9484-5066034d4d02\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xnwqc" Feb 26 22:09:39 crc kubenswrapper[4910]: I0226 22:09:39.476871 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggqb5\" (UniqueName: \"kubernetes.io/projected/cc433bc4-06d0-4126-8d67-f8caaf596d86-kube-api-access-ggqb5\") pod \"cert-manager-webhook-687f57d79b-s5lhf\" (UID: \"cc433bc4-06d0-4126-8d67-f8caaf596d86\") " pod="cert-manager/cert-manager-webhook-687f57d79b-s5lhf" Feb 26 22:09:39 crc kubenswrapper[4910]: I0226 22:09:39.502682 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g46lk\" (UniqueName: \"kubernetes.io/projected/d9ce7039-0688-4b29-9484-5066034d4d02-kube-api-access-g46lk\") pod \"cert-manager-cainjector-cf98fcc89-xnwqc\" (UID: \"d9ce7039-0688-4b29-9484-5066034d4d02\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xnwqc" Feb 26 22:09:39 crc kubenswrapper[4910]: I0226 22:09:39.578669 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggqb5\" (UniqueName: \"kubernetes.io/projected/cc433bc4-06d0-4126-8d67-f8caaf596d86-kube-api-access-ggqb5\") pod \"cert-manager-webhook-687f57d79b-s5lhf\" (UID: \"cc433bc4-06d0-4126-8d67-f8caaf596d86\") " pod="cert-manager/cert-manager-webhook-687f57d79b-s5lhf" Feb 26 22:09:39 crc kubenswrapper[4910]: I0226 
22:09:39.578831 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fbzw\" (UniqueName: \"kubernetes.io/projected/be6cb3a9-88a5-49cc-9843-782d9641b5fe-kube-api-access-9fbzw\") pod \"cert-manager-858654f9db-p8rdf\" (UID: \"be6cb3a9-88a5-49cc-9843-782d9641b5fe\") " pod="cert-manager/cert-manager-858654f9db-p8rdf" Feb 26 22:09:39 crc kubenswrapper[4910]: I0226 22:09:39.607822 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fbzw\" (UniqueName: \"kubernetes.io/projected/be6cb3a9-88a5-49cc-9843-782d9641b5fe-kube-api-access-9fbzw\") pod \"cert-manager-858654f9db-p8rdf\" (UID: \"be6cb3a9-88a5-49cc-9843-782d9641b5fe\") " pod="cert-manager/cert-manager-858654f9db-p8rdf" Feb 26 22:09:39 crc kubenswrapper[4910]: I0226 22:09:39.608987 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xnwqc" Feb 26 22:09:39 crc kubenswrapper[4910]: I0226 22:09:39.609292 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggqb5\" (UniqueName: \"kubernetes.io/projected/cc433bc4-06d0-4126-8d67-f8caaf596d86-kube-api-access-ggqb5\") pod \"cert-manager-webhook-687f57d79b-s5lhf\" (UID: \"cc433bc4-06d0-4126-8d67-f8caaf596d86\") " pod="cert-manager/cert-manager-webhook-687f57d79b-s5lhf" Feb 26 22:09:39 crc kubenswrapper[4910]: I0226 22:09:39.621628 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-p8rdf" Feb 26 22:09:39 crc kubenswrapper[4910]: I0226 22:09:39.655245 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-s5lhf" Feb 26 22:09:39 crc kubenswrapper[4910]: W0226 22:09:39.976299 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc433bc4_06d0_4126_8d67_f8caaf596d86.slice/crio-506caeed480c70f38717270dd62cfef39c5ef6f28f9706c02aa4261bbd413c0c WatchSource:0}: Error finding container 506caeed480c70f38717270dd62cfef39c5ef6f28f9706c02aa4261bbd413c0c: Status 404 returned error can't find the container with id 506caeed480c70f38717270dd62cfef39c5ef6f28f9706c02aa4261bbd413c0c Feb 26 22:09:39 crc kubenswrapper[4910]: I0226 22:09:39.979048 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-s5lhf"] Feb 26 22:09:40 crc kubenswrapper[4910]: I0226 22:09:40.069933 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-p8rdf"] Feb 26 22:09:40 crc kubenswrapper[4910]: W0226 22:09:40.074247 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe6cb3a9_88a5_49cc_9843_782d9641b5fe.slice/crio-dcc7f919b61283ba45245a8ef0ffb83a98617b707078fd8e58422097335fc54a WatchSource:0}: Error finding container dcc7f919b61283ba45245a8ef0ffb83a98617b707078fd8e58422097335fc54a: Status 404 returned error can't find the container with id dcc7f919b61283ba45245a8ef0ffb83a98617b707078fd8e58422097335fc54a Feb 26 22:09:40 crc kubenswrapper[4910]: I0226 22:09:40.081621 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xnwqc"] Feb 26 22:09:40 crc kubenswrapper[4910]: I0226 22:09:40.540897 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-s5lhf" 
event={"ID":"cc433bc4-06d0-4126-8d67-f8caaf596d86","Type":"ContainerStarted","Data":"506caeed480c70f38717270dd62cfef39c5ef6f28f9706c02aa4261bbd413c0c"} Feb 26 22:09:40 crc kubenswrapper[4910]: I0226 22:09:40.542324 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-p8rdf" event={"ID":"be6cb3a9-88a5-49cc-9843-782d9641b5fe","Type":"ContainerStarted","Data":"dcc7f919b61283ba45245a8ef0ffb83a98617b707078fd8e58422097335fc54a"} Feb 26 22:09:40 crc kubenswrapper[4910]: I0226 22:09:40.545079 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xnwqc" event={"ID":"d9ce7039-0688-4b29-9484-5066034d4d02","Type":"ContainerStarted","Data":"bae544d47947b5e11224eef83f78426066de0101f7cb813ec392f5725a3f260a"} Feb 26 22:09:43 crc kubenswrapper[4910]: I0226 22:09:43.873217 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-gv5qh" Feb 26 22:09:44 crc kubenswrapper[4910]: I0226 22:09:44.588428 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-s5lhf" event={"ID":"cc433bc4-06d0-4126-8d67-f8caaf596d86","Type":"ContainerStarted","Data":"272c84e443eab5bf29244c998eea173ec59d88632c2f22daa9d98a3776cfd930"} Feb 26 22:09:44 crc kubenswrapper[4910]: I0226 22:09:44.588583 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-s5lhf" Feb 26 22:09:44 crc kubenswrapper[4910]: I0226 22:09:44.591213 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-p8rdf" event={"ID":"be6cb3a9-88a5-49cc-9843-782d9641b5fe","Type":"ContainerStarted","Data":"63fb07aaa0fa4c0684bf4b27a027ad939a0cf45832d08b79d618f281935888bd"} Feb 26 22:09:44 crc kubenswrapper[4910]: I0226 22:09:44.595211 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-xnwqc" event={"ID":"d9ce7039-0688-4b29-9484-5066034d4d02","Type":"ContainerStarted","Data":"41e6e469285ab833662d364ebd6683166d01624982d2debd3ca85a1beb63dec9"} Feb 26 22:09:44 crc kubenswrapper[4910]: I0226 22:09:44.615436 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-s5lhf" podStartSLOduration=1.9005368329999999 podStartE2EDuration="5.615414245s" podCreationTimestamp="2026-02-26 22:09:39 +0000 UTC" firstStartedPulling="2026-02-26 22:09:39.978490513 +0000 UTC m=+865.057981074" lastFinishedPulling="2026-02-26 22:09:43.693367935 +0000 UTC m=+868.772858486" observedRunningTime="2026-02-26 22:09:44.609775161 +0000 UTC m=+869.689265702" watchObservedRunningTime="2026-02-26 22:09:44.615414245 +0000 UTC m=+869.694904826" Feb 26 22:09:44 crc kubenswrapper[4910]: I0226 22:09:44.639008 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-p8rdf" podStartSLOduration=2.015106924 podStartE2EDuration="5.638985752s" podCreationTimestamp="2026-02-26 22:09:39 +0000 UTC" firstStartedPulling="2026-02-26 22:09:40.076382443 +0000 UTC m=+865.155872984" lastFinishedPulling="2026-02-26 22:09:43.700261241 +0000 UTC m=+868.779751812" observedRunningTime="2026-02-26 22:09:44.636887456 +0000 UTC m=+869.716378027" watchObservedRunningTime="2026-02-26 22:09:44.638985752 +0000 UTC m=+869.718476303" Feb 26 22:09:44 crc kubenswrapper[4910]: I0226 22:09:44.661846 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xnwqc" podStartSLOduration=2.054412479 podStartE2EDuration="5.661816451s" podCreationTimestamp="2026-02-26 22:09:39 +0000 UTC" firstStartedPulling="2026-02-26 22:09:40.082034696 +0000 UTC m=+865.161525237" lastFinishedPulling="2026-02-26 22:09:43.689438628 +0000 UTC m=+868.768929209" observedRunningTime="2026-02-26 
22:09:44.656896647 +0000 UTC m=+869.736387228" watchObservedRunningTime="2026-02-26 22:09:44.661816451 +0000 UTC m=+869.741307042" Feb 26 22:09:49 crc kubenswrapper[4910]: I0226 22:09:49.658933 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-s5lhf" Feb 26 22:09:55 crc kubenswrapper[4910]: I0226 22:09:55.727337 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:09:55 crc kubenswrapper[4910]: I0226 22:09:55.727758 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:10:00 crc kubenswrapper[4910]: I0226 22:10:00.137715 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535730-p75th"] Feb 26 22:10:00 crc kubenswrapper[4910]: I0226 22:10:00.139103 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535730-p75th" Feb 26 22:10:00 crc kubenswrapper[4910]: I0226 22:10:00.142588 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 22:10:00 crc kubenswrapper[4910]: I0226 22:10:00.144448 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 22:10:00 crc kubenswrapper[4910]: I0226 22:10:00.145319 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 22:10:00 crc kubenswrapper[4910]: I0226 22:10:00.151796 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535730-p75th"] Feb 26 22:10:00 crc kubenswrapper[4910]: I0226 22:10:00.247236 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbpwd\" (UniqueName: \"kubernetes.io/projected/72aeffcb-4b89-42c4-8be7-3ce49742f95d-kube-api-access-tbpwd\") pod \"auto-csr-approver-29535730-p75th\" (UID: \"72aeffcb-4b89-42c4-8be7-3ce49742f95d\") " pod="openshift-infra/auto-csr-approver-29535730-p75th" Feb 26 22:10:00 crc kubenswrapper[4910]: I0226 22:10:00.348830 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbpwd\" (UniqueName: \"kubernetes.io/projected/72aeffcb-4b89-42c4-8be7-3ce49742f95d-kube-api-access-tbpwd\") pod \"auto-csr-approver-29535730-p75th\" (UID: \"72aeffcb-4b89-42c4-8be7-3ce49742f95d\") " pod="openshift-infra/auto-csr-approver-29535730-p75th" Feb 26 22:10:00 crc kubenswrapper[4910]: I0226 22:10:00.379490 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbpwd\" (UniqueName: \"kubernetes.io/projected/72aeffcb-4b89-42c4-8be7-3ce49742f95d-kube-api-access-tbpwd\") pod \"auto-csr-approver-29535730-p75th\" (UID: \"72aeffcb-4b89-42c4-8be7-3ce49742f95d\") " 
pod="openshift-infra/auto-csr-approver-29535730-p75th" Feb 26 22:10:00 crc kubenswrapper[4910]: I0226 22:10:00.470888 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535730-p75th" Feb 26 22:10:01 crc kubenswrapper[4910]: I0226 22:10:01.003695 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535730-p75th"] Feb 26 22:10:01 crc kubenswrapper[4910]: W0226 22:10:01.007377 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72aeffcb_4b89_42c4_8be7_3ce49742f95d.slice/crio-4d8a9ecd9c107b562dc8265c078d7d17c123e205fbbf6a8f77e3f640be12b7af WatchSource:0}: Error finding container 4d8a9ecd9c107b562dc8265c078d7d17c123e205fbbf6a8f77e3f640be12b7af: Status 404 returned error can't find the container with id 4d8a9ecd9c107b562dc8265c078d7d17c123e205fbbf6a8f77e3f640be12b7af Feb 26 22:10:01 crc kubenswrapper[4910]: I0226 22:10:01.718570 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535730-p75th" event={"ID":"72aeffcb-4b89-42c4-8be7-3ce49742f95d","Type":"ContainerStarted","Data":"4d8a9ecd9c107b562dc8265c078d7d17c123e205fbbf6a8f77e3f640be12b7af"} Feb 26 22:10:03 crc kubenswrapper[4910]: I0226 22:10:03.737845 4910 generic.go:334] "Generic (PLEG): container finished" podID="72aeffcb-4b89-42c4-8be7-3ce49742f95d" containerID="fb116b15d2b2cca3b1a55cde8eb00390b3eb4a2e1784f04d2cafb2a52daab63a" exitCode=0 Feb 26 22:10:03 crc kubenswrapper[4910]: I0226 22:10:03.737949 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535730-p75th" event={"ID":"72aeffcb-4b89-42c4-8be7-3ce49742f95d","Type":"ContainerDied","Data":"fb116b15d2b2cca3b1a55cde8eb00390b3eb4a2e1784f04d2cafb2a52daab63a"} Feb 26 22:10:05 crc kubenswrapper[4910]: I0226 22:10:05.036472 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535730-p75th" Feb 26 22:10:05 crc kubenswrapper[4910]: I0226 22:10:05.117969 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbpwd\" (UniqueName: \"kubernetes.io/projected/72aeffcb-4b89-42c4-8be7-3ce49742f95d-kube-api-access-tbpwd\") pod \"72aeffcb-4b89-42c4-8be7-3ce49742f95d\" (UID: \"72aeffcb-4b89-42c4-8be7-3ce49742f95d\") " Feb 26 22:10:05 crc kubenswrapper[4910]: I0226 22:10:05.126082 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72aeffcb-4b89-42c4-8be7-3ce49742f95d-kube-api-access-tbpwd" (OuterVolumeSpecName: "kube-api-access-tbpwd") pod "72aeffcb-4b89-42c4-8be7-3ce49742f95d" (UID: "72aeffcb-4b89-42c4-8be7-3ce49742f95d"). InnerVolumeSpecName "kube-api-access-tbpwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:10:05 crc kubenswrapper[4910]: I0226 22:10:05.219772 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbpwd\" (UniqueName: \"kubernetes.io/projected/72aeffcb-4b89-42c4-8be7-3ce49742f95d-kube-api-access-tbpwd\") on node \"crc\" DevicePath \"\"" Feb 26 22:10:05 crc kubenswrapper[4910]: I0226 22:10:05.757253 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535730-p75th" event={"ID":"72aeffcb-4b89-42c4-8be7-3ce49742f95d","Type":"ContainerDied","Data":"4d8a9ecd9c107b562dc8265c078d7d17c123e205fbbf6a8f77e3f640be12b7af"} Feb 26 22:10:05 crc kubenswrapper[4910]: I0226 22:10:05.757321 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d8a9ecd9c107b562dc8265c078d7d17c123e205fbbf6a8f77e3f640be12b7af" Feb 26 22:10:05 crc kubenswrapper[4910]: I0226 22:10:05.757398 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535730-p75th" Feb 26 22:10:06 crc kubenswrapper[4910]: I0226 22:10:06.106706 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535724-9h82v"] Feb 26 22:10:06 crc kubenswrapper[4910]: I0226 22:10:06.113932 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535724-9h82v"] Feb 26 22:10:07 crc kubenswrapper[4910]: I0226 22:10:07.914268 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="416e31f3-d776-49a3-87b4-93e1b5224277" path="/var/lib/kubelet/pods/416e31f3-d776-49a3-87b4-93e1b5224277/volumes" Feb 26 22:10:12 crc kubenswrapper[4910]: I0226 22:10:12.904025 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds"] Feb 26 22:10:12 crc kubenswrapper[4910]: E0226 22:10:12.904359 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72aeffcb-4b89-42c4-8be7-3ce49742f95d" containerName="oc" Feb 26 22:10:12 crc kubenswrapper[4910]: I0226 22:10:12.904379 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="72aeffcb-4b89-42c4-8be7-3ce49742f95d" containerName="oc" Feb 26 22:10:12 crc kubenswrapper[4910]: I0226 22:10:12.904559 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="72aeffcb-4b89-42c4-8be7-3ce49742f95d" containerName="oc" Feb 26 22:10:12 crc kubenswrapper[4910]: I0226 22:10:12.905850 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds" Feb 26 22:10:12 crc kubenswrapper[4910]: I0226 22:10:12.909117 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 26 22:10:12 crc kubenswrapper[4910]: I0226 22:10:12.919873 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds"] Feb 26 22:10:13 crc kubenswrapper[4910]: I0226 22:10:13.032505 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c1b6f45-ddca-4044-8f64-46f03abaa37a-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds\" (UID: \"5c1b6f45-ddca-4044-8f64-46f03abaa37a\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds" Feb 26 22:10:13 crc kubenswrapper[4910]: I0226 22:10:13.033142 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xb9c\" (UniqueName: \"kubernetes.io/projected/5c1b6f45-ddca-4044-8f64-46f03abaa37a-kube-api-access-6xb9c\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds\" (UID: \"5c1b6f45-ddca-4044-8f64-46f03abaa37a\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds" Feb 26 22:10:13 crc kubenswrapper[4910]: I0226 22:10:13.033287 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c1b6f45-ddca-4044-8f64-46f03abaa37a-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds\" (UID: \"5c1b6f45-ddca-4044-8f64-46f03abaa37a\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds" Feb 26 22:10:13 crc kubenswrapper[4910]: 
I0226 22:10:13.135107 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c1b6f45-ddca-4044-8f64-46f03abaa37a-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds\" (UID: \"5c1b6f45-ddca-4044-8f64-46f03abaa37a\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds" Feb 26 22:10:13 crc kubenswrapper[4910]: I0226 22:10:13.135210 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xb9c\" (UniqueName: \"kubernetes.io/projected/5c1b6f45-ddca-4044-8f64-46f03abaa37a-kube-api-access-6xb9c\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds\" (UID: \"5c1b6f45-ddca-4044-8f64-46f03abaa37a\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds" Feb 26 22:10:13 crc kubenswrapper[4910]: I0226 22:10:13.135251 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c1b6f45-ddca-4044-8f64-46f03abaa37a-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds\" (UID: \"5c1b6f45-ddca-4044-8f64-46f03abaa37a\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds" Feb 26 22:10:13 crc kubenswrapper[4910]: I0226 22:10:13.135758 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c1b6f45-ddca-4044-8f64-46f03abaa37a-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds\" (UID: \"5c1b6f45-ddca-4044-8f64-46f03abaa37a\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds" Feb 26 22:10:13 crc kubenswrapper[4910]: I0226 22:10:13.136013 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/5c1b6f45-ddca-4044-8f64-46f03abaa37a-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds\" (UID: \"5c1b6f45-ddca-4044-8f64-46f03abaa37a\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds" Feb 26 22:10:13 crc kubenswrapper[4910]: I0226 22:10:13.158827 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xb9c\" (UniqueName: \"kubernetes.io/projected/5c1b6f45-ddca-4044-8f64-46f03abaa37a-kube-api-access-6xb9c\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds\" (UID: \"5c1b6f45-ddca-4044-8f64-46f03abaa37a\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds" Feb 26 22:10:13 crc kubenswrapper[4910]: I0226 22:10:13.236269 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds" Feb 26 22:10:13 crc kubenswrapper[4910]: I0226 22:10:13.494539 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds"] Feb 26 22:10:13 crc kubenswrapper[4910]: I0226 22:10:13.817661 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds" event={"ID":"5c1b6f45-ddca-4044-8f64-46f03abaa37a","Type":"ContainerStarted","Data":"e119a726b89c3919ece61f6eb3d94810b6832caf834c96ce1db7f3af0676747c"} Feb 26 22:10:14 crc kubenswrapper[4910]: I0226 22:10:14.826111 4910 generic.go:334] "Generic (PLEG): container finished" podID="5c1b6f45-ddca-4044-8f64-46f03abaa37a" containerID="baf9006f797871f26813f24976da46be0c38efb3398f4317966d8819ee8aa9e7" exitCode=0 Feb 26 22:10:14 crc kubenswrapper[4910]: I0226 22:10:14.826150 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds" event={"ID":"5c1b6f45-ddca-4044-8f64-46f03abaa37a","Type":"ContainerDied","Data":"baf9006f797871f26813f24976da46be0c38efb3398f4317966d8819ee8aa9e7"} Feb 26 22:10:14 crc kubenswrapper[4910]: I0226 22:10:14.960603 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 26 22:10:14 crc kubenswrapper[4910]: I0226 22:10:14.961555 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 26 22:10:14 crc kubenswrapper[4910]: I0226 22:10:14.965751 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 26 22:10:14 crc kubenswrapper[4910]: I0226 22:10:14.966966 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 26 22:10:14 crc kubenswrapper[4910]: I0226 22:10:14.974147 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 26 22:10:15 crc kubenswrapper[4910]: I0226 22:10:15.058591 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m62n6\" (UniqueName: \"kubernetes.io/projected/52c7873b-500f-4a94-9c6e-92220acb6ed4-kube-api-access-m62n6\") pod \"minio\" (UID: \"52c7873b-500f-4a94-9c6e-92220acb6ed4\") " pod="minio-dev/minio" Feb 26 22:10:15 crc kubenswrapper[4910]: I0226 22:10:15.058726 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f9a5ba47-c634-4648-b489-f9a61ce6c396\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f9a5ba47-c634-4648-b489-f9a61ce6c396\") pod \"minio\" (UID: \"52c7873b-500f-4a94-9c6e-92220acb6ed4\") " pod="minio-dev/minio" Feb 26 22:10:15 crc kubenswrapper[4910]: I0226 22:10:15.159456 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m62n6\" (UniqueName: 
\"kubernetes.io/projected/52c7873b-500f-4a94-9c6e-92220acb6ed4-kube-api-access-m62n6\") pod \"minio\" (UID: \"52c7873b-500f-4a94-9c6e-92220acb6ed4\") " pod="minio-dev/minio" Feb 26 22:10:15 crc kubenswrapper[4910]: I0226 22:10:15.159520 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f9a5ba47-c634-4648-b489-f9a61ce6c396\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f9a5ba47-c634-4648-b489-f9a61ce6c396\") pod \"minio\" (UID: \"52c7873b-500f-4a94-9c6e-92220acb6ed4\") " pod="minio-dev/minio" Feb 26 22:10:15 crc kubenswrapper[4910]: I0226 22:10:15.162573 4910 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 22:10:15 crc kubenswrapper[4910]: I0226 22:10:15.162756 4910 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f9a5ba47-c634-4648-b489-f9a61ce6c396\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f9a5ba47-c634-4648-b489-f9a61ce6c396\") pod \"minio\" (UID: \"52c7873b-500f-4a94-9c6e-92220acb6ed4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3f95de1534ad4d6fec07321bf6eaabeb9ebbd717a30e1149abbb76b6ff359817/globalmount\"" pod="minio-dev/minio" Feb 26 22:10:15 crc kubenswrapper[4910]: I0226 22:10:15.194840 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m62n6\" (UniqueName: \"kubernetes.io/projected/52c7873b-500f-4a94-9c6e-92220acb6ed4-kube-api-access-m62n6\") pod \"minio\" (UID: \"52c7873b-500f-4a94-9c6e-92220acb6ed4\") " pod="minio-dev/minio" Feb 26 22:10:15 crc kubenswrapper[4910]: I0226 22:10:15.196757 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f9a5ba47-c634-4648-b489-f9a61ce6c396\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f9a5ba47-c634-4648-b489-f9a61ce6c396\") pod \"minio\" (UID: 
\"52c7873b-500f-4a94-9c6e-92220acb6ed4\") " pod="minio-dev/minio" Feb 26 22:10:15 crc kubenswrapper[4910]: I0226 22:10:15.254773 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s8t5g"] Feb 26 22:10:15 crc kubenswrapper[4910]: I0226 22:10:15.256646 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s8t5g" Feb 26 22:10:15 crc kubenswrapper[4910]: I0226 22:10:15.262483 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s8t5g"] Feb 26 22:10:15 crc kubenswrapper[4910]: I0226 22:10:15.323842 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 26 22:10:15 crc kubenswrapper[4910]: I0226 22:10:15.361614 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9714152-a1fb-4b2e-848e-faac6734e81a-utilities\") pod \"redhat-operators-s8t5g\" (UID: \"c9714152-a1fb-4b2e-848e-faac6734e81a\") " pod="openshift-marketplace/redhat-operators-s8t5g" Feb 26 22:10:15 crc kubenswrapper[4910]: I0226 22:10:15.361970 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzp4v\" (UniqueName: \"kubernetes.io/projected/c9714152-a1fb-4b2e-848e-faac6734e81a-kube-api-access-jzp4v\") pod \"redhat-operators-s8t5g\" (UID: \"c9714152-a1fb-4b2e-848e-faac6734e81a\") " pod="openshift-marketplace/redhat-operators-s8t5g" Feb 26 22:10:15 crc kubenswrapper[4910]: I0226 22:10:15.362034 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9714152-a1fb-4b2e-848e-faac6734e81a-catalog-content\") pod \"redhat-operators-s8t5g\" (UID: \"c9714152-a1fb-4b2e-848e-faac6734e81a\") " pod="openshift-marketplace/redhat-operators-s8t5g" Feb 26 22:10:15 
crc kubenswrapper[4910]: I0226 22:10:15.463141 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzp4v\" (UniqueName: \"kubernetes.io/projected/c9714152-a1fb-4b2e-848e-faac6734e81a-kube-api-access-jzp4v\") pod \"redhat-operators-s8t5g\" (UID: \"c9714152-a1fb-4b2e-848e-faac6734e81a\") " pod="openshift-marketplace/redhat-operators-s8t5g" Feb 26 22:10:15 crc kubenswrapper[4910]: I0226 22:10:15.463234 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9714152-a1fb-4b2e-848e-faac6734e81a-catalog-content\") pod \"redhat-operators-s8t5g\" (UID: \"c9714152-a1fb-4b2e-848e-faac6734e81a\") " pod="openshift-marketplace/redhat-operators-s8t5g" Feb 26 22:10:15 crc kubenswrapper[4910]: I0226 22:10:15.463255 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9714152-a1fb-4b2e-848e-faac6734e81a-utilities\") pod \"redhat-operators-s8t5g\" (UID: \"c9714152-a1fb-4b2e-848e-faac6734e81a\") " pod="openshift-marketplace/redhat-operators-s8t5g" Feb 26 22:10:15 crc kubenswrapper[4910]: I0226 22:10:15.463654 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9714152-a1fb-4b2e-848e-faac6734e81a-utilities\") pod \"redhat-operators-s8t5g\" (UID: \"c9714152-a1fb-4b2e-848e-faac6734e81a\") " pod="openshift-marketplace/redhat-operators-s8t5g" Feb 26 22:10:15 crc kubenswrapper[4910]: I0226 22:10:15.463757 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9714152-a1fb-4b2e-848e-faac6734e81a-catalog-content\") pod \"redhat-operators-s8t5g\" (UID: \"c9714152-a1fb-4b2e-848e-faac6734e81a\") " pod="openshift-marketplace/redhat-operators-s8t5g" Feb 26 22:10:15 crc kubenswrapper[4910]: I0226 22:10:15.497060 4910 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzp4v\" (UniqueName: \"kubernetes.io/projected/c9714152-a1fb-4b2e-848e-faac6734e81a-kube-api-access-jzp4v\") pod \"redhat-operators-s8t5g\" (UID: \"c9714152-a1fb-4b2e-848e-faac6734e81a\") " pod="openshift-marketplace/redhat-operators-s8t5g" Feb 26 22:10:15 crc kubenswrapper[4910]: I0226 22:10:15.579239 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s8t5g" Feb 26 22:10:15 crc kubenswrapper[4910]: I0226 22:10:15.588455 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 26 22:10:15 crc kubenswrapper[4910]: W0226 22:10:15.596832 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52c7873b_500f_4a94_9c6e_92220acb6ed4.slice/crio-66de8d2d4f6ba0a124fe4233949d4cdc99628d0c727883ae22de03835af166be WatchSource:0}: Error finding container 66de8d2d4f6ba0a124fe4233949d4cdc99628d0c727883ae22de03835af166be: Status 404 returned error can't find the container with id 66de8d2d4f6ba0a124fe4233949d4cdc99628d0c727883ae22de03835af166be Feb 26 22:10:15 crc kubenswrapper[4910]: I0226 22:10:15.761682 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s8t5g"] Feb 26 22:10:15 crc kubenswrapper[4910]: W0226 22:10:15.769260 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9714152_a1fb_4b2e_848e_faac6734e81a.slice/crio-3b9c23f3c13a0f7a4bf45481324f51e29d4dea2e307c5912370fd388ab559e70 WatchSource:0}: Error finding container 3b9c23f3c13a0f7a4bf45481324f51e29d4dea2e307c5912370fd388ab559e70: Status 404 returned error can't find the container with id 3b9c23f3c13a0f7a4bf45481324f51e29d4dea2e307c5912370fd388ab559e70 Feb 26 22:10:15 crc kubenswrapper[4910]: I0226 22:10:15.837445 4910 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8t5g" event={"ID":"c9714152-a1fb-4b2e-848e-faac6734e81a","Type":"ContainerStarted","Data":"3b9c23f3c13a0f7a4bf45481324f51e29d4dea2e307c5912370fd388ab559e70"} Feb 26 22:10:15 crc kubenswrapper[4910]: I0226 22:10:15.838759 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"52c7873b-500f-4a94-9c6e-92220acb6ed4","Type":"ContainerStarted","Data":"66de8d2d4f6ba0a124fe4233949d4cdc99628d0c727883ae22de03835af166be"} Feb 26 22:10:16 crc kubenswrapper[4910]: E0226 22:10:16.038212 4910 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9714152_a1fb_4b2e_848e_faac6734e81a.slice/crio-conmon-6376f3075b8dac7e20a423d13e8a6ac034c514b5c7c9846c4c527e5463d6abaa.scope\": RecentStats: unable to find data in memory cache]" Feb 26 22:10:16 crc kubenswrapper[4910]: I0226 22:10:16.613655 4910 scope.go:117] "RemoveContainer" containerID="f061dfcf5c00d7aee7ab315a83af62774581b9f473a6ef1c82b63f92bc067538" Feb 26 22:10:16 crc kubenswrapper[4910]: I0226 22:10:16.858643 4910 generic.go:334] "Generic (PLEG): container finished" podID="c9714152-a1fb-4b2e-848e-faac6734e81a" containerID="6376f3075b8dac7e20a423d13e8a6ac034c514b5c7c9846c4c527e5463d6abaa" exitCode=0 Feb 26 22:10:16 crc kubenswrapper[4910]: I0226 22:10:16.858708 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8t5g" event={"ID":"c9714152-a1fb-4b2e-848e-faac6734e81a","Type":"ContainerDied","Data":"6376f3075b8dac7e20a423d13e8a6ac034c514b5c7c9846c4c527e5463d6abaa"} Feb 26 22:10:16 crc kubenswrapper[4910]: I0226 22:10:16.861441 4910 generic.go:334] "Generic (PLEG): container finished" podID="5c1b6f45-ddca-4044-8f64-46f03abaa37a" containerID="6248aeaa3337ff9593bf39442ad4e035e49890dcc3e3f584795fe25b742106ce" exitCode=0 Feb 26 22:10:16 crc kubenswrapper[4910]: 
I0226 22:10:16.861468 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds" event={"ID":"5c1b6f45-ddca-4044-8f64-46f03abaa37a","Type":"ContainerDied","Data":"6248aeaa3337ff9593bf39442ad4e035e49890dcc3e3f584795fe25b742106ce"} Feb 26 22:10:19 crc kubenswrapper[4910]: I0226 22:10:19.908637 4910 generic.go:334] "Generic (PLEG): container finished" podID="5c1b6f45-ddca-4044-8f64-46f03abaa37a" containerID="cedc430fbecf09f79b408c39119ce593a1993183dfea4611233e722088d8662b" exitCode=0 Feb 26 22:10:19 crc kubenswrapper[4910]: I0226 22:10:19.919528 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8t5g" event={"ID":"c9714152-a1fb-4b2e-848e-faac6734e81a","Type":"ContainerStarted","Data":"50a6b7581d6f8c5db3f150137768f1314de3ace62409cac52a0b448fe3d01819"} Feb 26 22:10:19 crc kubenswrapper[4910]: I0226 22:10:19.919592 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"52c7873b-500f-4a94-9c6e-92220acb6ed4","Type":"ContainerStarted","Data":"2f574bc8be351d83634aa71315b4842f44278fe4a810883926cae070db1a0ba1"} Feb 26 22:10:19 crc kubenswrapper[4910]: I0226 22:10:19.919613 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds" event={"ID":"5c1b6f45-ddca-4044-8f64-46f03abaa37a","Type":"ContainerDied","Data":"cedc430fbecf09f79b408c39119ce593a1993183dfea4611233e722088d8662b"} Feb 26 22:10:19 crc kubenswrapper[4910]: I0226 22:10:19.948527 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.440876836 podStartE2EDuration="7.948491158s" podCreationTimestamp="2026-02-26 22:10:12 +0000 UTC" firstStartedPulling="2026-02-26 22:10:15.600469077 +0000 UTC m=+900.679959628" lastFinishedPulling="2026-02-26 22:10:19.108083409 +0000 UTC m=+904.187573950" 
observedRunningTime="2026-02-26 22:10:19.941039876 +0000 UTC m=+905.020530417" watchObservedRunningTime="2026-02-26 22:10:19.948491158 +0000 UTC m=+905.027981739" Feb 26 22:10:20 crc kubenswrapper[4910]: I0226 22:10:20.920958 4910 generic.go:334] "Generic (PLEG): container finished" podID="c9714152-a1fb-4b2e-848e-faac6734e81a" containerID="50a6b7581d6f8c5db3f150137768f1314de3ace62409cac52a0b448fe3d01819" exitCode=0 Feb 26 22:10:20 crc kubenswrapper[4910]: I0226 22:10:20.921106 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8t5g" event={"ID":"c9714152-a1fb-4b2e-848e-faac6734e81a","Type":"ContainerDied","Data":"50a6b7581d6f8c5db3f150137768f1314de3ace62409cac52a0b448fe3d01819"} Feb 26 22:10:21 crc kubenswrapper[4910]: I0226 22:10:21.273239 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds" Feb 26 22:10:21 crc kubenswrapper[4910]: I0226 22:10:21.341866 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xb9c\" (UniqueName: \"kubernetes.io/projected/5c1b6f45-ddca-4044-8f64-46f03abaa37a-kube-api-access-6xb9c\") pod \"5c1b6f45-ddca-4044-8f64-46f03abaa37a\" (UID: \"5c1b6f45-ddca-4044-8f64-46f03abaa37a\") " Feb 26 22:10:21 crc kubenswrapper[4910]: I0226 22:10:21.341966 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c1b6f45-ddca-4044-8f64-46f03abaa37a-bundle\") pod \"5c1b6f45-ddca-4044-8f64-46f03abaa37a\" (UID: \"5c1b6f45-ddca-4044-8f64-46f03abaa37a\") " Feb 26 22:10:21 crc kubenswrapper[4910]: I0226 22:10:21.342017 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c1b6f45-ddca-4044-8f64-46f03abaa37a-util\") pod \"5c1b6f45-ddca-4044-8f64-46f03abaa37a\" (UID: 
\"5c1b6f45-ddca-4044-8f64-46f03abaa37a\") " Feb 26 22:10:21 crc kubenswrapper[4910]: I0226 22:10:21.343053 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c1b6f45-ddca-4044-8f64-46f03abaa37a-bundle" (OuterVolumeSpecName: "bundle") pod "5c1b6f45-ddca-4044-8f64-46f03abaa37a" (UID: "5c1b6f45-ddca-4044-8f64-46f03abaa37a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:10:21 crc kubenswrapper[4910]: I0226 22:10:21.349493 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c1b6f45-ddca-4044-8f64-46f03abaa37a-kube-api-access-6xb9c" (OuterVolumeSpecName: "kube-api-access-6xb9c") pod "5c1b6f45-ddca-4044-8f64-46f03abaa37a" (UID: "5c1b6f45-ddca-4044-8f64-46f03abaa37a"). InnerVolumeSpecName "kube-api-access-6xb9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:10:21 crc kubenswrapper[4910]: I0226 22:10:21.352746 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c1b6f45-ddca-4044-8f64-46f03abaa37a-util" (OuterVolumeSpecName: "util") pod "5c1b6f45-ddca-4044-8f64-46f03abaa37a" (UID: "5c1b6f45-ddca-4044-8f64-46f03abaa37a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:10:21 crc kubenswrapper[4910]: I0226 22:10:21.443123 4910 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c1b6f45-ddca-4044-8f64-46f03abaa37a-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:10:21 crc kubenswrapper[4910]: I0226 22:10:21.443269 4910 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c1b6f45-ddca-4044-8f64-46f03abaa37a-util\") on node \"crc\" DevicePath \"\"" Feb 26 22:10:21 crc kubenswrapper[4910]: I0226 22:10:21.443295 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xb9c\" (UniqueName: \"kubernetes.io/projected/5c1b6f45-ddca-4044-8f64-46f03abaa37a-kube-api-access-6xb9c\") on node \"crc\" DevicePath \"\"" Feb 26 22:10:21 crc kubenswrapper[4910]: I0226 22:10:21.933036 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds" event={"ID":"5c1b6f45-ddca-4044-8f64-46f03abaa37a","Type":"ContainerDied","Data":"e119a726b89c3919ece61f6eb3d94810b6832caf834c96ce1db7f3af0676747c"} Feb 26 22:10:21 crc kubenswrapper[4910]: I0226 22:10:21.933083 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e119a726b89c3919ece61f6eb3d94810b6832caf834c96ce1db7f3af0676747c" Feb 26 22:10:21 crc kubenswrapper[4910]: I0226 22:10:21.935096 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds" Feb 26 22:10:21 crc kubenswrapper[4910]: I0226 22:10:21.936101 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8t5g" event={"ID":"c9714152-a1fb-4b2e-848e-faac6734e81a","Type":"ContainerStarted","Data":"b6d571fa922a017940dcc7af01fc2b48ac2636d5245d3c1421765c31d6b6689f"} Feb 26 22:10:21 crc kubenswrapper[4910]: I0226 22:10:21.970683 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s8t5g" podStartSLOduration=3.026457169 podStartE2EDuration="6.970660028s" podCreationTimestamp="2026-02-26 22:10:15 +0000 UTC" firstStartedPulling="2026-02-26 22:10:17.41030927 +0000 UTC m=+902.489799801" lastFinishedPulling="2026-02-26 22:10:21.354512119 +0000 UTC m=+906.434002660" observedRunningTime="2026-02-26 22:10:21.964063639 +0000 UTC m=+907.043554220" watchObservedRunningTime="2026-02-26 22:10:21.970660028 +0000 UTC m=+907.050150609" Feb 26 22:10:25 crc kubenswrapper[4910]: I0226 22:10:25.579879 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s8t5g" Feb 26 22:10:25 crc kubenswrapper[4910]: I0226 22:10:25.580509 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s8t5g" Feb 26 22:10:25 crc kubenswrapper[4910]: I0226 22:10:25.727960 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:10:25 crc kubenswrapper[4910]: I0226 22:10:25.728051 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" 
podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:10:26 crc kubenswrapper[4910]: E0226 22:10:26.157450 4910 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9714152_a1fb_4b2e_848e_faac6734e81a.slice/crio-conmon-6376f3075b8dac7e20a423d13e8a6ac034c514b5c7c9846c4c527e5463d6abaa.scope\": RecentStats: unable to find data in memory cache]" Feb 26 22:10:26 crc kubenswrapper[4910]: I0226 22:10:26.636702 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s8t5g" podUID="c9714152-a1fb-4b2e-848e-faac6734e81a" containerName="registry-server" probeResult="failure" output=< Feb 26 22:10:26 crc kubenswrapper[4910]: timeout: failed to connect service ":50051" within 1s Feb 26 22:10:26 crc kubenswrapper[4910]: > Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.075456 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6f8f9c794b-f7dsr"] Feb 26 22:10:28 crc kubenswrapper[4910]: E0226 22:10:28.076672 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c1b6f45-ddca-4044-8f64-46f03abaa37a" containerName="util" Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.077305 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c1b6f45-ddca-4044-8f64-46f03abaa37a" containerName="util" Feb 26 22:10:28 crc kubenswrapper[4910]: E0226 22:10:28.077402 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c1b6f45-ddca-4044-8f64-46f03abaa37a" containerName="extract" Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.077487 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c1b6f45-ddca-4044-8f64-46f03abaa37a" containerName="extract" 
Feb 26 22:10:28 crc kubenswrapper[4910]: E0226 22:10:28.077573 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c1b6f45-ddca-4044-8f64-46f03abaa37a" containerName="pull" Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.077644 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c1b6f45-ddca-4044-8f64-46f03abaa37a" containerName="pull" Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.077859 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c1b6f45-ddca-4044-8f64-46f03abaa37a" containerName="extract" Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.078874 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-6f8f9c794b-f7dsr" Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.081006 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.081427 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.081584 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.081670 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-hmndq" Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.082435 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.087398 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 26 22:10:28 crc kubenswrapper[4910]: 
I0226 22:10:28.095209 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6f8f9c794b-f7dsr"]
Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.128992 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/31762fdd-32c2-4dd0-b121-814205d69874-webhook-cert\") pod \"loki-operator-controller-manager-6f8f9c794b-f7dsr\" (UID: \"31762fdd-32c2-4dd0-b121-814205d69874\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6f8f9c794b-f7dsr"
Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.129323 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/31762fdd-32c2-4dd0-b121-814205d69874-manager-config\") pod \"loki-operator-controller-manager-6f8f9c794b-f7dsr\" (UID: \"31762fdd-32c2-4dd0-b121-814205d69874\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6f8f9c794b-f7dsr"
Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.129447 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-554wd\" (UniqueName: \"kubernetes.io/projected/31762fdd-32c2-4dd0-b121-814205d69874-kube-api-access-554wd\") pod \"loki-operator-controller-manager-6f8f9c794b-f7dsr\" (UID: \"31762fdd-32c2-4dd0-b121-814205d69874\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6f8f9c794b-f7dsr"
Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.129573 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/31762fdd-32c2-4dd0-b121-814205d69874-apiservice-cert\") pod \"loki-operator-controller-manager-6f8f9c794b-f7dsr\" (UID: \"31762fdd-32c2-4dd0-b121-814205d69874\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6f8f9c794b-f7dsr"
Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.129714 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/31762fdd-32c2-4dd0-b121-814205d69874-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6f8f9c794b-f7dsr\" (UID: \"31762fdd-32c2-4dd0-b121-814205d69874\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6f8f9c794b-f7dsr"
Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.230720 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/31762fdd-32c2-4dd0-b121-814205d69874-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6f8f9c794b-f7dsr\" (UID: \"31762fdd-32c2-4dd0-b121-814205d69874\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6f8f9c794b-f7dsr"
Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.230776 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/31762fdd-32c2-4dd0-b121-814205d69874-webhook-cert\") pod \"loki-operator-controller-manager-6f8f9c794b-f7dsr\" (UID: \"31762fdd-32c2-4dd0-b121-814205d69874\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6f8f9c794b-f7dsr"
Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.230817 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/31762fdd-32c2-4dd0-b121-814205d69874-manager-config\") pod \"loki-operator-controller-manager-6f8f9c794b-f7dsr\" (UID: \"31762fdd-32c2-4dd0-b121-814205d69874\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6f8f9c794b-f7dsr"
Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.230836 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-554wd\" (UniqueName: \"kubernetes.io/projected/31762fdd-32c2-4dd0-b121-814205d69874-kube-api-access-554wd\") pod \"loki-operator-controller-manager-6f8f9c794b-f7dsr\" (UID: \"31762fdd-32c2-4dd0-b121-814205d69874\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6f8f9c794b-f7dsr"
Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.230886 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/31762fdd-32c2-4dd0-b121-814205d69874-apiservice-cert\") pod \"loki-operator-controller-manager-6f8f9c794b-f7dsr\" (UID: \"31762fdd-32c2-4dd0-b121-814205d69874\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6f8f9c794b-f7dsr"
Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.232012 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/31762fdd-32c2-4dd0-b121-814205d69874-manager-config\") pod \"loki-operator-controller-manager-6f8f9c794b-f7dsr\" (UID: \"31762fdd-32c2-4dd0-b121-814205d69874\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6f8f9c794b-f7dsr"
Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.236072 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/31762fdd-32c2-4dd0-b121-814205d69874-apiservice-cert\") pod \"loki-operator-controller-manager-6f8f9c794b-f7dsr\" (UID: \"31762fdd-32c2-4dd0-b121-814205d69874\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6f8f9c794b-f7dsr"
Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.236348 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/31762fdd-32c2-4dd0-b121-814205d69874-webhook-cert\") pod \"loki-operator-controller-manager-6f8f9c794b-f7dsr\" (UID: \"31762fdd-32c2-4dd0-b121-814205d69874\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6f8f9c794b-f7dsr"
Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.252722 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/31762fdd-32c2-4dd0-b121-814205d69874-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6f8f9c794b-f7dsr\" (UID: \"31762fdd-32c2-4dd0-b121-814205d69874\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6f8f9c794b-f7dsr"
Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.254549 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-554wd\" (UniqueName: \"kubernetes.io/projected/31762fdd-32c2-4dd0-b121-814205d69874-kube-api-access-554wd\") pod \"loki-operator-controller-manager-6f8f9c794b-f7dsr\" (UID: \"31762fdd-32c2-4dd0-b121-814205d69874\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6f8f9c794b-f7dsr"
Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.395616 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-6f8f9c794b-f7dsr"
Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.623010 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6f8f9c794b-f7dsr"]
Feb 26 22:10:28 crc kubenswrapper[4910]: W0226 22:10:28.635564 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31762fdd_32c2_4dd0_b121_814205d69874.slice/crio-4af715100161a96a9ae8814caa824adb5356b87fcba4b9d6c6a86f961b5d4d59 WatchSource:0}: Error finding container 4af715100161a96a9ae8814caa824adb5356b87fcba4b9d6c6a86f961b5d4d59: Status 404 returned error can't find the container with id 4af715100161a96a9ae8814caa824adb5356b87fcba4b9d6c6a86f961b5d4d59
Feb 26 22:10:28 crc kubenswrapper[4910]: I0226 22:10:28.988377 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6f8f9c794b-f7dsr" event={"ID":"31762fdd-32c2-4dd0-b121-814205d69874","Type":"ContainerStarted","Data":"4af715100161a96a9ae8814caa824adb5356b87fcba4b9d6c6a86f961b5d4d59"}
Feb 26 22:10:35 crc kubenswrapper[4910]: I0226 22:10:35.031144 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6f8f9c794b-f7dsr" event={"ID":"31762fdd-32c2-4dd0-b121-814205d69874","Type":"ContainerStarted","Data":"08ad140ce3a7b7247d23d47b57de11c7bdd80dade0e657149a5434ee91d12100"}
Feb 26 22:10:35 crc kubenswrapper[4910]: I0226 22:10:35.644418 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s8t5g"
Feb 26 22:10:35 crc kubenswrapper[4910]: I0226 22:10:35.689557 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s8t5g"
Feb 26 22:10:36 crc kubenswrapper[4910]: E0226 22:10:36.313309 4910 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9714152_a1fb_4b2e_848e_faac6734e81a.slice/crio-conmon-6376f3075b8dac7e20a423d13e8a6ac034c514b5c7c9846c4c527e5463d6abaa.scope\": RecentStats: unable to find data in memory cache]"
Feb 26 22:10:36 crc kubenswrapper[4910]: I0226 22:10:36.641459 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s8t5g"]
Feb 26 22:10:37 crc kubenswrapper[4910]: I0226 22:10:37.043214 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s8t5g" podUID="c9714152-a1fb-4b2e-848e-faac6734e81a" containerName="registry-server" containerID="cri-o://b6d571fa922a017940dcc7af01fc2b48ac2636d5245d3c1421765c31d6b6689f" gracePeriod=2
Feb 26 22:10:38 crc kubenswrapper[4910]: I0226 22:10:38.052841 4910 generic.go:334] "Generic (PLEG): container finished" podID="c9714152-a1fb-4b2e-848e-faac6734e81a" containerID="b6d571fa922a017940dcc7af01fc2b48ac2636d5245d3c1421765c31d6b6689f" exitCode=0
Feb 26 22:10:38 crc kubenswrapper[4910]: I0226 22:10:38.052954 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8t5g" event={"ID":"c9714152-a1fb-4b2e-848e-faac6734e81a","Type":"ContainerDied","Data":"b6d571fa922a017940dcc7af01fc2b48ac2636d5245d3c1421765c31d6b6689f"}
Feb 26 22:10:39 crc kubenswrapper[4910]: I0226 22:10:39.917664 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s8t5g"
Feb 26 22:10:39 crc kubenswrapper[4910]: I0226 22:10:39.998808 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9714152-a1fb-4b2e-848e-faac6734e81a-catalog-content\") pod \"c9714152-a1fb-4b2e-848e-faac6734e81a\" (UID: \"c9714152-a1fb-4b2e-848e-faac6734e81a\") "
Feb 26 22:10:39 crc kubenswrapper[4910]: I0226 22:10:39.998906 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9714152-a1fb-4b2e-848e-faac6734e81a-utilities\") pod \"c9714152-a1fb-4b2e-848e-faac6734e81a\" (UID: \"c9714152-a1fb-4b2e-848e-faac6734e81a\") "
Feb 26 22:10:39 crc kubenswrapper[4910]: I0226 22:10:39.998955 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzp4v\" (UniqueName: \"kubernetes.io/projected/c9714152-a1fb-4b2e-848e-faac6734e81a-kube-api-access-jzp4v\") pod \"c9714152-a1fb-4b2e-848e-faac6734e81a\" (UID: \"c9714152-a1fb-4b2e-848e-faac6734e81a\") "
Feb 26 22:10:40 crc kubenswrapper[4910]: I0226 22:10:40.000859 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9714152-a1fb-4b2e-848e-faac6734e81a-utilities" (OuterVolumeSpecName: "utilities") pod "c9714152-a1fb-4b2e-848e-faac6734e81a" (UID: "c9714152-a1fb-4b2e-848e-faac6734e81a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 22:10:40 crc kubenswrapper[4910]: I0226 22:10:40.004151 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9714152-a1fb-4b2e-848e-faac6734e81a-kube-api-access-jzp4v" (OuterVolumeSpecName: "kube-api-access-jzp4v") pod "c9714152-a1fb-4b2e-848e-faac6734e81a" (UID: "c9714152-a1fb-4b2e-848e-faac6734e81a"). InnerVolumeSpecName "kube-api-access-jzp4v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 22:10:40 crc kubenswrapper[4910]: I0226 22:10:40.072998 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8t5g" event={"ID":"c9714152-a1fb-4b2e-848e-faac6734e81a","Type":"ContainerDied","Data":"3b9c23f3c13a0f7a4bf45481324f51e29d4dea2e307c5912370fd388ab559e70"}
Feb 26 22:10:40 crc kubenswrapper[4910]: I0226 22:10:40.073047 4910 scope.go:117] "RemoveContainer" containerID="b6d571fa922a017940dcc7af01fc2b48ac2636d5245d3c1421765c31d6b6689f"
Feb 26 22:10:40 crc kubenswrapper[4910]: I0226 22:10:40.073114 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s8t5g"
Feb 26 22:10:40 crc kubenswrapper[4910]: I0226 22:10:40.095346 4910 scope.go:117] "RemoveContainer" containerID="50a6b7581d6f8c5db3f150137768f1314de3ace62409cac52a0b448fe3d01819"
Feb 26 22:10:40 crc kubenswrapper[4910]: I0226 22:10:40.099850 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9714152-a1fb-4b2e-848e-faac6734e81a-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 22:10:40 crc kubenswrapper[4910]: I0226 22:10:40.099883 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzp4v\" (UniqueName: \"kubernetes.io/projected/c9714152-a1fb-4b2e-848e-faac6734e81a-kube-api-access-jzp4v\") on node \"crc\" DevicePath \"\""
Feb 26 22:10:40 crc kubenswrapper[4910]: I0226 22:10:40.116976 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9714152-a1fb-4b2e-848e-faac6734e81a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9714152-a1fb-4b2e-848e-faac6734e81a" (UID: "c9714152-a1fb-4b2e-848e-faac6734e81a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 22:10:40 crc kubenswrapper[4910]: I0226 22:10:40.119846 4910 scope.go:117] "RemoveContainer" containerID="6376f3075b8dac7e20a423d13e8a6ac034c514b5c7c9846c4c527e5463d6abaa"
Feb 26 22:10:40 crc kubenswrapper[4910]: I0226 22:10:40.201421 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9714152-a1fb-4b2e-848e-faac6734e81a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 22:10:40 crc kubenswrapper[4910]: I0226 22:10:40.411585 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s8t5g"]
Feb 26 22:10:40 crc kubenswrapper[4910]: I0226 22:10:40.418313 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s8t5g"]
Feb 26 22:10:41 crc kubenswrapper[4910]: I0226 22:10:41.085323 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6f8f9c794b-f7dsr" event={"ID":"31762fdd-32c2-4dd0-b121-814205d69874","Type":"ContainerStarted","Data":"a6255bd7a5a9c31188294534e6482ee7f730d550ff27400eb396e4ef2237cba9"}
Feb 26 22:10:41 crc kubenswrapper[4910]: I0226 22:10:41.085675 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-6f8f9c794b-f7dsr"
Feb 26 22:10:41 crc kubenswrapper[4910]: I0226 22:10:41.088980 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-6f8f9c794b-f7dsr"
Feb 26 22:10:41 crc kubenswrapper[4910]: I0226 22:10:41.112564 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-6f8f9c794b-f7dsr" podStartSLOduration=1.79309664 podStartE2EDuration="13.112506336s" podCreationTimestamp="2026-02-26 22:10:28 +0000 UTC" firstStartedPulling="2026-02-26 22:10:28.638360992 +0000 UTC m=+913.717851533" lastFinishedPulling="2026-02-26 22:10:39.957770688 +0000 UTC m=+925.037261229" observedRunningTime="2026-02-26 22:10:41.103478022 +0000 UTC m=+926.182968573" watchObservedRunningTime="2026-02-26 22:10:41.112506336 +0000 UTC m=+926.191996917"
Feb 26 22:10:41 crc kubenswrapper[4910]: I0226 22:10:41.913195 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9714152-a1fb-4b2e-848e-faac6734e81a" path="/var/lib/kubelet/pods/c9714152-a1fb-4b2e-848e-faac6734e81a/volumes"
Feb 26 22:10:46 crc kubenswrapper[4910]: E0226 22:10:46.441387 4910 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9714152_a1fb_4b2e_848e_faac6734e81a.slice/crio-conmon-6376f3075b8dac7e20a423d13e8a6ac034c514b5c7c9846c4c527e5463d6abaa.scope\": RecentStats: unable to find data in memory cache]"
Feb 26 22:10:53 crc kubenswrapper[4910]: I0226 22:10:53.092929 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8p2wr"]
Feb 26 22:10:53 crc kubenswrapper[4910]: E0226 22:10:53.093921 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9714152-a1fb-4b2e-848e-faac6734e81a" containerName="registry-server"
Feb 26 22:10:53 crc kubenswrapper[4910]: I0226 22:10:53.093948 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9714152-a1fb-4b2e-848e-faac6734e81a" containerName="registry-server"
Feb 26 22:10:53 crc kubenswrapper[4910]: E0226 22:10:53.093969 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9714152-a1fb-4b2e-848e-faac6734e81a" containerName="extract-utilities"
Feb 26 22:10:53 crc kubenswrapper[4910]: I0226 22:10:53.093985 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9714152-a1fb-4b2e-848e-faac6734e81a" containerName="extract-utilities"
Feb 26 22:10:53 crc kubenswrapper[4910]: E0226 22:10:53.094015 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9714152-a1fb-4b2e-848e-faac6734e81a" containerName="extract-content"
Feb 26 22:10:53 crc kubenswrapper[4910]: I0226 22:10:53.094031 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9714152-a1fb-4b2e-848e-faac6734e81a" containerName="extract-content"
Feb 26 22:10:53 crc kubenswrapper[4910]: I0226 22:10:53.094873 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9714152-a1fb-4b2e-848e-faac6734e81a" containerName="registry-server"
Feb 26 22:10:53 crc kubenswrapper[4910]: I0226 22:10:53.096723 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8p2wr"
Feb 26 22:10:53 crc kubenswrapper[4910]: I0226 22:10:53.113303 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8p2wr"]
Feb 26 22:10:53 crc kubenswrapper[4910]: I0226 22:10:53.277397 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqthh\" (UniqueName: \"kubernetes.io/projected/65974607-b4a9-44fc-bc65-03e18bf869b2-kube-api-access-jqthh\") pod \"redhat-marketplace-8p2wr\" (UID: \"65974607-b4a9-44fc-bc65-03e18bf869b2\") " pod="openshift-marketplace/redhat-marketplace-8p2wr"
Feb 26 22:10:53 crc kubenswrapper[4910]: I0226 22:10:53.277478 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65974607-b4a9-44fc-bc65-03e18bf869b2-utilities\") pod \"redhat-marketplace-8p2wr\" (UID: \"65974607-b4a9-44fc-bc65-03e18bf869b2\") " pod="openshift-marketplace/redhat-marketplace-8p2wr"
Feb 26 22:10:53 crc kubenswrapper[4910]: I0226 22:10:53.277505 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65974607-b4a9-44fc-bc65-03e18bf869b2-catalog-content\") pod \"redhat-marketplace-8p2wr\" (UID: \"65974607-b4a9-44fc-bc65-03e18bf869b2\") " pod="openshift-marketplace/redhat-marketplace-8p2wr"
Feb 26 22:10:53 crc kubenswrapper[4910]: I0226 22:10:53.378617 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqthh\" (UniqueName: \"kubernetes.io/projected/65974607-b4a9-44fc-bc65-03e18bf869b2-kube-api-access-jqthh\") pod \"redhat-marketplace-8p2wr\" (UID: \"65974607-b4a9-44fc-bc65-03e18bf869b2\") " pod="openshift-marketplace/redhat-marketplace-8p2wr"
Feb 26 22:10:53 crc kubenswrapper[4910]: I0226 22:10:53.378692 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65974607-b4a9-44fc-bc65-03e18bf869b2-utilities\") pod \"redhat-marketplace-8p2wr\" (UID: \"65974607-b4a9-44fc-bc65-03e18bf869b2\") " pod="openshift-marketplace/redhat-marketplace-8p2wr"
Feb 26 22:10:53 crc kubenswrapper[4910]: I0226 22:10:53.378713 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65974607-b4a9-44fc-bc65-03e18bf869b2-catalog-content\") pod \"redhat-marketplace-8p2wr\" (UID: \"65974607-b4a9-44fc-bc65-03e18bf869b2\") " pod="openshift-marketplace/redhat-marketplace-8p2wr"
Feb 26 22:10:53 crc kubenswrapper[4910]: I0226 22:10:53.379132 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65974607-b4a9-44fc-bc65-03e18bf869b2-catalog-content\") pod \"redhat-marketplace-8p2wr\" (UID: \"65974607-b4a9-44fc-bc65-03e18bf869b2\") " pod="openshift-marketplace/redhat-marketplace-8p2wr"
Feb 26 22:10:53 crc kubenswrapper[4910]: I0226 22:10:53.379505 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65974607-b4a9-44fc-bc65-03e18bf869b2-utilities\") pod \"redhat-marketplace-8p2wr\" (UID: \"65974607-b4a9-44fc-bc65-03e18bf869b2\") " pod="openshift-marketplace/redhat-marketplace-8p2wr"
Feb 26 22:10:53 crc kubenswrapper[4910]: I0226 22:10:53.400779 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqthh\" (UniqueName: \"kubernetes.io/projected/65974607-b4a9-44fc-bc65-03e18bf869b2-kube-api-access-jqthh\") pod \"redhat-marketplace-8p2wr\" (UID: \"65974607-b4a9-44fc-bc65-03e18bf869b2\") " pod="openshift-marketplace/redhat-marketplace-8p2wr"
Feb 26 22:10:53 crc kubenswrapper[4910]: I0226 22:10:53.449435 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8p2wr"
Feb 26 22:10:53 crc kubenswrapper[4910]: I0226 22:10:53.711571 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8p2wr"]
Feb 26 22:10:54 crc kubenswrapper[4910]: I0226 22:10:54.176554 4910 generic.go:334] "Generic (PLEG): container finished" podID="65974607-b4a9-44fc-bc65-03e18bf869b2" containerID="5331c0ca432f464b5005b8ab815873cb51516c7f0bf057acf82fe6f084af72b7" exitCode=0
Feb 26 22:10:54 crc kubenswrapper[4910]: I0226 22:10:54.176797 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8p2wr" event={"ID":"65974607-b4a9-44fc-bc65-03e18bf869b2","Type":"ContainerDied","Data":"5331c0ca432f464b5005b8ab815873cb51516c7f0bf057acf82fe6f084af72b7"}
Feb 26 22:10:54 crc kubenswrapper[4910]: I0226 22:10:54.176820 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8p2wr" event={"ID":"65974607-b4a9-44fc-bc65-03e18bf869b2","Type":"ContainerStarted","Data":"babd4055dc04c8504aa1f1610ea2a244c4f5de30f8cb0128d06d62891fc08e40"}
Feb 26 22:10:55 crc kubenswrapper[4910]: I0226 22:10:55.188567 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8p2wr" event={"ID":"65974607-b4a9-44fc-bc65-03e18bf869b2","Type":"ContainerStarted","Data":"b765bececc33d226f238c045f7cf099e41a34f6e58fd9bf382c84b5e827d389c"}
Feb 26 22:10:55 crc kubenswrapper[4910]: I0226 22:10:55.727571 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 22:10:55 crc kubenswrapper[4910]: I0226 22:10:55.727671 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 22:10:55 crc kubenswrapper[4910]: I0226 22:10:55.727748 4910 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4"
Feb 26 22:10:55 crc kubenswrapper[4910]: I0226 22:10:55.728865 4910 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b8aa69230a8076fd0ec023976cd59eeefa38746d90bf2e2c7d3f40e007a0afc9"} pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 26 22:10:55 crc kubenswrapper[4910]: I0226 22:10:55.728992 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" containerID="cri-o://b8aa69230a8076fd0ec023976cd59eeefa38746d90bf2e2c7d3f40e007a0afc9" gracePeriod=600
Feb 26 22:10:56 crc kubenswrapper[4910]: I0226 22:10:56.197595 4910 generic.go:334] "Generic (PLEG): container finished" podID="65974607-b4a9-44fc-bc65-03e18bf869b2" containerID="b765bececc33d226f238c045f7cf099e41a34f6e58fd9bf382c84b5e827d389c" exitCode=0
Feb 26 22:10:56 crc kubenswrapper[4910]: I0226 22:10:56.197685 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8p2wr" event={"ID":"65974607-b4a9-44fc-bc65-03e18bf869b2","Type":"ContainerDied","Data":"b765bececc33d226f238c045f7cf099e41a34f6e58fd9bf382c84b5e827d389c"}
Feb 26 22:10:56 crc kubenswrapper[4910]: I0226 22:10:56.206864 4910 generic.go:334] "Generic (PLEG): container finished" podID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerID="b8aa69230a8076fd0ec023976cd59eeefa38746d90bf2e2c7d3f40e007a0afc9" exitCode=0
Feb 26 22:10:56 crc kubenswrapper[4910]: I0226 22:10:56.207104 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerDied","Data":"b8aa69230a8076fd0ec023976cd59eeefa38746d90bf2e2c7d3f40e007a0afc9"}
Feb 26 22:10:56 crc kubenswrapper[4910]: I0226 22:10:56.207278 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerStarted","Data":"8ea55f05369c1b8e8cc6600dec4dd7568856f1c31173a49e14886d4d1e1c338d"}
Feb 26 22:10:56 crc kubenswrapper[4910]: I0226 22:10:56.207311 4910 scope.go:117] "RemoveContainer" containerID="3d1f6db3407c868dd446acc51c6527a528eccc7a840c7986ab6697c6b21634ed"
Feb 26 22:10:56 crc kubenswrapper[4910]: E0226 22:10:56.608441 4910 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9714152_a1fb_4b2e_848e_faac6734e81a.slice/crio-conmon-6376f3075b8dac7e20a423d13e8a6ac034c514b5c7c9846c4c527e5463d6abaa.scope\": RecentStats: unable to find data in memory cache]"
Feb 26 22:10:57 crc kubenswrapper[4910]: I0226 22:10:57.219879 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8p2wr" event={"ID":"65974607-b4a9-44fc-bc65-03e18bf869b2","Type":"ContainerStarted","Data":"dd2f07957cd5df0151d683843dfd106c1829f5de8bbbf54d8532f854e87a8cf4"}
Feb 26 22:10:57 crc kubenswrapper[4910]: I0226 22:10:57.245388 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8p2wr" podStartSLOduration=1.825042965 podStartE2EDuration="4.245374053s" podCreationTimestamp="2026-02-26 22:10:53 +0000 UTC" firstStartedPulling="2026-02-26 22:10:54.179128359 +0000 UTC m=+939.258618900" lastFinishedPulling="2026-02-26 22:10:56.599459437 +0000 UTC m=+941.678949988" observedRunningTime="2026-02-26 22:10:57.243203193 +0000 UTC m=+942.322693764" watchObservedRunningTime="2026-02-26 22:10:57.245374053 +0000 UTC m=+942.324864594"
Feb 26 22:11:03 crc kubenswrapper[4910]: I0226 22:11:03.450624 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8p2wr"
Feb 26 22:11:03 crc kubenswrapper[4910]: I0226 22:11:03.451241 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8p2wr"
Feb 26 22:11:03 crc kubenswrapper[4910]: I0226 22:11:03.519142 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8p2wr"
Feb 26 22:11:04 crc kubenswrapper[4910]: I0226 22:11:04.341501 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8p2wr"
Feb 26 22:11:04 crc kubenswrapper[4910]: I0226 22:11:04.400916 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8p2wr"]
Feb 26 22:11:06 crc kubenswrapper[4910]: I0226 22:11:06.291421 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8p2wr" podUID="65974607-b4a9-44fc-bc65-03e18bf869b2" containerName="registry-server" containerID="cri-o://dd2f07957cd5df0151d683843dfd106c1829f5de8bbbf54d8532f854e87a8cf4" gracePeriod=2
Feb 26 22:11:06 crc kubenswrapper[4910]: I0226 22:11:06.747006 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8p2wr"
Feb 26 22:11:06 crc kubenswrapper[4910]: E0226 22:11:06.756596 4910 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9714152_a1fb_4b2e_848e_faac6734e81a.slice/crio-conmon-6376f3075b8dac7e20a423d13e8a6ac034c514b5c7c9846c4c527e5463d6abaa.scope\": RecentStats: unable to find data in memory cache]"
Feb 26 22:11:06 crc kubenswrapper[4910]: I0226 22:11:06.889980 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65974607-b4a9-44fc-bc65-03e18bf869b2-utilities\") pod \"65974607-b4a9-44fc-bc65-03e18bf869b2\" (UID: \"65974607-b4a9-44fc-bc65-03e18bf869b2\") "
Feb 26 22:11:06 crc kubenswrapper[4910]: I0226 22:11:06.890102 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqthh\" (UniqueName: \"kubernetes.io/projected/65974607-b4a9-44fc-bc65-03e18bf869b2-kube-api-access-jqthh\") pod \"65974607-b4a9-44fc-bc65-03e18bf869b2\" (UID: \"65974607-b4a9-44fc-bc65-03e18bf869b2\") "
Feb 26 22:11:06 crc kubenswrapper[4910]: I0226 22:11:06.891735 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65974607-b4a9-44fc-bc65-03e18bf869b2-utilities" (OuterVolumeSpecName: "utilities") pod "65974607-b4a9-44fc-bc65-03e18bf869b2" (UID: "65974607-b4a9-44fc-bc65-03e18bf869b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 22:11:06 crc kubenswrapper[4910]: I0226 22:11:06.891934 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65974607-b4a9-44fc-bc65-03e18bf869b2-catalog-content\") pod \"65974607-b4a9-44fc-bc65-03e18bf869b2\" (UID: \"65974607-b4a9-44fc-bc65-03e18bf869b2\") "
Feb 26 22:11:06 crc kubenswrapper[4910]: I0226 22:11:06.892469 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65974607-b4a9-44fc-bc65-03e18bf869b2-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 22:11:06 crc kubenswrapper[4910]: I0226 22:11:06.900433 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65974607-b4a9-44fc-bc65-03e18bf869b2-kube-api-access-jqthh" (OuterVolumeSpecName: "kube-api-access-jqthh") pod "65974607-b4a9-44fc-bc65-03e18bf869b2" (UID: "65974607-b4a9-44fc-bc65-03e18bf869b2"). InnerVolumeSpecName "kube-api-access-jqthh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 22:11:06 crc kubenswrapper[4910]: I0226 22:11:06.953306 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65974607-b4a9-44fc-bc65-03e18bf869b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65974607-b4a9-44fc-bc65-03e18bf869b2" (UID: "65974607-b4a9-44fc-bc65-03e18bf869b2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 22:11:06 crc kubenswrapper[4910]: I0226 22:11:06.994342 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqthh\" (UniqueName: \"kubernetes.io/projected/65974607-b4a9-44fc-bc65-03e18bf869b2-kube-api-access-jqthh\") on node \"crc\" DevicePath \"\""
Feb 26 22:11:06 crc kubenswrapper[4910]: I0226 22:11:06.994391 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65974607-b4a9-44fc-bc65-03e18bf869b2-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 22:11:07 crc kubenswrapper[4910]: I0226 22:11:07.303681 4910 generic.go:334] "Generic (PLEG): container finished" podID="65974607-b4a9-44fc-bc65-03e18bf869b2" containerID="dd2f07957cd5df0151d683843dfd106c1829f5de8bbbf54d8532f854e87a8cf4" exitCode=0
Feb 26 22:11:07 crc kubenswrapper[4910]: I0226 22:11:07.303766 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8p2wr" event={"ID":"65974607-b4a9-44fc-bc65-03e18bf869b2","Type":"ContainerDied","Data":"dd2f07957cd5df0151d683843dfd106c1829f5de8bbbf54d8532f854e87a8cf4"}
Feb 26 22:11:07 crc kubenswrapper[4910]: I0226 22:11:07.303833 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8p2wr" event={"ID":"65974607-b4a9-44fc-bc65-03e18bf869b2","Type":"ContainerDied","Data":"babd4055dc04c8504aa1f1610ea2a244c4f5de30f8cb0128d06d62891fc08e40"}
Feb 26 22:11:07 crc kubenswrapper[4910]: I0226 22:11:07.303841 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8p2wr"
Feb 26 22:11:07 crc kubenswrapper[4910]: I0226 22:11:07.303867 4910 scope.go:117] "RemoveContainer" containerID="dd2f07957cd5df0151d683843dfd106c1829f5de8bbbf54d8532f854e87a8cf4"
Feb 26 22:11:07 crc kubenswrapper[4910]: I0226 22:11:07.328981 4910 scope.go:117] "RemoveContainer" containerID="b765bececc33d226f238c045f7cf099e41a34f6e58fd9bf382c84b5e827d389c"
Feb 26 22:11:07 crc kubenswrapper[4910]: I0226 22:11:07.352805 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8p2wr"]
Feb 26 22:11:07 crc kubenswrapper[4910]: I0226 22:11:07.359628 4910 scope.go:117] "RemoveContainer" containerID="5331c0ca432f464b5005b8ab815873cb51516c7f0bf057acf82fe6f084af72b7"
Feb 26 22:11:07 crc kubenswrapper[4910]: I0226 22:11:07.363621 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8p2wr"]
Feb 26 22:11:07 crc kubenswrapper[4910]: I0226 22:11:07.385540 4910 scope.go:117] "RemoveContainer" containerID="dd2f07957cd5df0151d683843dfd106c1829f5de8bbbf54d8532f854e87a8cf4"
Feb 26 22:11:07 crc kubenswrapper[4910]: E0226 22:11:07.386021 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd2f07957cd5df0151d683843dfd106c1829f5de8bbbf54d8532f854e87a8cf4\": container with ID starting with dd2f07957cd5df0151d683843dfd106c1829f5de8bbbf54d8532f854e87a8cf4 not found: ID does not exist" containerID="dd2f07957cd5df0151d683843dfd106c1829f5de8bbbf54d8532f854e87a8cf4"
Feb 26 22:11:07 crc kubenswrapper[4910]: I0226 22:11:07.386090 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd2f07957cd5df0151d683843dfd106c1829f5de8bbbf54d8532f854e87a8cf4"} err="failed to get container status \"dd2f07957cd5df0151d683843dfd106c1829f5de8bbbf54d8532f854e87a8cf4\": rpc error: code = NotFound desc = could not find container \"dd2f07957cd5df0151d683843dfd106c1829f5de8bbbf54d8532f854e87a8cf4\": container with ID starting with dd2f07957cd5df0151d683843dfd106c1829f5de8bbbf54d8532f854e87a8cf4 not found: ID does not exist"
Feb 26 22:11:07 crc kubenswrapper[4910]: I0226 22:11:07.386135 4910 scope.go:117] "RemoveContainer" containerID="b765bececc33d226f238c045f7cf099e41a34f6e58fd9bf382c84b5e827d389c"
Feb 26 22:11:07 crc kubenswrapper[4910]: E0226 22:11:07.386483 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b765bececc33d226f238c045f7cf099e41a34f6e58fd9bf382c84b5e827d389c\": container with ID starting with b765bececc33d226f238c045f7cf099e41a34f6e58fd9bf382c84b5e827d389c not found: ID does not exist" containerID="b765bececc33d226f238c045f7cf099e41a34f6e58fd9bf382c84b5e827d389c"
Feb 26 22:11:07 crc kubenswrapper[4910]: I0226 22:11:07.386523 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b765bececc33d226f238c045f7cf099e41a34f6e58fd9bf382c84b5e827d389c"} err="failed to get container status \"b765bececc33d226f238c045f7cf099e41a34f6e58fd9bf382c84b5e827d389c\": rpc error: code = NotFound desc = could not find container \"b765bececc33d226f238c045f7cf099e41a34f6e58fd9bf382c84b5e827d389c\": container with ID starting with b765bececc33d226f238c045f7cf099e41a34f6e58fd9bf382c84b5e827d389c not found: ID does not exist"
Feb 26 22:11:07 crc kubenswrapper[4910]: I0226 22:11:07.386550 4910 scope.go:117] "RemoveContainer" containerID="5331c0ca432f464b5005b8ab815873cb51516c7f0bf057acf82fe6f084af72b7"
Feb 26 22:11:07 crc kubenswrapper[4910]: E0226 22:11:07.387497 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5331c0ca432f464b5005b8ab815873cb51516c7f0bf057acf82fe6f084af72b7\": container with ID starting with 5331c0ca432f464b5005b8ab815873cb51516c7f0bf057acf82fe6f084af72b7 not found: ID does not exist"
containerID="5331c0ca432f464b5005b8ab815873cb51516c7f0bf057acf82fe6f084af72b7" Feb 26 22:11:07 crc kubenswrapper[4910]: I0226 22:11:07.387554 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5331c0ca432f464b5005b8ab815873cb51516c7f0bf057acf82fe6f084af72b7"} err="failed to get container status \"5331c0ca432f464b5005b8ab815873cb51516c7f0bf057acf82fe6f084af72b7\": rpc error: code = NotFound desc = could not find container \"5331c0ca432f464b5005b8ab815873cb51516c7f0bf057acf82fe6f084af72b7\": container with ID starting with 5331c0ca432f464b5005b8ab815873cb51516c7f0bf057acf82fe6f084af72b7 not found: ID does not exist" Feb 26 22:11:07 crc kubenswrapper[4910]: I0226 22:11:07.915971 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65974607-b4a9-44fc-bc65-03e18bf869b2" path="/var/lib/kubelet/pods/65974607-b4a9-44fc-bc65-03e18bf869b2/volumes" Feb 26 22:11:10 crc kubenswrapper[4910]: I0226 22:11:10.797471 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d"] Feb 26 22:11:10 crc kubenswrapper[4910]: E0226 22:11:10.797696 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65974607-b4a9-44fc-bc65-03e18bf869b2" containerName="extract-content" Feb 26 22:11:10 crc kubenswrapper[4910]: I0226 22:11:10.797707 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="65974607-b4a9-44fc-bc65-03e18bf869b2" containerName="extract-content" Feb 26 22:11:10 crc kubenswrapper[4910]: E0226 22:11:10.797723 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65974607-b4a9-44fc-bc65-03e18bf869b2" containerName="registry-server" Feb 26 22:11:10 crc kubenswrapper[4910]: I0226 22:11:10.797729 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="65974607-b4a9-44fc-bc65-03e18bf869b2" containerName="registry-server" Feb 26 22:11:10 crc kubenswrapper[4910]: E0226 22:11:10.797740 4910 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65974607-b4a9-44fc-bc65-03e18bf869b2" containerName="extract-utilities" Feb 26 22:11:10 crc kubenswrapper[4910]: I0226 22:11:10.797746 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="65974607-b4a9-44fc-bc65-03e18bf869b2" containerName="extract-utilities" Feb 26 22:11:10 crc kubenswrapper[4910]: I0226 22:11:10.797859 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="65974607-b4a9-44fc-bc65-03e18bf869b2" containerName="registry-server" Feb 26 22:11:10 crc kubenswrapper[4910]: I0226 22:11:10.798587 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d" Feb 26 22:11:10 crc kubenswrapper[4910]: I0226 22:11:10.803662 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 26 22:11:10 crc kubenswrapper[4910]: I0226 22:11:10.810350 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d"] Feb 26 22:11:10 crc kubenswrapper[4910]: I0226 22:11:10.849398 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bebc2749-d3eb-4894-8c1b-14271b6c1f9c-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d\" (UID: \"bebc2749-d3eb-4894-8c1b-14271b6c1f9c\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d" Feb 26 22:11:10 crc kubenswrapper[4910]: I0226 22:11:10.849464 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs5nw\" (UniqueName: \"kubernetes.io/projected/bebc2749-d3eb-4894-8c1b-14271b6c1f9c-kube-api-access-qs5nw\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d\" (UID: 
\"bebc2749-d3eb-4894-8c1b-14271b6c1f9c\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d" Feb 26 22:11:10 crc kubenswrapper[4910]: I0226 22:11:10.849712 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bebc2749-d3eb-4894-8c1b-14271b6c1f9c-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d\" (UID: \"bebc2749-d3eb-4894-8c1b-14271b6c1f9c\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d" Feb 26 22:11:10 crc kubenswrapper[4910]: I0226 22:11:10.951033 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bebc2749-d3eb-4894-8c1b-14271b6c1f9c-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d\" (UID: \"bebc2749-d3eb-4894-8c1b-14271b6c1f9c\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d" Feb 26 22:11:10 crc kubenswrapper[4910]: I0226 22:11:10.951127 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bebc2749-d3eb-4894-8c1b-14271b6c1f9c-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d\" (UID: \"bebc2749-d3eb-4894-8c1b-14271b6c1f9c\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d" Feb 26 22:11:10 crc kubenswrapper[4910]: I0226 22:11:10.951213 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs5nw\" (UniqueName: \"kubernetes.io/projected/bebc2749-d3eb-4894-8c1b-14271b6c1f9c-kube-api-access-qs5nw\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d\" (UID: \"bebc2749-d3eb-4894-8c1b-14271b6c1f9c\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d" 
Feb 26 22:11:10 crc kubenswrapper[4910]: I0226 22:11:10.951663 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bebc2749-d3eb-4894-8c1b-14271b6c1f9c-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d\" (UID: \"bebc2749-d3eb-4894-8c1b-14271b6c1f9c\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d" Feb 26 22:11:10 crc kubenswrapper[4910]: I0226 22:11:10.951710 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bebc2749-d3eb-4894-8c1b-14271b6c1f9c-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d\" (UID: \"bebc2749-d3eb-4894-8c1b-14271b6c1f9c\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d" Feb 26 22:11:10 crc kubenswrapper[4910]: I0226 22:11:10.972594 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs5nw\" (UniqueName: \"kubernetes.io/projected/bebc2749-d3eb-4894-8c1b-14271b6c1f9c-kube-api-access-qs5nw\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d\" (UID: \"bebc2749-d3eb-4894-8c1b-14271b6c1f9c\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d" Feb 26 22:11:11 crc kubenswrapper[4910]: I0226 22:11:11.114487 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d" Feb 26 22:11:11 crc kubenswrapper[4910]: I0226 22:11:11.374110 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d"] Feb 26 22:11:12 crc kubenswrapper[4910]: I0226 22:11:12.345093 4910 generic.go:334] "Generic (PLEG): container finished" podID="bebc2749-d3eb-4894-8c1b-14271b6c1f9c" containerID="f859f82ebbcf2e836202e378a4edc38225d80d289aebe03ab8d95d2c21149019" exitCode=0 Feb 26 22:11:12 crc kubenswrapper[4910]: I0226 22:11:12.345201 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d" event={"ID":"bebc2749-d3eb-4894-8c1b-14271b6c1f9c","Type":"ContainerDied","Data":"f859f82ebbcf2e836202e378a4edc38225d80d289aebe03ab8d95d2c21149019"} Feb 26 22:11:12 crc kubenswrapper[4910]: I0226 22:11:12.345254 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d" event={"ID":"bebc2749-d3eb-4894-8c1b-14271b6c1f9c","Type":"ContainerStarted","Data":"171bcf9413f546727cf4feb076104054983ac0b5f46aa7945d5a407679abb8e7"} Feb 26 22:11:12 crc kubenswrapper[4910]: I0226 22:11:12.348516 4910 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 22:11:14 crc kubenswrapper[4910]: I0226 22:11:14.364024 4910 generic.go:334] "Generic (PLEG): container finished" podID="bebc2749-d3eb-4894-8c1b-14271b6c1f9c" containerID="0dd4f0075fe28efb81676cc55e02cda96ed3d6c317a4e0eee2de688f7f3f6796" exitCode=0 Feb 26 22:11:14 crc kubenswrapper[4910]: I0226 22:11:14.364118 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d" 
event={"ID":"bebc2749-d3eb-4894-8c1b-14271b6c1f9c","Type":"ContainerDied","Data":"0dd4f0075fe28efb81676cc55e02cda96ed3d6c317a4e0eee2de688f7f3f6796"} Feb 26 22:11:15 crc kubenswrapper[4910]: I0226 22:11:15.377294 4910 generic.go:334] "Generic (PLEG): container finished" podID="bebc2749-d3eb-4894-8c1b-14271b6c1f9c" containerID="0ec313c16e5383feb2ad375e1011ccab131737edaad1772561c581d842a0fae8" exitCode=0 Feb 26 22:11:15 crc kubenswrapper[4910]: I0226 22:11:15.377385 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d" event={"ID":"bebc2749-d3eb-4894-8c1b-14271b6c1f9c","Type":"ContainerDied","Data":"0ec313c16e5383feb2ad375e1011ccab131737edaad1772561c581d842a0fae8"} Feb 26 22:11:16 crc kubenswrapper[4910]: I0226 22:11:16.708013 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d" Feb 26 22:11:16 crc kubenswrapper[4910]: I0226 22:11:16.836764 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs5nw\" (UniqueName: \"kubernetes.io/projected/bebc2749-d3eb-4894-8c1b-14271b6c1f9c-kube-api-access-qs5nw\") pod \"bebc2749-d3eb-4894-8c1b-14271b6c1f9c\" (UID: \"bebc2749-d3eb-4894-8c1b-14271b6c1f9c\") " Feb 26 22:11:16 crc kubenswrapper[4910]: I0226 22:11:16.836904 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bebc2749-d3eb-4894-8c1b-14271b6c1f9c-bundle\") pod \"bebc2749-d3eb-4894-8c1b-14271b6c1f9c\" (UID: \"bebc2749-d3eb-4894-8c1b-14271b6c1f9c\") " Feb 26 22:11:16 crc kubenswrapper[4910]: I0226 22:11:16.837392 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bebc2749-d3eb-4894-8c1b-14271b6c1f9c-util\") pod \"bebc2749-d3eb-4894-8c1b-14271b6c1f9c\" (UID: 
\"bebc2749-d3eb-4894-8c1b-14271b6c1f9c\") " Feb 26 22:11:16 crc kubenswrapper[4910]: I0226 22:11:16.838760 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bebc2749-d3eb-4894-8c1b-14271b6c1f9c-bundle" (OuterVolumeSpecName: "bundle") pod "bebc2749-d3eb-4894-8c1b-14271b6c1f9c" (UID: "bebc2749-d3eb-4894-8c1b-14271b6c1f9c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:11:16 crc kubenswrapper[4910]: I0226 22:11:16.845619 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bebc2749-d3eb-4894-8c1b-14271b6c1f9c-kube-api-access-qs5nw" (OuterVolumeSpecName: "kube-api-access-qs5nw") pod "bebc2749-d3eb-4894-8c1b-14271b6c1f9c" (UID: "bebc2749-d3eb-4894-8c1b-14271b6c1f9c"). InnerVolumeSpecName "kube-api-access-qs5nw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:11:16 crc kubenswrapper[4910]: I0226 22:11:16.939915 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs5nw\" (UniqueName: \"kubernetes.io/projected/bebc2749-d3eb-4894-8c1b-14271b6c1f9c-kube-api-access-qs5nw\") on node \"crc\" DevicePath \"\"" Feb 26 22:11:16 crc kubenswrapper[4910]: I0226 22:11:16.939975 4910 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bebc2749-d3eb-4894-8c1b-14271b6c1f9c-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:11:17 crc kubenswrapper[4910]: I0226 22:11:17.088308 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bebc2749-d3eb-4894-8c1b-14271b6c1f9c-util" (OuterVolumeSpecName: "util") pod "bebc2749-d3eb-4894-8c1b-14271b6c1f9c" (UID: "bebc2749-d3eb-4894-8c1b-14271b6c1f9c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:11:17 crc kubenswrapper[4910]: I0226 22:11:17.143101 4910 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bebc2749-d3eb-4894-8c1b-14271b6c1f9c-util\") on node \"crc\" DevicePath \"\"" Feb 26 22:11:17 crc kubenswrapper[4910]: I0226 22:11:17.398874 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d" event={"ID":"bebc2749-d3eb-4894-8c1b-14271b6c1f9c","Type":"ContainerDied","Data":"171bcf9413f546727cf4feb076104054983ac0b5f46aa7945d5a407679abb8e7"} Feb 26 22:11:17 crc kubenswrapper[4910]: I0226 22:11:17.398927 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="171bcf9413f546727cf4feb076104054983ac0b5f46aa7945d5a407679abb8e7" Feb 26 22:11:17 crc kubenswrapper[4910]: I0226 22:11:17.398960 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d" Feb 26 22:11:20 crc kubenswrapper[4910]: I0226 22:11:20.364021 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-rxpwx"] Feb 26 22:11:20 crc kubenswrapper[4910]: E0226 22:11:20.364751 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebc2749-d3eb-4894-8c1b-14271b6c1f9c" containerName="pull" Feb 26 22:11:20 crc kubenswrapper[4910]: I0226 22:11:20.364772 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebc2749-d3eb-4894-8c1b-14271b6c1f9c" containerName="pull" Feb 26 22:11:20 crc kubenswrapper[4910]: E0226 22:11:20.364795 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebc2749-d3eb-4894-8c1b-14271b6c1f9c" containerName="extract" Feb 26 22:11:20 crc kubenswrapper[4910]: I0226 22:11:20.364807 4910 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bebc2749-d3eb-4894-8c1b-14271b6c1f9c" containerName="extract" Feb 26 22:11:20 crc kubenswrapper[4910]: E0226 22:11:20.364828 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebc2749-d3eb-4894-8c1b-14271b6c1f9c" containerName="util" Feb 26 22:11:20 crc kubenswrapper[4910]: I0226 22:11:20.364840 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebc2749-d3eb-4894-8c1b-14271b6c1f9c" containerName="util" Feb 26 22:11:20 crc kubenswrapper[4910]: I0226 22:11:20.365018 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="bebc2749-d3eb-4894-8c1b-14271b6c1f9c" containerName="extract" Feb 26 22:11:20 crc kubenswrapper[4910]: I0226 22:11:20.365676 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-rxpwx" Feb 26 22:11:20 crc kubenswrapper[4910]: I0226 22:11:20.368072 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 26 22:11:20 crc kubenswrapper[4910]: I0226 22:11:20.369414 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-f4fx9" Feb 26 22:11:20 crc kubenswrapper[4910]: I0226 22:11:20.376783 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 26 22:11:20 crc kubenswrapper[4910]: I0226 22:11:20.387283 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-rxpwx"] Feb 26 22:11:20 crc kubenswrapper[4910]: I0226 22:11:20.486969 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5vll\" (UniqueName: \"kubernetes.io/projected/bb1bab0a-7f00-4ce2-9c96-5a5581cf3b89-kube-api-access-g5vll\") pod \"nmstate-operator-75c5dccd6c-rxpwx\" (UID: \"bb1bab0a-7f00-4ce2-9c96-5a5581cf3b89\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-rxpwx" Feb 26 
22:11:20 crc kubenswrapper[4910]: I0226 22:11:20.588679 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5vll\" (UniqueName: \"kubernetes.io/projected/bb1bab0a-7f00-4ce2-9c96-5a5581cf3b89-kube-api-access-g5vll\") pod \"nmstate-operator-75c5dccd6c-rxpwx\" (UID: \"bb1bab0a-7f00-4ce2-9c96-5a5581cf3b89\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-rxpwx" Feb 26 22:11:20 crc kubenswrapper[4910]: I0226 22:11:20.626387 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5vll\" (UniqueName: \"kubernetes.io/projected/bb1bab0a-7f00-4ce2-9c96-5a5581cf3b89-kube-api-access-g5vll\") pod \"nmstate-operator-75c5dccd6c-rxpwx\" (UID: \"bb1bab0a-7f00-4ce2-9c96-5a5581cf3b89\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-rxpwx" Feb 26 22:11:20 crc kubenswrapper[4910]: I0226 22:11:20.695414 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-rxpwx" Feb 26 22:11:21 crc kubenswrapper[4910]: I0226 22:11:21.020875 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-rxpwx"] Feb 26 22:11:21 crc kubenswrapper[4910]: I0226 22:11:21.451514 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-rxpwx" event={"ID":"bb1bab0a-7f00-4ce2-9c96-5a5581cf3b89","Type":"ContainerStarted","Data":"1a09dc36eb496971bc9f086f2238ddc4d641843bc8dc2a3db772e31dedf46545"} Feb 26 22:11:23 crc kubenswrapper[4910]: I0226 22:11:23.471648 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-rxpwx" event={"ID":"bb1bab0a-7f00-4ce2-9c96-5a5581cf3b89","Type":"ContainerStarted","Data":"ae1370172af6960de36170d4391ce911f68342e22a513a9c3dbe2c5d38da39d6"} Feb 26 22:11:23 crc kubenswrapper[4910]: I0226 22:11:23.502988 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-operator-75c5dccd6c-rxpwx" podStartSLOduration=1.255176735 podStartE2EDuration="3.502968443s" podCreationTimestamp="2026-02-26 22:11:20 +0000 UTC" firstStartedPulling="2026-02-26 22:11:21.01227022 +0000 UTC m=+966.091760761" lastFinishedPulling="2026-02-26 22:11:23.260061928 +0000 UTC m=+968.339552469" observedRunningTime="2026-02-26 22:11:23.498134483 +0000 UTC m=+968.577625034" watchObservedRunningTime="2026-02-26 22:11:23.502968443 +0000 UTC m=+968.582459004" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.571784 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-hjdg5"] Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.575281 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-hjdg5" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.581138 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-hjdg5"] Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.581483 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-mfzmg" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.589924 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-g57p5"] Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.591155 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-g57p5" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.593404 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.635435 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-7c5lt"] Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.636417 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-7c5lt" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.643669 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-g57p5"] Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.743251 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5z45x"] Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.743945 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5z45x" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.744840 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/da33d221-fd03-423a-a7ac-6f74130cb62a-nmstate-lock\") pod \"nmstate-handler-7c5lt\" (UID: \"da33d221-fd03-423a-a7ac-6f74130cb62a\") " pod="openshift-nmstate/nmstate-handler-7c5lt" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.744910 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/29f7e558-2f35-4fbe-b29a-c1f04082478c-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-g57p5\" (UID: \"29f7e558-2f35-4fbe-b29a-c1f04082478c\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-g57p5" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.744943 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv48h\" (UniqueName: \"kubernetes.io/projected/29f7e558-2f35-4fbe-b29a-c1f04082478c-kube-api-access-pv48h\") pod \"nmstate-webhook-786f45cff4-g57p5\" (UID: \"29f7e558-2f35-4fbe-b29a-c1f04082478c\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-g57p5" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.744979 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztwm2\" (UniqueName: \"kubernetes.io/projected/da33d221-fd03-423a-a7ac-6f74130cb62a-kube-api-access-ztwm2\") pod \"nmstate-handler-7c5lt\" (UID: \"da33d221-fd03-423a-a7ac-6f74130cb62a\") " pod="openshift-nmstate/nmstate-handler-7c5lt" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.744998 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft26r\" (UniqueName: 
\"kubernetes.io/projected/990d9775-20ab-4a75-b035-14be139e1c86-kube-api-access-ft26r\") pod \"nmstate-metrics-69594cc75-hjdg5\" (UID: \"990d9775-20ab-4a75-b035-14be139e1c86\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-hjdg5" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.745064 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/da33d221-fd03-423a-a7ac-6f74130cb62a-ovs-socket\") pod \"nmstate-handler-7c5lt\" (UID: \"da33d221-fd03-423a-a7ac-6f74130cb62a\") " pod="openshift-nmstate/nmstate-handler-7c5lt" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.745100 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/da33d221-fd03-423a-a7ac-6f74130cb62a-dbus-socket\") pod \"nmstate-handler-7c5lt\" (UID: \"da33d221-fd03-423a-a7ac-6f74130cb62a\") " pod="openshift-nmstate/nmstate-handler-7c5lt" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.747119 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.748405 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.748563 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-txdw2" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.751347 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5z45x"] Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.851043 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4ebece5c-93ed-4dc8-bbaf-53682b6f95a5-nginx-conf\") pod 
\"nmstate-console-plugin-5dcbbd79cf-5z45x\" (UID: \"4ebece5c-93ed-4dc8-bbaf-53682b6f95a5\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5z45x" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.851124 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/da33d221-fd03-423a-a7ac-6f74130cb62a-nmstate-lock\") pod \"nmstate-handler-7c5lt\" (UID: \"da33d221-fd03-423a-a7ac-6f74130cb62a\") " pod="openshift-nmstate/nmstate-handler-7c5lt" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.851176 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/29f7e558-2f35-4fbe-b29a-c1f04082478c-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-g57p5\" (UID: \"29f7e558-2f35-4fbe-b29a-c1f04082478c\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-g57p5" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.851205 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwpmk\" (UniqueName: \"kubernetes.io/projected/4ebece5c-93ed-4dc8-bbaf-53682b6f95a5-kube-api-access-kwpmk\") pod \"nmstate-console-plugin-5dcbbd79cf-5z45x\" (UID: \"4ebece5c-93ed-4dc8-bbaf-53682b6f95a5\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5z45x" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.851221 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/da33d221-fd03-423a-a7ac-6f74130cb62a-nmstate-lock\") pod \"nmstate-handler-7c5lt\" (UID: \"da33d221-fd03-423a-a7ac-6f74130cb62a\") " pod="openshift-nmstate/nmstate-handler-7c5lt" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.851232 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv48h\" (UniqueName: 
\"kubernetes.io/projected/29f7e558-2f35-4fbe-b29a-c1f04082478c-kube-api-access-pv48h\") pod \"nmstate-webhook-786f45cff4-g57p5\" (UID: \"29f7e558-2f35-4fbe-b29a-c1f04082478c\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-g57p5" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.851298 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztwm2\" (UniqueName: \"kubernetes.io/projected/da33d221-fd03-423a-a7ac-6f74130cb62a-kube-api-access-ztwm2\") pod \"nmstate-handler-7c5lt\" (UID: \"da33d221-fd03-423a-a7ac-6f74130cb62a\") " pod="openshift-nmstate/nmstate-handler-7c5lt" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.851321 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft26r\" (UniqueName: \"kubernetes.io/projected/990d9775-20ab-4a75-b035-14be139e1c86-kube-api-access-ft26r\") pod \"nmstate-metrics-69594cc75-hjdg5\" (UID: \"990d9775-20ab-4a75-b035-14be139e1c86\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-hjdg5" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.851350 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ebece5c-93ed-4dc8-bbaf-53682b6f95a5-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-5z45x\" (UID: \"4ebece5c-93ed-4dc8-bbaf-53682b6f95a5\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5z45x" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.851373 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/da33d221-fd03-423a-a7ac-6f74130cb62a-ovs-socket\") pod \"nmstate-handler-7c5lt\" (UID: \"da33d221-fd03-423a-a7ac-6f74130cb62a\") " pod="openshift-nmstate/nmstate-handler-7c5lt" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.851399 4910 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/da33d221-fd03-423a-a7ac-6f74130cb62a-dbus-socket\") pod \"nmstate-handler-7c5lt\" (UID: \"da33d221-fd03-423a-a7ac-6f74130cb62a\") " pod="openshift-nmstate/nmstate-handler-7c5lt" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.851603 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/da33d221-fd03-423a-a7ac-6f74130cb62a-dbus-socket\") pod \"nmstate-handler-7c5lt\" (UID: \"da33d221-fd03-423a-a7ac-6f74130cb62a\") " pod="openshift-nmstate/nmstate-handler-7c5lt" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.851646 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/da33d221-fd03-423a-a7ac-6f74130cb62a-ovs-socket\") pod \"nmstate-handler-7c5lt\" (UID: \"da33d221-fd03-423a-a7ac-6f74130cb62a\") " pod="openshift-nmstate/nmstate-handler-7c5lt" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.856546 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/29f7e558-2f35-4fbe-b29a-c1f04082478c-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-g57p5\" (UID: \"29f7e558-2f35-4fbe-b29a-c1f04082478c\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-g57p5" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.867085 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft26r\" (UniqueName: \"kubernetes.io/projected/990d9775-20ab-4a75-b035-14be139e1c86-kube-api-access-ft26r\") pod \"nmstate-metrics-69594cc75-hjdg5\" (UID: \"990d9775-20ab-4a75-b035-14be139e1c86\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-hjdg5" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.870082 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv48h\" (UniqueName: 
\"kubernetes.io/projected/29f7e558-2f35-4fbe-b29a-c1f04082478c-kube-api-access-pv48h\") pod \"nmstate-webhook-786f45cff4-g57p5\" (UID: \"29f7e558-2f35-4fbe-b29a-c1f04082478c\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-g57p5" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.877110 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztwm2\" (UniqueName: \"kubernetes.io/projected/da33d221-fd03-423a-a7ac-6f74130cb62a-kube-api-access-ztwm2\") pod \"nmstate-handler-7c5lt\" (UID: \"da33d221-fd03-423a-a7ac-6f74130cb62a\") " pod="openshift-nmstate/nmstate-handler-7c5lt" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.897242 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-hjdg5" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.907329 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-g57p5" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.950476 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-7c5lt" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.952496 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwpmk\" (UniqueName: \"kubernetes.io/projected/4ebece5c-93ed-4dc8-bbaf-53682b6f95a5-kube-api-access-kwpmk\") pod \"nmstate-console-plugin-5dcbbd79cf-5z45x\" (UID: \"4ebece5c-93ed-4dc8-bbaf-53682b6f95a5\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5z45x" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.952567 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ebece5c-93ed-4dc8-bbaf-53682b6f95a5-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-5z45x\" (UID: \"4ebece5c-93ed-4dc8-bbaf-53682b6f95a5\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5z45x" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.952617 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4ebece5c-93ed-4dc8-bbaf-53682b6f95a5-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-5z45x\" (UID: \"4ebece5c-93ed-4dc8-bbaf-53682b6f95a5\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5z45x" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.954889 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4ebece5c-93ed-4dc8-bbaf-53682b6f95a5-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-5z45x\" (UID: \"4ebece5c-93ed-4dc8-bbaf-53682b6f95a5\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5z45x" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.956325 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-665565fd55-thtlc"] Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.957111 4910 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-665565fd55-thtlc" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.958686 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ebece5c-93ed-4dc8-bbaf-53682b6f95a5-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-5z45x\" (UID: \"4ebece5c-93ed-4dc8-bbaf-53682b6f95a5\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5z45x" Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.971474 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-665565fd55-thtlc"] Feb 26 22:11:24 crc kubenswrapper[4910]: I0226 22:11:24.981599 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwpmk\" (UniqueName: \"kubernetes.io/projected/4ebece5c-93ed-4dc8-bbaf-53682b6f95a5-kube-api-access-kwpmk\") pod \"nmstate-console-plugin-5dcbbd79cf-5z45x\" (UID: \"4ebece5c-93ed-4dc8-bbaf-53682b6f95a5\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5z45x" Feb 26 22:11:25 crc kubenswrapper[4910]: W0226 22:11:25.002625 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda33d221_fd03_423a_a7ac_6f74130cb62a.slice/crio-83e29e66eb229db2a0f525aecd4ece6c8caab6cb28036af55748b69f3cbdbcfd WatchSource:0}: Error finding container 83e29e66eb229db2a0f525aecd4ece6c8caab6cb28036af55748b69f3cbdbcfd: Status 404 returned error can't find the container with id 83e29e66eb229db2a0f525aecd4ece6c8caab6cb28036af55748b69f3cbdbcfd Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.061613 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5z45x" Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.155300 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/728b39d9-f048-4850-aaa5-064d02eadfe8-oauth-serving-cert\") pod \"console-665565fd55-thtlc\" (UID: \"728b39d9-f048-4850-aaa5-064d02eadfe8\") " pod="openshift-console/console-665565fd55-thtlc" Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.155357 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/728b39d9-f048-4850-aaa5-064d02eadfe8-console-config\") pod \"console-665565fd55-thtlc\" (UID: \"728b39d9-f048-4850-aaa5-064d02eadfe8\") " pod="openshift-console/console-665565fd55-thtlc" Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.155378 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/728b39d9-f048-4850-aaa5-064d02eadfe8-console-serving-cert\") pod \"console-665565fd55-thtlc\" (UID: \"728b39d9-f048-4850-aaa5-064d02eadfe8\") " pod="openshift-console/console-665565fd55-thtlc" Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.155405 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/728b39d9-f048-4850-aaa5-064d02eadfe8-trusted-ca-bundle\") pod \"console-665565fd55-thtlc\" (UID: \"728b39d9-f048-4850-aaa5-064d02eadfe8\") " pod="openshift-console/console-665565fd55-thtlc" Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.155422 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/728b39d9-f048-4850-aaa5-064d02eadfe8-console-oauth-config\") pod \"console-665565fd55-thtlc\" (UID: \"728b39d9-f048-4850-aaa5-064d02eadfe8\") " pod="openshift-console/console-665565fd55-thtlc" Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.155461 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng4ch\" (UniqueName: \"kubernetes.io/projected/728b39d9-f048-4850-aaa5-064d02eadfe8-kube-api-access-ng4ch\") pod \"console-665565fd55-thtlc\" (UID: \"728b39d9-f048-4850-aaa5-064d02eadfe8\") " pod="openshift-console/console-665565fd55-thtlc" Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.155493 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/728b39d9-f048-4850-aaa5-064d02eadfe8-service-ca\") pod \"console-665565fd55-thtlc\" (UID: \"728b39d9-f048-4850-aaa5-064d02eadfe8\") " pod="openshift-console/console-665565fd55-thtlc" Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.177659 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-hjdg5"] Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.190738 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-g57p5"] Feb 26 22:11:25 crc kubenswrapper[4910]: W0226 22:11:25.195356 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29f7e558_2f35_4fbe_b29a_c1f04082478c.slice/crio-61280aad5bbf682cb300281c2b9dcc604b8273d5bc82ee57e30db0f9a360768e WatchSource:0}: Error finding container 61280aad5bbf682cb300281c2b9dcc604b8273d5bc82ee57e30db0f9a360768e: Status 404 returned error can't find the container with id 61280aad5bbf682cb300281c2b9dcc604b8273d5bc82ee57e30db0f9a360768e Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.258343 4910 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/728b39d9-f048-4850-aaa5-064d02eadfe8-service-ca\") pod \"console-665565fd55-thtlc\" (UID: \"728b39d9-f048-4850-aaa5-064d02eadfe8\") " pod="openshift-console/console-665565fd55-thtlc" Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.258395 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/728b39d9-f048-4850-aaa5-064d02eadfe8-oauth-serving-cert\") pod \"console-665565fd55-thtlc\" (UID: \"728b39d9-f048-4850-aaa5-064d02eadfe8\") " pod="openshift-console/console-665565fd55-thtlc" Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.258434 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/728b39d9-f048-4850-aaa5-064d02eadfe8-console-config\") pod \"console-665565fd55-thtlc\" (UID: \"728b39d9-f048-4850-aaa5-064d02eadfe8\") " pod="openshift-console/console-665565fd55-thtlc" Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.258455 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/728b39d9-f048-4850-aaa5-064d02eadfe8-console-serving-cert\") pod \"console-665565fd55-thtlc\" (UID: \"728b39d9-f048-4850-aaa5-064d02eadfe8\") " pod="openshift-console/console-665565fd55-thtlc" Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.258478 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/728b39d9-f048-4850-aaa5-064d02eadfe8-trusted-ca-bundle\") pod \"console-665565fd55-thtlc\" (UID: \"728b39d9-f048-4850-aaa5-064d02eadfe8\") " pod="openshift-console/console-665565fd55-thtlc" Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.258498 4910 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/728b39d9-f048-4850-aaa5-064d02eadfe8-console-oauth-config\") pod \"console-665565fd55-thtlc\" (UID: \"728b39d9-f048-4850-aaa5-064d02eadfe8\") " pod="openshift-console/console-665565fd55-thtlc" Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.258542 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng4ch\" (UniqueName: \"kubernetes.io/projected/728b39d9-f048-4850-aaa5-064d02eadfe8-kube-api-access-ng4ch\") pod \"console-665565fd55-thtlc\" (UID: \"728b39d9-f048-4850-aaa5-064d02eadfe8\") " pod="openshift-console/console-665565fd55-thtlc" Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.259323 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/728b39d9-f048-4850-aaa5-064d02eadfe8-service-ca\") pod \"console-665565fd55-thtlc\" (UID: \"728b39d9-f048-4850-aaa5-064d02eadfe8\") " pod="openshift-console/console-665565fd55-thtlc" Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.259377 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/728b39d9-f048-4850-aaa5-064d02eadfe8-console-config\") pod \"console-665565fd55-thtlc\" (UID: \"728b39d9-f048-4850-aaa5-064d02eadfe8\") " pod="openshift-console/console-665565fd55-thtlc" Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.260058 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/728b39d9-f048-4850-aaa5-064d02eadfe8-oauth-serving-cert\") pod \"console-665565fd55-thtlc\" (UID: \"728b39d9-f048-4850-aaa5-064d02eadfe8\") " pod="openshift-console/console-665565fd55-thtlc" Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.260385 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/728b39d9-f048-4850-aaa5-064d02eadfe8-trusted-ca-bundle\") pod \"console-665565fd55-thtlc\" (UID: \"728b39d9-f048-4850-aaa5-064d02eadfe8\") " pod="openshift-console/console-665565fd55-thtlc" Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.262269 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/728b39d9-f048-4850-aaa5-064d02eadfe8-console-serving-cert\") pod \"console-665565fd55-thtlc\" (UID: \"728b39d9-f048-4850-aaa5-064d02eadfe8\") " pod="openshift-console/console-665565fd55-thtlc" Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.262336 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/728b39d9-f048-4850-aaa5-064d02eadfe8-console-oauth-config\") pod \"console-665565fd55-thtlc\" (UID: \"728b39d9-f048-4850-aaa5-064d02eadfe8\") " pod="openshift-console/console-665565fd55-thtlc" Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.272361 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng4ch\" (UniqueName: \"kubernetes.io/projected/728b39d9-f048-4850-aaa5-064d02eadfe8-kube-api-access-ng4ch\") pod \"console-665565fd55-thtlc\" (UID: \"728b39d9-f048-4850-aaa5-064d02eadfe8\") " pod="openshift-console/console-665565fd55-thtlc" Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.297192 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-665565fd55-thtlc" Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.473450 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5z45x"] Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.493908 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-hjdg5" event={"ID":"990d9775-20ab-4a75-b035-14be139e1c86","Type":"ContainerStarted","Data":"34e653654693ff02dfb355e336e93561a675e86a9a31ef90495abb9690e4deee"} Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.494993 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-g57p5" event={"ID":"29f7e558-2f35-4fbe-b29a-c1f04082478c","Type":"ContainerStarted","Data":"61280aad5bbf682cb300281c2b9dcc604b8273d5bc82ee57e30db0f9a360768e"} Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.497576 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-7c5lt" event={"ID":"da33d221-fd03-423a-a7ac-6f74130cb62a","Type":"ContainerStarted","Data":"83e29e66eb229db2a0f525aecd4ece6c8caab6cb28036af55748b69f3cbdbcfd"} Feb 26 22:11:25 crc kubenswrapper[4910]: W0226 22:11:25.497663 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ebece5c_93ed_4dc8_bbaf_53682b6f95a5.slice/crio-5cb4c0475bd457ee591ca4f6e89f6cc94f7cf4cc7fb74b77c9d5ee1d602f814e WatchSource:0}: Error finding container 5cb4c0475bd457ee591ca4f6e89f6cc94f7cf4cc7fb74b77c9d5ee1d602f814e: Status 404 returned error can't find the container with id 5cb4c0475bd457ee591ca4f6e89f6cc94f7cf4cc7fb74b77c9d5ee1d602f814e Feb 26 22:11:25 crc kubenswrapper[4910]: I0226 22:11:25.548716 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-665565fd55-thtlc"] Feb 26 22:11:25 crc kubenswrapper[4910]: W0226 22:11:25.553486 
4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod728b39d9_f048_4850_aaa5_064d02eadfe8.slice/crio-4b8efecc638b236eb101a90d9b9046f869b5fc3a9ea858b029405351369b954d WatchSource:0}: Error finding container 4b8efecc638b236eb101a90d9b9046f869b5fc3a9ea858b029405351369b954d: Status 404 returned error can't find the container with id 4b8efecc638b236eb101a90d9b9046f869b5fc3a9ea858b029405351369b954d Feb 26 22:11:26 crc kubenswrapper[4910]: I0226 22:11:26.512760 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5z45x" event={"ID":"4ebece5c-93ed-4dc8-bbaf-53682b6f95a5","Type":"ContainerStarted","Data":"5cb4c0475bd457ee591ca4f6e89f6cc94f7cf4cc7fb74b77c9d5ee1d602f814e"} Feb 26 22:11:26 crc kubenswrapper[4910]: I0226 22:11:26.516022 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-665565fd55-thtlc" event={"ID":"728b39d9-f048-4850-aaa5-064d02eadfe8","Type":"ContainerStarted","Data":"84459fc50175ec1fcece6092946d5b76205c13d0a3a418c859c7c4b2e7cf6050"} Feb 26 22:11:26 crc kubenswrapper[4910]: I0226 22:11:26.516100 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-665565fd55-thtlc" event={"ID":"728b39d9-f048-4850-aaa5-064d02eadfe8","Type":"ContainerStarted","Data":"4b8efecc638b236eb101a90d9b9046f869b5fc3a9ea858b029405351369b954d"} Feb 26 22:11:29 crc kubenswrapper[4910]: I0226 22:11:29.555560 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5z45x" event={"ID":"4ebece5c-93ed-4dc8-bbaf-53682b6f95a5","Type":"ContainerStarted","Data":"6ff663c16cd2960a874bec781d03b37d6d946199ad732cd25c8f0a502f12eece"} Feb 26 22:11:29 crc kubenswrapper[4910]: I0226 22:11:29.558742 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-7c5lt" 
event={"ID":"da33d221-fd03-423a-a7ac-6f74130cb62a","Type":"ContainerStarted","Data":"7f89565dccc0c23e4b0466d8c96722dd240ce507441a01d67267c00b85bab235"} Feb 26 22:11:29 crc kubenswrapper[4910]: I0226 22:11:29.558905 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-7c5lt" Feb 26 22:11:29 crc kubenswrapper[4910]: I0226 22:11:29.560873 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-hjdg5" event={"ID":"990d9775-20ab-4a75-b035-14be139e1c86","Type":"ContainerStarted","Data":"5f8aeef1eac31a036c8931c296f0879ba902e6559e043176f8bc35f04608636c"} Feb 26 22:11:29 crc kubenswrapper[4910]: I0226 22:11:29.563611 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-g57p5" event={"ID":"29f7e558-2f35-4fbe-b29a-c1f04082478c","Type":"ContainerStarted","Data":"900104949c2537644a30119a76276c79b2aff6fa6c133410000f6ab7dd902d4c"} Feb 26 22:11:29 crc kubenswrapper[4910]: I0226 22:11:29.563795 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-786f45cff4-g57p5" Feb 26 22:11:29 crc kubenswrapper[4910]: I0226 22:11:29.576740 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-665565fd55-thtlc" podStartSLOduration=5.576721681 podStartE2EDuration="5.576721681s" podCreationTimestamp="2026-02-26 22:11:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:11:26.551010803 +0000 UTC m=+971.630501384" watchObservedRunningTime="2026-02-26 22:11:29.576721681 +0000 UTC m=+974.656212232" Feb 26 22:11:29 crc kubenswrapper[4910]: I0226 22:11:29.584936 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-5z45x" podStartSLOduration=2.612305285 
podStartE2EDuration="5.584914923s" podCreationTimestamp="2026-02-26 22:11:24 +0000 UTC" firstStartedPulling="2026-02-26 22:11:25.500908107 +0000 UTC m=+970.580398648" lastFinishedPulling="2026-02-26 22:11:28.473517745 +0000 UTC m=+973.553008286" observedRunningTime="2026-02-26 22:11:29.575377115 +0000 UTC m=+974.654867686" watchObservedRunningTime="2026-02-26 22:11:29.584914923 +0000 UTC m=+974.664405474" Feb 26 22:11:29 crc kubenswrapper[4910]: I0226 22:11:29.604565 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-786f45cff4-g57p5" podStartSLOduration=2.33027741 podStartE2EDuration="5.604520434s" podCreationTimestamp="2026-02-26 22:11:24 +0000 UTC" firstStartedPulling="2026-02-26 22:11:25.198611554 +0000 UTC m=+970.278102095" lastFinishedPulling="2026-02-26 22:11:28.472854578 +0000 UTC m=+973.552345119" observedRunningTime="2026-02-26 22:11:29.602352445 +0000 UTC m=+974.681843066" watchObservedRunningTime="2026-02-26 22:11:29.604520434 +0000 UTC m=+974.684010975" Feb 26 22:11:29 crc kubenswrapper[4910]: I0226 22:11:29.623378 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-7c5lt" podStartSLOduration=2.146531377 podStartE2EDuration="5.623350223s" podCreationTimestamp="2026-02-26 22:11:24 +0000 UTC" firstStartedPulling="2026-02-26 22:11:25.004781668 +0000 UTC m=+970.084272209" lastFinishedPulling="2026-02-26 22:11:28.481600514 +0000 UTC m=+973.561091055" observedRunningTime="2026-02-26 22:11:29.618606165 +0000 UTC m=+974.698096736" watchObservedRunningTime="2026-02-26 22:11:29.623350223 +0000 UTC m=+974.702840784" Feb 26 22:11:32 crc kubenswrapper[4910]: I0226 22:11:32.587391 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-hjdg5" event={"ID":"990d9775-20ab-4a75-b035-14be139e1c86","Type":"ContainerStarted","Data":"1cb0c9f6b7240dd28b0cf39a7097f0c16b062d6f84ea9667d1b17e4e6162a88b"} Feb 26 
22:11:32 crc kubenswrapper[4910]: I0226 22:11:32.616037 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-69594cc75-hjdg5" podStartSLOduration=2.2300621290000002 podStartE2EDuration="8.616006583s" podCreationTimestamp="2026-02-26 22:11:24 +0000 UTC" firstStartedPulling="2026-02-26 22:11:25.188294315 +0000 UTC m=+970.267784856" lastFinishedPulling="2026-02-26 22:11:31.574238739 +0000 UTC m=+976.653729310" observedRunningTime="2026-02-26 22:11:32.610697859 +0000 UTC m=+977.690188440" watchObservedRunningTime="2026-02-26 22:11:32.616006583 +0000 UTC m=+977.695497164" Feb 26 22:11:34 crc kubenswrapper[4910]: I0226 22:11:34.991441 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-7c5lt" Feb 26 22:11:35 crc kubenswrapper[4910]: I0226 22:11:35.298020 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-665565fd55-thtlc" Feb 26 22:11:35 crc kubenswrapper[4910]: I0226 22:11:35.298083 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-665565fd55-thtlc" Feb 26 22:11:35 crc kubenswrapper[4910]: I0226 22:11:35.306048 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-665565fd55-thtlc" Feb 26 22:11:35 crc kubenswrapper[4910]: I0226 22:11:35.620256 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-665565fd55-thtlc" Feb 26 22:11:35 crc kubenswrapper[4910]: I0226 22:11:35.711065 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-kj9s2"] Feb 26 22:11:37 crc kubenswrapper[4910]: I0226 22:11:37.526261 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l9n7r"] Feb 26 22:11:37 crc kubenswrapper[4910]: I0226 22:11:37.528085 4910 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l9n7r" Feb 26 22:11:37 crc kubenswrapper[4910]: I0226 22:11:37.557544 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l9n7r"] Feb 26 22:11:37 crc kubenswrapper[4910]: I0226 22:11:37.651042 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c20dca05-e6f7-4539-9eb8-cd9dbba8f731-catalog-content\") pod \"certified-operators-l9n7r\" (UID: \"c20dca05-e6f7-4539-9eb8-cd9dbba8f731\") " pod="openshift-marketplace/certified-operators-l9n7r" Feb 26 22:11:37 crc kubenswrapper[4910]: I0226 22:11:37.651135 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqfpf\" (UniqueName: \"kubernetes.io/projected/c20dca05-e6f7-4539-9eb8-cd9dbba8f731-kube-api-access-xqfpf\") pod \"certified-operators-l9n7r\" (UID: \"c20dca05-e6f7-4539-9eb8-cd9dbba8f731\") " pod="openshift-marketplace/certified-operators-l9n7r" Feb 26 22:11:37 crc kubenswrapper[4910]: I0226 22:11:37.651217 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c20dca05-e6f7-4539-9eb8-cd9dbba8f731-utilities\") pod \"certified-operators-l9n7r\" (UID: \"c20dca05-e6f7-4539-9eb8-cd9dbba8f731\") " pod="openshift-marketplace/certified-operators-l9n7r" Feb 26 22:11:37 crc kubenswrapper[4910]: I0226 22:11:37.753149 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c20dca05-e6f7-4539-9eb8-cd9dbba8f731-utilities\") pod \"certified-operators-l9n7r\" (UID: \"c20dca05-e6f7-4539-9eb8-cd9dbba8f731\") " pod="openshift-marketplace/certified-operators-l9n7r" Feb 26 22:11:37 crc kubenswrapper[4910]: I0226 22:11:37.753341 4910 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c20dca05-e6f7-4539-9eb8-cd9dbba8f731-catalog-content\") pod \"certified-operators-l9n7r\" (UID: \"c20dca05-e6f7-4539-9eb8-cd9dbba8f731\") " pod="openshift-marketplace/certified-operators-l9n7r" Feb 26 22:11:37 crc kubenswrapper[4910]: I0226 22:11:37.753403 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqfpf\" (UniqueName: \"kubernetes.io/projected/c20dca05-e6f7-4539-9eb8-cd9dbba8f731-kube-api-access-xqfpf\") pod \"certified-operators-l9n7r\" (UID: \"c20dca05-e6f7-4539-9eb8-cd9dbba8f731\") " pod="openshift-marketplace/certified-operators-l9n7r" Feb 26 22:11:37 crc kubenswrapper[4910]: I0226 22:11:37.753964 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c20dca05-e6f7-4539-9eb8-cd9dbba8f731-utilities\") pod \"certified-operators-l9n7r\" (UID: \"c20dca05-e6f7-4539-9eb8-cd9dbba8f731\") " pod="openshift-marketplace/certified-operators-l9n7r" Feb 26 22:11:37 crc kubenswrapper[4910]: I0226 22:11:37.754025 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c20dca05-e6f7-4539-9eb8-cd9dbba8f731-catalog-content\") pod \"certified-operators-l9n7r\" (UID: \"c20dca05-e6f7-4539-9eb8-cd9dbba8f731\") " pod="openshift-marketplace/certified-operators-l9n7r" Feb 26 22:11:37 crc kubenswrapper[4910]: I0226 22:11:37.782318 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqfpf\" (UniqueName: \"kubernetes.io/projected/c20dca05-e6f7-4539-9eb8-cd9dbba8f731-kube-api-access-xqfpf\") pod \"certified-operators-l9n7r\" (UID: \"c20dca05-e6f7-4539-9eb8-cd9dbba8f731\") " pod="openshift-marketplace/certified-operators-l9n7r" Feb 26 22:11:37 crc kubenswrapper[4910]: I0226 22:11:37.860933 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l9n7r"
Feb 26 22:11:38 crc kubenswrapper[4910]: I0226 22:11:38.326532 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l9n7r"]
Feb 26 22:11:38 crc kubenswrapper[4910]: I0226 22:11:38.642790 4910 generic.go:334] "Generic (PLEG): container finished" podID="c20dca05-e6f7-4539-9eb8-cd9dbba8f731" containerID="75864395721cb849969ddd876c51c2aa1c66b36f2e65b2548cbd5ee830b660a1" exitCode=0
Feb 26 22:11:38 crc kubenswrapper[4910]: I0226 22:11:38.642892 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9n7r" event={"ID":"c20dca05-e6f7-4539-9eb8-cd9dbba8f731","Type":"ContainerDied","Data":"75864395721cb849969ddd876c51c2aa1c66b36f2e65b2548cbd5ee830b660a1"}
Feb 26 22:11:38 crc kubenswrapper[4910]: I0226 22:11:38.643268 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9n7r" event={"ID":"c20dca05-e6f7-4539-9eb8-cd9dbba8f731","Type":"ContainerStarted","Data":"c03d0b225e63fcf9e5211c3fe9a490083680fdea83605bb87a82f3e6faf55d99"}
Feb 26 22:11:39 crc kubenswrapper[4910]: I0226 22:11:39.658073 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9n7r" event={"ID":"c20dca05-e6f7-4539-9eb8-cd9dbba8f731","Type":"ContainerStarted","Data":"a5bf6bd7950ca4dea54a6fdfefb041f896bb68a8cbef60c2864c0bfc818945a0"}
Feb 26 22:11:40 crc kubenswrapper[4910]: I0226 22:11:40.671451 4910 generic.go:334] "Generic (PLEG): container finished" podID="c20dca05-e6f7-4539-9eb8-cd9dbba8f731" containerID="a5bf6bd7950ca4dea54a6fdfefb041f896bb68a8cbef60c2864c0bfc818945a0" exitCode=0
Feb 26 22:11:40 crc kubenswrapper[4910]: I0226 22:11:40.671530 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9n7r" event={"ID":"c20dca05-e6f7-4539-9eb8-cd9dbba8f731","Type":"ContainerDied","Data":"a5bf6bd7950ca4dea54a6fdfefb041f896bb68a8cbef60c2864c0bfc818945a0"}
Feb 26 22:11:41 crc kubenswrapper[4910]: I0226 22:11:41.682515 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9n7r" event={"ID":"c20dca05-e6f7-4539-9eb8-cd9dbba8f731","Type":"ContainerStarted","Data":"6f31ed051b731618e5555e7c4597b7b80f30cff50b9964c5a40bcf463236d88e"}
Feb 26 22:11:41 crc kubenswrapper[4910]: I0226 22:11:41.714968 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l9n7r" podStartSLOduration=2.25814564 podStartE2EDuration="4.714949502s" podCreationTimestamp="2026-02-26 22:11:37 +0000 UTC" firstStartedPulling="2026-02-26 22:11:38.645234128 +0000 UTC m=+983.724724699" lastFinishedPulling="2026-02-26 22:11:41.10203798 +0000 UTC m=+986.181528561" observedRunningTime="2026-02-26 22:11:41.713428101 +0000 UTC m=+986.792918682" watchObservedRunningTime="2026-02-26 22:11:41.714949502 +0000 UTC m=+986.794440053"
Feb 26 22:11:44 crc kubenswrapper[4910]: I0226 22:11:44.916653 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-786f45cff4-g57p5"
Feb 26 22:11:47 crc kubenswrapper[4910]: I0226 22:11:47.861074 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l9n7r"
Feb 26 22:11:47 crc kubenswrapper[4910]: I0226 22:11:47.861130 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l9n7r"
Feb 26 22:11:47 crc kubenswrapper[4910]: I0226 22:11:47.920540 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l9n7r"
Feb 26 22:11:48 crc kubenswrapper[4910]: I0226 22:11:48.796583 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l9n7r"
Feb 26 22:11:48 crc kubenswrapper[4910]: I0226 22:11:48.849738 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l9n7r"]
Feb 26 22:11:50 crc kubenswrapper[4910]: I0226 22:11:50.751471 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l9n7r" podUID="c20dca05-e6f7-4539-9eb8-cd9dbba8f731" containerName="registry-server" containerID="cri-o://6f31ed051b731618e5555e7c4597b7b80f30cff50b9964c5a40bcf463236d88e" gracePeriod=2
Feb 26 22:11:51 crc kubenswrapper[4910]: I0226 22:11:51.208426 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l9n7r"
Feb 26 22:11:51 crc kubenswrapper[4910]: I0226 22:11:51.362727 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c20dca05-e6f7-4539-9eb8-cd9dbba8f731-catalog-content\") pod \"c20dca05-e6f7-4539-9eb8-cd9dbba8f731\" (UID: \"c20dca05-e6f7-4539-9eb8-cd9dbba8f731\") "
Feb 26 22:11:51 crc kubenswrapper[4910]: I0226 22:11:51.362924 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c20dca05-e6f7-4539-9eb8-cd9dbba8f731-utilities\") pod \"c20dca05-e6f7-4539-9eb8-cd9dbba8f731\" (UID: \"c20dca05-e6f7-4539-9eb8-cd9dbba8f731\") "
Feb 26 22:11:51 crc kubenswrapper[4910]: I0226 22:11:51.363010 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqfpf\" (UniqueName: \"kubernetes.io/projected/c20dca05-e6f7-4539-9eb8-cd9dbba8f731-kube-api-access-xqfpf\") pod \"c20dca05-e6f7-4539-9eb8-cd9dbba8f731\" (UID: \"c20dca05-e6f7-4539-9eb8-cd9dbba8f731\") "
Feb 26 22:11:51 crc kubenswrapper[4910]: I0226 22:11:51.366445 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c20dca05-e6f7-4539-9eb8-cd9dbba8f731-utilities" (OuterVolumeSpecName: "utilities") pod "c20dca05-e6f7-4539-9eb8-cd9dbba8f731" (UID: "c20dca05-e6f7-4539-9eb8-cd9dbba8f731"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 22:11:51 crc kubenswrapper[4910]: I0226 22:11:51.372418 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c20dca05-e6f7-4539-9eb8-cd9dbba8f731-kube-api-access-xqfpf" (OuterVolumeSpecName: "kube-api-access-xqfpf") pod "c20dca05-e6f7-4539-9eb8-cd9dbba8f731" (UID: "c20dca05-e6f7-4539-9eb8-cd9dbba8f731"). InnerVolumeSpecName "kube-api-access-xqfpf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 22:11:51 crc kubenswrapper[4910]: I0226 22:11:51.423479 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c20dca05-e6f7-4539-9eb8-cd9dbba8f731-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c20dca05-e6f7-4539-9eb8-cd9dbba8f731" (UID: "c20dca05-e6f7-4539-9eb8-cd9dbba8f731"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 22:11:51 crc kubenswrapper[4910]: I0226 22:11:51.465974 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c20dca05-e6f7-4539-9eb8-cd9dbba8f731-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 22:11:51 crc kubenswrapper[4910]: I0226 22:11:51.466042 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c20dca05-e6f7-4539-9eb8-cd9dbba8f731-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 22:11:51 crc kubenswrapper[4910]: I0226 22:11:51.466070 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqfpf\" (UniqueName: \"kubernetes.io/projected/c20dca05-e6f7-4539-9eb8-cd9dbba8f731-kube-api-access-xqfpf\") on node \"crc\" DevicePath \"\""
Feb 26 22:11:51 crc kubenswrapper[4910]: I0226 22:11:51.763352 4910 generic.go:334] "Generic (PLEG): container finished" podID="c20dca05-e6f7-4539-9eb8-cd9dbba8f731" containerID="6f31ed051b731618e5555e7c4597b7b80f30cff50b9964c5a40bcf463236d88e" exitCode=0
Feb 26 22:11:51 crc kubenswrapper[4910]: I0226 22:11:51.763410 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9n7r" event={"ID":"c20dca05-e6f7-4539-9eb8-cd9dbba8f731","Type":"ContainerDied","Data":"6f31ed051b731618e5555e7c4597b7b80f30cff50b9964c5a40bcf463236d88e"}
Feb 26 22:11:51 crc kubenswrapper[4910]: I0226 22:11:51.763440 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9n7r" event={"ID":"c20dca05-e6f7-4539-9eb8-cd9dbba8f731","Type":"ContainerDied","Data":"c03d0b225e63fcf9e5211c3fe9a490083680fdea83605bb87a82f3e6faf55d99"}
Feb 26 22:11:51 crc kubenswrapper[4910]: I0226 22:11:51.763464 4910 scope.go:117] "RemoveContainer" containerID="6f31ed051b731618e5555e7c4597b7b80f30cff50b9964c5a40bcf463236d88e"
Feb 26 22:11:51 crc kubenswrapper[4910]: I0226 22:11:51.763468 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l9n7r"
Feb 26 22:11:51 crc kubenswrapper[4910]: I0226 22:11:51.783781 4910 scope.go:117] "RemoveContainer" containerID="a5bf6bd7950ca4dea54a6fdfefb041f896bb68a8cbef60c2864c0bfc818945a0"
Feb 26 22:11:51 crc kubenswrapper[4910]: I0226 22:11:51.809339 4910 scope.go:117] "RemoveContainer" containerID="75864395721cb849969ddd876c51c2aa1c66b36f2e65b2548cbd5ee830b660a1"
Feb 26 22:11:51 crc kubenswrapper[4910]: I0226 22:11:51.819812 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l9n7r"]
Feb 26 22:11:51 crc kubenswrapper[4910]: I0226 22:11:51.831675 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l9n7r"]
Feb 26 22:11:51 crc kubenswrapper[4910]: I0226 22:11:51.842123 4910 scope.go:117] "RemoveContainer" containerID="6f31ed051b731618e5555e7c4597b7b80f30cff50b9964c5a40bcf463236d88e"
Feb 26 22:11:51 crc kubenswrapper[4910]: E0226 22:11:51.848358 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f31ed051b731618e5555e7c4597b7b80f30cff50b9964c5a40bcf463236d88e\": container with ID starting with 6f31ed051b731618e5555e7c4597b7b80f30cff50b9964c5a40bcf463236d88e not found: ID does not exist" containerID="6f31ed051b731618e5555e7c4597b7b80f30cff50b9964c5a40bcf463236d88e"
Feb 26 22:11:51 crc kubenswrapper[4910]: I0226 22:11:51.848407 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f31ed051b731618e5555e7c4597b7b80f30cff50b9964c5a40bcf463236d88e"} err="failed to get container status \"6f31ed051b731618e5555e7c4597b7b80f30cff50b9964c5a40bcf463236d88e\": rpc error: code = NotFound desc = could not find container \"6f31ed051b731618e5555e7c4597b7b80f30cff50b9964c5a40bcf463236d88e\": container with ID starting with 6f31ed051b731618e5555e7c4597b7b80f30cff50b9964c5a40bcf463236d88e not found: ID does not exist"
Feb 26 22:11:51 crc kubenswrapper[4910]: I0226 22:11:51.848430 4910 scope.go:117] "RemoveContainer" containerID="a5bf6bd7950ca4dea54a6fdfefb041f896bb68a8cbef60c2864c0bfc818945a0"
Feb 26 22:11:51 crc kubenswrapper[4910]: E0226 22:11:51.848835 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5bf6bd7950ca4dea54a6fdfefb041f896bb68a8cbef60c2864c0bfc818945a0\": container with ID starting with a5bf6bd7950ca4dea54a6fdfefb041f896bb68a8cbef60c2864c0bfc818945a0 not found: ID does not exist" containerID="a5bf6bd7950ca4dea54a6fdfefb041f896bb68a8cbef60c2864c0bfc818945a0"
Feb 26 22:11:51 crc kubenswrapper[4910]: I0226 22:11:51.848867 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5bf6bd7950ca4dea54a6fdfefb041f896bb68a8cbef60c2864c0bfc818945a0"} err="failed to get container status \"a5bf6bd7950ca4dea54a6fdfefb041f896bb68a8cbef60c2864c0bfc818945a0\": rpc error: code = NotFound desc = could not find container \"a5bf6bd7950ca4dea54a6fdfefb041f896bb68a8cbef60c2864c0bfc818945a0\": container with ID starting with a5bf6bd7950ca4dea54a6fdfefb041f896bb68a8cbef60c2864c0bfc818945a0 not found: ID does not exist"
Feb 26 22:11:51 crc kubenswrapper[4910]: I0226 22:11:51.848899 4910 scope.go:117] "RemoveContainer" containerID="75864395721cb849969ddd876c51c2aa1c66b36f2e65b2548cbd5ee830b660a1"
Feb 26 22:11:51 crc kubenswrapper[4910]: E0226 22:11:51.849307 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75864395721cb849969ddd876c51c2aa1c66b36f2e65b2548cbd5ee830b660a1\": container with ID starting with 75864395721cb849969ddd876c51c2aa1c66b36f2e65b2548cbd5ee830b660a1 not found: ID does not exist" containerID="75864395721cb849969ddd876c51c2aa1c66b36f2e65b2548cbd5ee830b660a1"
Feb 26 crc kubenswrapper[4910]: I0226 22:11:51.849326 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75864395721cb849969ddd876c51c2aa1c66b36f2e65b2548cbd5ee830b660a1"} err="failed to get container status \"75864395721cb849969ddd876c51c2aa1c66b36f2e65b2548cbd5ee830b660a1\": rpc error: code = NotFound desc = could not find container \"75864395721cb849969ddd876c51c2aa1c66b36f2e65b2548cbd5ee830b660a1\": container with ID starting with 75864395721cb849969ddd876c51c2aa1c66b36f2e65b2548cbd5ee830b660a1 not found: ID does not exist"
Feb 26 22:11:51 crc kubenswrapper[4910]: I0226 22:11:51.911026 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c20dca05-e6f7-4539-9eb8-cd9dbba8f731" path="/var/lib/kubelet/pods/c20dca05-e6f7-4539-9eb8-cd9dbba8f731/volumes"
Feb 26 22:12:00 crc kubenswrapper[4910]: I0226 22:12:00.152403 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535732-dh8z5"]
Feb 26 22:12:00 crc kubenswrapper[4910]: E0226 22:12:00.153123 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c20dca05-e6f7-4539-9eb8-cd9dbba8f731" containerName="extract-utilities"
Feb 26 22:12:00 crc kubenswrapper[4910]: I0226 22:12:00.153135 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="c20dca05-e6f7-4539-9eb8-cd9dbba8f731" containerName="extract-utilities"
Feb 26 22:12:00 crc kubenswrapper[4910]: E0226 22:12:00.153147 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c20dca05-e6f7-4539-9eb8-cd9dbba8f731" containerName="extract-content"
Feb 26 22:12:00 crc kubenswrapper[4910]: I0226 22:12:00.153152 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="c20dca05-e6f7-4539-9eb8-cd9dbba8f731" containerName="extract-content"
Feb 26 22:12:00 crc kubenswrapper[4910]: E0226 22:12:00.153190 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c20dca05-e6f7-4539-9eb8-cd9dbba8f731" containerName="registry-server"
Feb 26 22:12:00 crc kubenswrapper[4910]: I0226 22:12:00.153197 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="c20dca05-e6f7-4539-9eb8-cd9dbba8f731" containerName="registry-server"
Feb 26 22:12:00 crc kubenswrapper[4910]: I0226 22:12:00.153290 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="c20dca05-e6f7-4539-9eb8-cd9dbba8f731" containerName="registry-server"
Feb 26 22:12:00 crc kubenswrapper[4910]: I0226 22:12:00.153656 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535732-dh8z5"
Feb 26 22:12:00 crc kubenswrapper[4910]: I0226 22:12:00.156274 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 22:12:00 crc kubenswrapper[4910]: I0226 22:12:00.156414 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 22:12:00 crc kubenswrapper[4910]: I0226 22:12:00.156525 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s"
Feb 26 22:12:00 crc kubenswrapper[4910]: I0226 22:12:00.161302 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535732-dh8z5"]
Feb 26 22:12:00 crc kubenswrapper[4910]: I0226 22:12:00.307405 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smgp7\" (UniqueName: \"kubernetes.io/projected/f31eeefb-61ab-4fdb-a4d2-8e25a18cf655-kube-api-access-smgp7\") pod \"auto-csr-approver-29535732-dh8z5\" (UID: \"f31eeefb-61ab-4fdb-a4d2-8e25a18cf655\") " pod="openshift-infra/auto-csr-approver-29535732-dh8z5"
Feb 26 22:12:00 crc kubenswrapper[4910]: I0226 22:12:00.408653 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smgp7\" (UniqueName: \"kubernetes.io/projected/f31eeefb-61ab-4fdb-a4d2-8e25a18cf655-kube-api-access-smgp7\") pod \"auto-csr-approver-29535732-dh8z5\" (UID: \"f31eeefb-61ab-4fdb-a4d2-8e25a18cf655\") " pod="openshift-infra/auto-csr-approver-29535732-dh8z5"
Feb 26 22:12:00 crc kubenswrapper[4910]: I0226 22:12:00.439875 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smgp7\" (UniqueName: \"kubernetes.io/projected/f31eeefb-61ab-4fdb-a4d2-8e25a18cf655-kube-api-access-smgp7\") pod \"auto-csr-approver-29535732-dh8z5\" (UID: \"f31eeefb-61ab-4fdb-a4d2-8e25a18cf655\") " pod="openshift-infra/auto-csr-approver-29535732-dh8z5"
Feb 26 22:12:00 crc kubenswrapper[4910]: I0226 22:12:00.492594 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535732-dh8z5"
Feb 26 22:12:00 crc kubenswrapper[4910]: I0226 22:12:00.737687 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535732-dh8z5"]
Feb 26 22:12:00 crc kubenswrapper[4910]: W0226 22:12:00.754862 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf31eeefb_61ab_4fdb_a4d2_8e25a18cf655.slice/crio-90a58ffebe3fee3fa8cf518b6d888aa5968c24f1dd3b544ae20f2311e998284d WatchSource:0}: Error finding container 90a58ffebe3fee3fa8cf518b6d888aa5968c24f1dd3b544ae20f2311e998284d: Status 404 returned error can't find the container with id 90a58ffebe3fee3fa8cf518b6d888aa5968c24f1dd3b544ae20f2311e998284d
Feb 26 22:12:00 crc kubenswrapper[4910]: I0226 22:12:00.802018 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-kj9s2" podUID="acb5ada5-3567-4f1c-9130-1e78f3e88975" containerName="console" containerID="cri-o://71b687959bba5697de0daf745020537dbaf93a38d6bd8d5c1ec80167d9411273" gracePeriod=15
Feb 26 22:12:00 crc kubenswrapper[4910]: I0226 22:12:00.865360 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535732-dh8z5" event={"ID":"f31eeefb-61ab-4fdb-a4d2-8e25a18cf655","Type":"ContainerStarted","Data":"90a58ffebe3fee3fa8cf518b6d888aa5968c24f1dd3b544ae20f2311e998284d"}
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.174706 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-kj9s2_acb5ada5-3567-4f1c-9130-1e78f3e88975/console/0.log"
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.174770 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-kj9s2"
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.329582 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acb5ada5-3567-4f1c-9130-1e78f3e88975-trusted-ca-bundle\") pod \"acb5ada5-3567-4f1c-9130-1e78f3e88975\" (UID: \"acb5ada5-3567-4f1c-9130-1e78f3e88975\") "
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.330788 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/acb5ada5-3567-4f1c-9130-1e78f3e88975-console-config\") pod \"acb5ada5-3567-4f1c-9130-1e78f3e88975\" (UID: \"acb5ada5-3567-4f1c-9130-1e78f3e88975\") "
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.330870 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/acb5ada5-3567-4f1c-9130-1e78f3e88975-oauth-serving-cert\") pod \"acb5ada5-3567-4f1c-9130-1e78f3e88975\" (UID: \"acb5ada5-3567-4f1c-9130-1e78f3e88975\") "
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.330933 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/acb5ada5-3567-4f1c-9130-1e78f3e88975-service-ca\") pod \"acb5ada5-3567-4f1c-9130-1e78f3e88975\" (UID: \"acb5ada5-3567-4f1c-9130-1e78f3e88975\") "
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.330986 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acb5ada5-3567-4f1c-9130-1e78f3e88975-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "acb5ada5-3567-4f1c-9130-1e78f3e88975" (UID: "acb5ada5-3567-4f1c-9130-1e78f3e88975"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.331010 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gw6g\" (UniqueName: \"kubernetes.io/projected/acb5ada5-3567-4f1c-9130-1e78f3e88975-kube-api-access-2gw6g\") pod \"acb5ada5-3567-4f1c-9130-1e78f3e88975\" (UID: \"acb5ada5-3567-4f1c-9130-1e78f3e88975\") "
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.331113 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/acb5ada5-3567-4f1c-9130-1e78f3e88975-console-serving-cert\") pod \"acb5ada5-3567-4f1c-9130-1e78f3e88975\" (UID: \"acb5ada5-3567-4f1c-9130-1e78f3e88975\") "
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.331294 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/acb5ada5-3567-4f1c-9130-1e78f3e88975-console-oauth-config\") pod \"acb5ada5-3567-4f1c-9130-1e78f3e88975\" (UID: \"acb5ada5-3567-4f1c-9130-1e78f3e88975\") "
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.331373 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acb5ada5-3567-4f1c-9130-1e78f3e88975-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "acb5ada5-3567-4f1c-9130-1e78f3e88975" (UID: "acb5ada5-3567-4f1c-9130-1e78f3e88975"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.331930 4910 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acb5ada5-3567-4f1c-9130-1e78f3e88975-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.331954 4910 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/acb5ada5-3567-4f1c-9130-1e78f3e88975-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.332175 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acb5ada5-3567-4f1c-9130-1e78f3e88975-service-ca" (OuterVolumeSpecName: "service-ca") pod "acb5ada5-3567-4f1c-9130-1e78f3e88975" (UID: "acb5ada5-3567-4f1c-9130-1e78f3e88975"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.333404 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acb5ada5-3567-4f1c-9130-1e78f3e88975-console-config" (OuterVolumeSpecName: "console-config") pod "acb5ada5-3567-4f1c-9130-1e78f3e88975" (UID: "acb5ada5-3567-4f1c-9130-1e78f3e88975"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.338941 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acb5ada5-3567-4f1c-9130-1e78f3e88975-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "acb5ada5-3567-4f1c-9130-1e78f3e88975" (UID: "acb5ada5-3567-4f1c-9130-1e78f3e88975"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.341443 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acb5ada5-3567-4f1c-9130-1e78f3e88975-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "acb5ada5-3567-4f1c-9130-1e78f3e88975" (UID: "acb5ada5-3567-4f1c-9130-1e78f3e88975"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.343424 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acb5ada5-3567-4f1c-9130-1e78f3e88975-kube-api-access-2gw6g" (OuterVolumeSpecName: "kube-api-access-2gw6g") pod "acb5ada5-3567-4f1c-9130-1e78f3e88975" (UID: "acb5ada5-3567-4f1c-9130-1e78f3e88975"). InnerVolumeSpecName "kube-api-access-2gw6g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.432763 4910 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/acb5ada5-3567-4f1c-9130-1e78f3e88975-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.432800 4910 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/acb5ada5-3567-4f1c-9130-1e78f3e88975-console-config\") on node \"crc\" DevicePath \"\""
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.432812 4910 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/acb5ada5-3567-4f1c-9130-1e78f3e88975-service-ca\") on node \"crc\" DevicePath \"\""
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.432824 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gw6g\" (UniqueName: \"kubernetes.io/projected/acb5ada5-3567-4f1c-9130-1e78f3e88975-kube-api-access-2gw6g\") on node \"crc\" DevicePath \"\""
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.432837 4910 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/acb5ada5-3567-4f1c-9130-1e78f3e88975-console-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.876434 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-kj9s2_acb5ada5-3567-4f1c-9130-1e78f3e88975/console/0.log"
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.876496 4910 generic.go:334] "Generic (PLEG): container finished" podID="acb5ada5-3567-4f1c-9130-1e78f3e88975" containerID="71b687959bba5697de0daf745020537dbaf93a38d6bd8d5c1ec80167d9411273" exitCode=2
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.876536 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kj9s2" event={"ID":"acb5ada5-3567-4f1c-9130-1e78f3e88975","Type":"ContainerDied","Data":"71b687959bba5697de0daf745020537dbaf93a38d6bd8d5c1ec80167d9411273"}
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.876573 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kj9s2" event={"ID":"acb5ada5-3567-4f1c-9130-1e78f3e88975","Type":"ContainerDied","Data":"a988eb6a453f5fac8bb35b26b4360f3ba40aea0014cdba9f451eee44b2bff031"}
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.876596 4910 scope.go:117] "RemoveContainer" containerID="71b687959bba5697de0daf745020537dbaf93a38d6bd8d5c1ec80167d9411273"
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.876615 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-kj9s2"
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.902430 4910 scope.go:117] "RemoveContainer" containerID="71b687959bba5697de0daf745020537dbaf93a38d6bd8d5c1ec80167d9411273"
Feb 26 22:12:01 crc kubenswrapper[4910]: E0226 22:12:01.902810 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71b687959bba5697de0daf745020537dbaf93a38d6bd8d5c1ec80167d9411273\": container with ID starting with 71b687959bba5697de0daf745020537dbaf93a38d6bd8d5c1ec80167d9411273 not found: ID does not exist" containerID="71b687959bba5697de0daf745020537dbaf93a38d6bd8d5c1ec80167d9411273"
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.902844 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71b687959bba5697de0daf745020537dbaf93a38d6bd8d5c1ec80167d9411273"} err="failed to get container status \"71b687959bba5697de0daf745020537dbaf93a38d6bd8d5c1ec80167d9411273\": rpc error: code = NotFound desc = could not find container \"71b687959bba5697de0daf745020537dbaf93a38d6bd8d5c1ec80167d9411273\": container with ID starting with 71b687959bba5697de0daf745020537dbaf93a38d6bd8d5c1ec80167d9411273 not found: ID does not exist"
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.926032 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-kj9s2"]
Feb 26 22:12:01 crc kubenswrapper[4910]: I0226 22:12:01.930266 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-kj9s2"]
Feb 26 22:12:02 crc kubenswrapper[4910]: I0226 22:12:02.885488 4910 generic.go:334] "Generic (PLEG): container finished" podID="f31eeefb-61ab-4fdb-a4d2-8e25a18cf655" containerID="cb5edc064cd8aa3fa526549c52daaac569cbeda28f84ebe35b3c8ead0dc03fa0" exitCode=0
Feb 26 22:12:02 crc kubenswrapper[4910]: I0226 22:12:02.885594 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535732-dh8z5" event={"ID":"f31eeefb-61ab-4fdb-a4d2-8e25a18cf655","Type":"ContainerDied","Data":"cb5edc064cd8aa3fa526549c52daaac569cbeda28f84ebe35b3c8ead0dc03fa0"}
Feb 26 22:12:02 crc kubenswrapper[4910]: I0226 22:12:02.968108 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q"]
Feb 26 22:12:02 crc kubenswrapper[4910]: E0226 22:12:02.968349 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acb5ada5-3567-4f1c-9130-1e78f3e88975" containerName="console"
Feb 26 22:12:02 crc kubenswrapper[4910]: I0226 22:12:02.968364 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="acb5ada5-3567-4f1c-9130-1e78f3e88975" containerName="console"
Feb 26 22:12:02 crc kubenswrapper[4910]: I0226 22:12:02.968471 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="acb5ada5-3567-4f1c-9130-1e78f3e88975" containerName="console"
Feb 26 22:12:02 crc kubenswrapper[4910]: I0226 22:12:02.969266 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q"
Feb 26 22:12:02 crc kubenswrapper[4910]: I0226 22:12:02.972070 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 26 22:12:02 crc kubenswrapper[4910]: I0226 22:12:02.986788 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q"]
Feb 26 22:12:03 crc kubenswrapper[4910]: I0226 22:12:03.165756 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/640ad168-330e-401c-8da5-650fbd1a8151-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q\" (UID: \"640ad168-330e-401c-8da5-650fbd1a8151\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q"
Feb 26 22:12:03 crc kubenswrapper[4910]: I0226 22:12:03.165817 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/640ad168-330e-401c-8da5-650fbd1a8151-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q\" (UID: \"640ad168-330e-401c-8da5-650fbd1a8151\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q"
Feb 26 22:12:03 crc kubenswrapper[4910]: I0226 22:12:03.165869 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zh4j\" (UniqueName: \"kubernetes.io/projected/640ad168-330e-401c-8da5-650fbd1a8151-kube-api-access-6zh4j\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q\" (UID: \"640ad168-330e-401c-8da5-650fbd1a8151\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q"
Feb 26 22:12:03 crc kubenswrapper[4910]: I0226 22:12:03.267711 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zh4j\" (UniqueName: \"kubernetes.io/projected/640ad168-330e-401c-8da5-650fbd1a8151-kube-api-access-6zh4j\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q\" (UID: \"640ad168-330e-401c-8da5-650fbd1a8151\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q"
Feb 26 22:12:03 crc kubenswrapper[4910]: I0226 22:12:03.267791 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/640ad168-330e-401c-8da5-650fbd1a8151-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q\" (UID: \"640ad168-330e-401c-8da5-650fbd1a8151\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q"
Feb 26 22:12:03 crc kubenswrapper[4910]: I0226 22:12:03.267825 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/640ad168-330e-401c-8da5-650fbd1a8151-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q\" (UID: \"640ad168-330e-401c-8da5-650fbd1a8151\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q"
Feb 26 22:12:03 crc kubenswrapper[4910]: I0226 22:12:03.268598 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/640ad168-330e-401c-8da5-650fbd1a8151-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q\" (UID: \"640ad168-330e-401c-8da5-650fbd1a8151\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q"
Feb 26 22:12:03 crc kubenswrapper[4910]: I0226 22:12:03.268633 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/640ad168-330e-401c-8da5-650fbd1a8151-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q\" (UID: \"640ad168-330e-401c-8da5-650fbd1a8151\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q"
Feb 26 22:12:03 crc kubenswrapper[4910]: I0226 22:12:03.288908 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zh4j\" (UniqueName: \"kubernetes.io/projected/640ad168-330e-401c-8da5-650fbd1a8151-kube-api-access-6zh4j\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q\" (UID: \"640ad168-330e-401c-8da5-650fbd1a8151\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q"
Feb 26 22:12:03 crc kubenswrapper[4910]: I0226 22:12:03.581947 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q"
Feb 26 22:12:03 crc kubenswrapper[4910]: I0226 22:12:03.910366 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acb5ada5-3567-4f1c-9130-1e78f3e88975" path="/var/lib/kubelet/pods/acb5ada5-3567-4f1c-9130-1e78f3e88975/volumes"
Feb 26 22:12:04 crc kubenswrapper[4910]: I0226 22:12:04.039037 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q"]
Feb 26 22:12:04 crc kubenswrapper[4910]: W0226 22:12:04.044239 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod640ad168_330e_401c_8da5_650fbd1a8151.slice/crio-215a59b2818413648b839ca09cb299d7050877b6afe1d012c5c99d059fdd2b0b WatchSource:0}: Error finding container 215a59b2818413648b839ca09cb299d7050877b6afe1d012c5c99d059fdd2b0b: Status 404 returned error can't find the container with id 215a59b2818413648b839ca09cb299d7050877b6afe1d012c5c99d059fdd2b0b
Feb 26 22:12:04 crc kubenswrapper[4910]: I0226 22:12:04.219770 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535732-dh8z5"
Feb 26 22:12:04 crc kubenswrapper[4910]: I0226 22:12:04.297218 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smgp7\" (UniqueName: \"kubernetes.io/projected/f31eeefb-61ab-4fdb-a4d2-8e25a18cf655-kube-api-access-smgp7\") pod \"f31eeefb-61ab-4fdb-a4d2-8e25a18cf655\" (UID: \"f31eeefb-61ab-4fdb-a4d2-8e25a18cf655\") "
Feb 26 22:12:04 crc kubenswrapper[4910]: I0226 22:12:04.302231 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f31eeefb-61ab-4fdb-a4d2-8e25a18cf655-kube-api-access-smgp7" (OuterVolumeSpecName: "kube-api-access-smgp7") pod "f31eeefb-61ab-4fdb-a4d2-8e25a18cf655" (UID: "f31eeefb-61ab-4fdb-a4d2-8e25a18cf655"). InnerVolumeSpecName "kube-api-access-smgp7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 22:12:04 crc kubenswrapper[4910]: I0226 22:12:04.398848 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smgp7\" (UniqueName: \"kubernetes.io/projected/f31eeefb-61ab-4fdb-a4d2-8e25a18cf655-kube-api-access-smgp7\") on node \"crc\" DevicePath \"\""
Feb 26 22:12:04 crc kubenswrapper[4910]: I0226 22:12:04.905103 4910 generic.go:334] "Generic (PLEG): container finished" podID="640ad168-330e-401c-8da5-650fbd1a8151" containerID="8ee58d0be2240f3444786064690bb5cf344060cd2d52ec487d2e6e8a01f12dcf" exitCode=0
Feb 26 22:12:04 crc kubenswrapper[4910]: I0226 22:12:04.905271 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q" event={"ID":"640ad168-330e-401c-8da5-650fbd1a8151","Type":"ContainerDied","Data":"8ee58d0be2240f3444786064690bb5cf344060cd2d52ec487d2e6e8a01f12dcf"}
Feb 26 22:12:04 crc kubenswrapper[4910]: I0226 22:12:04.905740 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q" event={"ID":"640ad168-330e-401c-8da5-650fbd1a8151","Type":"ContainerStarted","Data":"215a59b2818413648b839ca09cb299d7050877b6afe1d012c5c99d059fdd2b0b"}
Feb 26 22:12:04 crc kubenswrapper[4910]: I0226 22:12:04.908694 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535732-dh8z5" event={"ID":"f31eeefb-61ab-4fdb-a4d2-8e25a18cf655","Type":"ContainerDied","Data":"90a58ffebe3fee3fa8cf518b6d888aa5968c24f1dd3b544ae20f2311e998284d"}
Feb 26 22:12:04 crc kubenswrapper[4910]: I0226 22:12:04.908739 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535732-dh8z5"
Feb 26 22:12:04 crc kubenswrapper[4910]: I0226 22:12:04.908759 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90a58ffebe3fee3fa8cf518b6d888aa5968c24f1dd3b544ae20f2311e998284d"
Feb 26 22:12:05 crc kubenswrapper[4910]: I0226 22:12:05.296108 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535726-2qlhw"]
Feb 26 22:12:05 crc kubenswrapper[4910]: I0226 22:12:05.305884 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535726-2qlhw"]
Feb 26 22:12:05 crc kubenswrapper[4910]: I0226 22:12:05.914411 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39bacf0b-95ed-4a36-a354-1cf17b887fcd" path="/var/lib/kubelet/pods/39bacf0b-95ed-4a36-a354-1cf17b887fcd/volumes"
Feb 26 22:12:08 crc kubenswrapper[4910]: I0226 22:12:08.943020 4910 generic.go:334] "Generic (PLEG): container finished" podID="640ad168-330e-401c-8da5-650fbd1a8151" containerID="ea3c5af02c631b76003b62de1df031b3be374608a417c08b6bc5383cec216d77" exitCode=0
Feb 26 22:12:08 crc kubenswrapper[4910]: I0226 22:12:08.943100 4910 kubelet.go:2453] "SyncLoop (PLEG): event
for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q" event={"ID":"640ad168-330e-401c-8da5-650fbd1a8151","Type":"ContainerDied","Data":"ea3c5af02c631b76003b62de1df031b3be374608a417c08b6bc5383cec216d77"} Feb 26 22:12:09 crc kubenswrapper[4910]: I0226 22:12:09.954275 4910 generic.go:334] "Generic (PLEG): container finished" podID="640ad168-330e-401c-8da5-650fbd1a8151" containerID="f406c5e4389895757fbf50c442535ef9e2284a55fe6a573733f24018be5548e8" exitCode=0 Feb 26 22:12:09 crc kubenswrapper[4910]: I0226 22:12:09.954344 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q" event={"ID":"640ad168-330e-401c-8da5-650fbd1a8151","Type":"ContainerDied","Data":"f406c5e4389895757fbf50c442535ef9e2284a55fe6a573733f24018be5548e8"} Feb 26 22:12:11 crc kubenswrapper[4910]: I0226 22:12:11.330004 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q" Feb 26 22:12:11 crc kubenswrapper[4910]: I0226 22:12:11.411054 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/640ad168-330e-401c-8da5-650fbd1a8151-util\") pod \"640ad168-330e-401c-8da5-650fbd1a8151\" (UID: \"640ad168-330e-401c-8da5-650fbd1a8151\") " Feb 26 22:12:11 crc kubenswrapper[4910]: I0226 22:12:11.411146 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zh4j\" (UniqueName: \"kubernetes.io/projected/640ad168-330e-401c-8da5-650fbd1a8151-kube-api-access-6zh4j\") pod \"640ad168-330e-401c-8da5-650fbd1a8151\" (UID: \"640ad168-330e-401c-8da5-650fbd1a8151\") " Feb 26 22:12:11 crc kubenswrapper[4910]: I0226 22:12:11.411239 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/640ad168-330e-401c-8da5-650fbd1a8151-bundle\") pod \"640ad168-330e-401c-8da5-650fbd1a8151\" (UID: \"640ad168-330e-401c-8da5-650fbd1a8151\") " Feb 26 22:12:11 crc kubenswrapper[4910]: I0226 22:12:11.413450 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/640ad168-330e-401c-8da5-650fbd1a8151-bundle" (OuterVolumeSpecName: "bundle") pod "640ad168-330e-401c-8da5-650fbd1a8151" (UID: "640ad168-330e-401c-8da5-650fbd1a8151"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:12:11 crc kubenswrapper[4910]: I0226 22:12:11.416872 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/640ad168-330e-401c-8da5-650fbd1a8151-kube-api-access-6zh4j" (OuterVolumeSpecName: "kube-api-access-6zh4j") pod "640ad168-330e-401c-8da5-650fbd1a8151" (UID: "640ad168-330e-401c-8da5-650fbd1a8151"). InnerVolumeSpecName "kube-api-access-6zh4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:12:11 crc kubenswrapper[4910]: I0226 22:12:11.434448 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/640ad168-330e-401c-8da5-650fbd1a8151-util" (OuterVolumeSpecName: "util") pod "640ad168-330e-401c-8da5-650fbd1a8151" (UID: "640ad168-330e-401c-8da5-650fbd1a8151"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:12:11 crc kubenswrapper[4910]: I0226 22:12:11.512998 4910 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/640ad168-330e-401c-8da5-650fbd1a8151-util\") on node \"crc\" DevicePath \"\"" Feb 26 22:12:11 crc kubenswrapper[4910]: I0226 22:12:11.513283 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zh4j\" (UniqueName: \"kubernetes.io/projected/640ad168-330e-401c-8da5-650fbd1a8151-kube-api-access-6zh4j\") on node \"crc\" DevicePath \"\"" Feb 26 22:12:11 crc kubenswrapper[4910]: I0226 22:12:11.513298 4910 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/640ad168-330e-401c-8da5-650fbd1a8151-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:12:11 crc kubenswrapper[4910]: I0226 22:12:11.981185 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q" event={"ID":"640ad168-330e-401c-8da5-650fbd1a8151","Type":"ContainerDied","Data":"215a59b2818413648b839ca09cb299d7050877b6afe1d012c5c99d059fdd2b0b"} Feb 26 22:12:11 crc kubenswrapper[4910]: I0226 22:12:11.981230 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="215a59b2818413648b839ca09cb299d7050877b6afe1d012c5c99d059fdd2b0b" Feb 26 22:12:11 crc kubenswrapper[4910]: I0226 22:12:11.981306 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q" Feb 26 22:12:16 crc kubenswrapper[4910]: I0226 22:12:16.707492 4910 scope.go:117] "RemoveContainer" containerID="c4d9c64e1036b69d288e49e55594d2a3c40a1a80fbec88c170d2b72ba621bc2f" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.228715 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-67dbfc649f-tk2gp"] Feb 26 22:12:20 crc kubenswrapper[4910]: E0226 22:12:20.229407 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="640ad168-330e-401c-8da5-650fbd1a8151" containerName="pull" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.229420 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="640ad168-330e-401c-8da5-650fbd1a8151" containerName="pull" Feb 26 22:12:20 crc kubenswrapper[4910]: E0226 22:12:20.229437 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="640ad168-330e-401c-8da5-650fbd1a8151" containerName="util" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.229443 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="640ad168-330e-401c-8da5-650fbd1a8151" containerName="util" Feb 26 22:12:20 crc kubenswrapper[4910]: E0226 22:12:20.229457 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="640ad168-330e-401c-8da5-650fbd1a8151" containerName="extract" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.229463 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="640ad168-330e-401c-8da5-650fbd1a8151" containerName="extract" Feb 26 22:12:20 crc kubenswrapper[4910]: E0226 22:12:20.229484 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f31eeefb-61ab-4fdb-a4d2-8e25a18cf655" containerName="oc" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.229490 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="f31eeefb-61ab-4fdb-a4d2-8e25a18cf655" containerName="oc" Feb 26 
22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.229681 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="640ad168-330e-401c-8da5-650fbd1a8151" containerName="extract" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.229696 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="f31eeefb-61ab-4fdb-a4d2-8e25a18cf655" containerName="oc" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.231249 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-67dbfc649f-tk2gp" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.234951 4910 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.235295 4910 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.235328 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.236014 4910 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-qhmsb" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.236257 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.240056 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-67dbfc649f-tk2gp"] Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.336919 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9c7d\" (UniqueName: \"kubernetes.io/projected/825755dc-b49b-4b20-b77e-02b0262bf8a6-kube-api-access-f9c7d\") pod 
\"metallb-operator-controller-manager-67dbfc649f-tk2gp\" (UID: \"825755dc-b49b-4b20-b77e-02b0262bf8a6\") " pod="metallb-system/metallb-operator-controller-manager-67dbfc649f-tk2gp" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.336964 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/825755dc-b49b-4b20-b77e-02b0262bf8a6-apiservice-cert\") pod \"metallb-operator-controller-manager-67dbfc649f-tk2gp\" (UID: \"825755dc-b49b-4b20-b77e-02b0262bf8a6\") " pod="metallb-system/metallb-operator-controller-manager-67dbfc649f-tk2gp" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.336991 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/825755dc-b49b-4b20-b77e-02b0262bf8a6-webhook-cert\") pod \"metallb-operator-controller-manager-67dbfc649f-tk2gp\" (UID: \"825755dc-b49b-4b20-b77e-02b0262bf8a6\") " pod="metallb-system/metallb-operator-controller-manager-67dbfc649f-tk2gp" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.438638 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9c7d\" (UniqueName: \"kubernetes.io/projected/825755dc-b49b-4b20-b77e-02b0262bf8a6-kube-api-access-f9c7d\") pod \"metallb-operator-controller-manager-67dbfc649f-tk2gp\" (UID: \"825755dc-b49b-4b20-b77e-02b0262bf8a6\") " pod="metallb-system/metallb-operator-controller-manager-67dbfc649f-tk2gp" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.438681 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/825755dc-b49b-4b20-b77e-02b0262bf8a6-apiservice-cert\") pod \"metallb-operator-controller-manager-67dbfc649f-tk2gp\" (UID: \"825755dc-b49b-4b20-b77e-02b0262bf8a6\") " pod="metallb-system/metallb-operator-controller-manager-67dbfc649f-tk2gp" Feb 26 
22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.438706 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/825755dc-b49b-4b20-b77e-02b0262bf8a6-webhook-cert\") pod \"metallb-operator-controller-manager-67dbfc649f-tk2gp\" (UID: \"825755dc-b49b-4b20-b77e-02b0262bf8a6\") " pod="metallb-system/metallb-operator-controller-manager-67dbfc649f-tk2gp" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.443528 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/825755dc-b49b-4b20-b77e-02b0262bf8a6-webhook-cert\") pod \"metallb-operator-controller-manager-67dbfc649f-tk2gp\" (UID: \"825755dc-b49b-4b20-b77e-02b0262bf8a6\") " pod="metallb-system/metallb-operator-controller-manager-67dbfc649f-tk2gp" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.443957 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/825755dc-b49b-4b20-b77e-02b0262bf8a6-apiservice-cert\") pod \"metallb-operator-controller-manager-67dbfc649f-tk2gp\" (UID: \"825755dc-b49b-4b20-b77e-02b0262bf8a6\") " pod="metallb-system/metallb-operator-controller-manager-67dbfc649f-tk2gp" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.444691 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-574d8f9b84-4k7bn"] Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.445450 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-574d8f9b84-4k7bn" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.447194 4910 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-7c54r" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.448304 4910 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.448391 4910 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.456665 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9c7d\" (UniqueName: \"kubernetes.io/projected/825755dc-b49b-4b20-b77e-02b0262bf8a6-kube-api-access-f9c7d\") pod \"metallb-operator-controller-manager-67dbfc649f-tk2gp\" (UID: \"825755dc-b49b-4b20-b77e-02b0262bf8a6\") " pod="metallb-system/metallb-operator-controller-manager-67dbfc649f-tk2gp" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.463124 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-574d8f9b84-4k7bn"] Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.540005 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e6fdcf51-21de-4c93-9730-a2eadb1dee56-apiservice-cert\") pod \"metallb-operator-webhook-server-574d8f9b84-4k7bn\" (UID: \"e6fdcf51-21de-4c93-9730-a2eadb1dee56\") " pod="metallb-system/metallb-operator-webhook-server-574d8f9b84-4k7bn" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.540049 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e6fdcf51-21de-4c93-9730-a2eadb1dee56-webhook-cert\") pod 
\"metallb-operator-webhook-server-574d8f9b84-4k7bn\" (UID: \"e6fdcf51-21de-4c93-9730-a2eadb1dee56\") " pod="metallb-system/metallb-operator-webhook-server-574d8f9b84-4k7bn" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.540123 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbsmr\" (UniqueName: \"kubernetes.io/projected/e6fdcf51-21de-4c93-9730-a2eadb1dee56-kube-api-access-vbsmr\") pod \"metallb-operator-webhook-server-574d8f9b84-4k7bn\" (UID: \"e6fdcf51-21de-4c93-9730-a2eadb1dee56\") " pod="metallb-system/metallb-operator-webhook-server-574d8f9b84-4k7bn" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.597230 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-67dbfc649f-tk2gp" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.641298 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbsmr\" (UniqueName: \"kubernetes.io/projected/e6fdcf51-21de-4c93-9730-a2eadb1dee56-kube-api-access-vbsmr\") pod \"metallb-operator-webhook-server-574d8f9b84-4k7bn\" (UID: \"e6fdcf51-21de-4c93-9730-a2eadb1dee56\") " pod="metallb-system/metallb-operator-webhook-server-574d8f9b84-4k7bn" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.641354 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e6fdcf51-21de-4c93-9730-a2eadb1dee56-apiservice-cert\") pod \"metallb-operator-webhook-server-574d8f9b84-4k7bn\" (UID: \"e6fdcf51-21de-4c93-9730-a2eadb1dee56\") " pod="metallb-system/metallb-operator-webhook-server-574d8f9b84-4k7bn" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.641375 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e6fdcf51-21de-4c93-9730-a2eadb1dee56-webhook-cert\") pod 
\"metallb-operator-webhook-server-574d8f9b84-4k7bn\" (UID: \"e6fdcf51-21de-4c93-9730-a2eadb1dee56\") " pod="metallb-system/metallb-operator-webhook-server-574d8f9b84-4k7bn" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.647299 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e6fdcf51-21de-4c93-9730-a2eadb1dee56-apiservice-cert\") pod \"metallb-operator-webhook-server-574d8f9b84-4k7bn\" (UID: \"e6fdcf51-21de-4c93-9730-a2eadb1dee56\") " pod="metallb-system/metallb-operator-webhook-server-574d8f9b84-4k7bn" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.647869 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e6fdcf51-21de-4c93-9730-a2eadb1dee56-webhook-cert\") pod \"metallb-operator-webhook-server-574d8f9b84-4k7bn\" (UID: \"e6fdcf51-21de-4c93-9730-a2eadb1dee56\") " pod="metallb-system/metallb-operator-webhook-server-574d8f9b84-4k7bn" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.656837 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbsmr\" (UniqueName: \"kubernetes.io/projected/e6fdcf51-21de-4c93-9730-a2eadb1dee56-kube-api-access-vbsmr\") pod \"metallb-operator-webhook-server-574d8f9b84-4k7bn\" (UID: \"e6fdcf51-21de-4c93-9730-a2eadb1dee56\") " pod="metallb-system/metallb-operator-webhook-server-574d8f9b84-4k7bn" Feb 26 22:12:20 crc kubenswrapper[4910]: I0226 22:12:20.806220 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-574d8f9b84-4k7bn" Feb 26 22:12:21 crc kubenswrapper[4910]: I0226 22:12:21.026078 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-67dbfc649f-tk2gp"] Feb 26 22:12:21 crc kubenswrapper[4910]: I0226 22:12:21.082468 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-574d8f9b84-4k7bn"] Feb 26 22:12:21 crc kubenswrapper[4910]: W0226 22:12:21.090621 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6fdcf51_21de_4c93_9730_a2eadb1dee56.slice/crio-9be3111d542aac7a44d028b1d881557fe496ce77def97aaa645ea16617929671 WatchSource:0}: Error finding container 9be3111d542aac7a44d028b1d881557fe496ce77def97aaa645ea16617929671: Status 404 returned error can't find the container with id 9be3111d542aac7a44d028b1d881557fe496ce77def97aaa645ea16617929671 Feb 26 22:12:22 crc kubenswrapper[4910]: I0226 22:12:22.069327 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-67dbfc649f-tk2gp" event={"ID":"825755dc-b49b-4b20-b77e-02b0262bf8a6","Type":"ContainerStarted","Data":"7b20e6eb6e137e5b77f5b18f365cc9ee234f211adc3d3a4f15cd4ccc590bce92"} Feb 26 22:12:22 crc kubenswrapper[4910]: I0226 22:12:22.070558 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-574d8f9b84-4k7bn" event={"ID":"e6fdcf51-21de-4c93-9730-a2eadb1dee56","Type":"ContainerStarted","Data":"9be3111d542aac7a44d028b1d881557fe496ce77def97aaa645ea16617929671"} Feb 26 22:12:25 crc kubenswrapper[4910]: I0226 22:12:25.093987 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-67dbfc649f-tk2gp" 
event={"ID":"825755dc-b49b-4b20-b77e-02b0262bf8a6","Type":"ContainerStarted","Data":"ccb853de575089c92de5e68c472ecbe68506fb638ac3d006509f3153527058d0"} Feb 26 22:12:25 crc kubenswrapper[4910]: I0226 22:12:25.095284 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-67dbfc649f-tk2gp" Feb 26 22:12:25 crc kubenswrapper[4910]: I0226 22:12:25.128993 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-67dbfc649f-tk2gp" podStartSLOduration=1.80674469 podStartE2EDuration="5.128957561s" podCreationTimestamp="2026-02-26 22:12:20 +0000 UTC" firstStartedPulling="2026-02-26 22:12:21.06723197 +0000 UTC m=+1026.146722511" lastFinishedPulling="2026-02-26 22:12:24.389444831 +0000 UTC m=+1029.468935382" observedRunningTime="2026-02-26 22:12:25.119279859 +0000 UTC m=+1030.198770440" watchObservedRunningTime="2026-02-26 22:12:25.128957561 +0000 UTC m=+1030.208448182" Feb 26 22:12:29 crc kubenswrapper[4910]: I0226 22:12:29.130086 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-574d8f9b84-4k7bn" event={"ID":"e6fdcf51-21de-4c93-9730-a2eadb1dee56","Type":"ContainerStarted","Data":"a3802cbb045749d788b7451251650dda2dfd62da084a1af58582b86cdbd50f70"} Feb 26 22:12:29 crc kubenswrapper[4910]: I0226 22:12:29.130550 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-574d8f9b84-4k7bn" Feb 26 22:12:29 crc kubenswrapper[4910]: I0226 22:12:29.160330 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-574d8f9b84-4k7bn" podStartSLOduration=2.130570276 podStartE2EDuration="9.160307018s" podCreationTimestamp="2026-02-26 22:12:20 +0000 UTC" firstStartedPulling="2026-02-26 22:12:21.09678199 +0000 UTC m=+1026.176272531" lastFinishedPulling="2026-02-26 
22:12:28.126518732 +0000 UTC m=+1033.206009273" observedRunningTime="2026-02-26 22:12:29.158101889 +0000 UTC m=+1034.237592470" watchObservedRunningTime="2026-02-26 22:12:29.160307018 +0000 UTC m=+1034.239797569" Feb 26 22:12:40 crc kubenswrapper[4910]: I0226 22:12:40.812663 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-574d8f9b84-4k7bn" Feb 26 22:12:42 crc kubenswrapper[4910]: I0226 22:12:42.960716 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ck9pm"] Feb 26 22:12:42 crc kubenswrapper[4910]: I0226 22:12:42.962027 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ck9pm" Feb 26 22:12:42 crc kubenswrapper[4910]: I0226 22:12:42.982288 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ck9pm"] Feb 26 22:12:43 crc kubenswrapper[4910]: I0226 22:12:43.113303 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da851e84-0584-42a9-bbee-c3d1268e7017-utilities\") pod \"community-operators-ck9pm\" (UID: \"da851e84-0584-42a9-bbee-c3d1268e7017\") " pod="openshift-marketplace/community-operators-ck9pm" Feb 26 22:12:43 crc kubenswrapper[4910]: I0226 22:12:43.113363 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqc7x\" (UniqueName: \"kubernetes.io/projected/da851e84-0584-42a9-bbee-c3d1268e7017-kube-api-access-tqc7x\") pod \"community-operators-ck9pm\" (UID: \"da851e84-0584-42a9-bbee-c3d1268e7017\") " pod="openshift-marketplace/community-operators-ck9pm" Feb 26 22:12:43 crc kubenswrapper[4910]: I0226 22:12:43.113417 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/da851e84-0584-42a9-bbee-c3d1268e7017-catalog-content\") pod \"community-operators-ck9pm\" (UID: \"da851e84-0584-42a9-bbee-c3d1268e7017\") " pod="openshift-marketplace/community-operators-ck9pm" Feb 26 22:12:43 crc kubenswrapper[4910]: I0226 22:12:43.214801 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da851e84-0584-42a9-bbee-c3d1268e7017-catalog-content\") pod \"community-operators-ck9pm\" (UID: \"da851e84-0584-42a9-bbee-c3d1268e7017\") " pod="openshift-marketplace/community-operators-ck9pm" Feb 26 22:12:43 crc kubenswrapper[4910]: I0226 22:12:43.214879 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da851e84-0584-42a9-bbee-c3d1268e7017-utilities\") pod \"community-operators-ck9pm\" (UID: \"da851e84-0584-42a9-bbee-c3d1268e7017\") " pod="openshift-marketplace/community-operators-ck9pm" Feb 26 22:12:43 crc kubenswrapper[4910]: I0226 22:12:43.214914 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqc7x\" (UniqueName: \"kubernetes.io/projected/da851e84-0584-42a9-bbee-c3d1268e7017-kube-api-access-tqc7x\") pod \"community-operators-ck9pm\" (UID: \"da851e84-0584-42a9-bbee-c3d1268e7017\") " pod="openshift-marketplace/community-operators-ck9pm" Feb 26 22:12:43 crc kubenswrapper[4910]: I0226 22:12:43.215268 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da851e84-0584-42a9-bbee-c3d1268e7017-catalog-content\") pod \"community-operators-ck9pm\" (UID: \"da851e84-0584-42a9-bbee-c3d1268e7017\") " pod="openshift-marketplace/community-operators-ck9pm" Feb 26 22:12:43 crc kubenswrapper[4910]: I0226 22:12:43.215414 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/da851e84-0584-42a9-bbee-c3d1268e7017-utilities\") pod \"community-operators-ck9pm\" (UID: \"da851e84-0584-42a9-bbee-c3d1268e7017\") " pod="openshift-marketplace/community-operators-ck9pm" Feb 26 22:12:43 crc kubenswrapper[4910]: I0226 22:12:43.245008 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqc7x\" (UniqueName: \"kubernetes.io/projected/da851e84-0584-42a9-bbee-c3d1268e7017-kube-api-access-tqc7x\") pod \"community-operators-ck9pm\" (UID: \"da851e84-0584-42a9-bbee-c3d1268e7017\") " pod="openshift-marketplace/community-operators-ck9pm" Feb 26 22:12:43 crc kubenswrapper[4910]: I0226 22:12:43.285149 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ck9pm" Feb 26 22:12:43 crc kubenswrapper[4910]: I0226 22:12:43.561068 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ck9pm"] Feb 26 22:12:44 crc kubenswrapper[4910]: I0226 22:12:44.241173 4910 generic.go:334] "Generic (PLEG): container finished" podID="da851e84-0584-42a9-bbee-c3d1268e7017" containerID="d0a1438de4c935ef37f1bf6d63bf3209d203ebbf80e5f35708fcc6f627f99110" exitCode=0 Feb 26 22:12:44 crc kubenswrapper[4910]: I0226 22:12:44.241239 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ck9pm" event={"ID":"da851e84-0584-42a9-bbee-c3d1268e7017","Type":"ContainerDied","Data":"d0a1438de4c935ef37f1bf6d63bf3209d203ebbf80e5f35708fcc6f627f99110"} Feb 26 22:12:44 crc kubenswrapper[4910]: I0226 22:12:44.241501 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ck9pm" event={"ID":"da851e84-0584-42a9-bbee-c3d1268e7017","Type":"ContainerStarted","Data":"517dd58ff31fddcba520c579f3fba4888973aa46864b21296df9cb7d4db38905"} Feb 26 22:12:46 crc kubenswrapper[4910]: I0226 22:12:46.258899 4910 generic.go:334] "Generic (PLEG): container 
finished" podID="da851e84-0584-42a9-bbee-c3d1268e7017" containerID="255549cb98e0147de675a6d1cdc639cee7125ae491d428023326f83074cb2aa9" exitCode=0 Feb 26 22:12:46 crc kubenswrapper[4910]: I0226 22:12:46.258965 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ck9pm" event={"ID":"da851e84-0584-42a9-bbee-c3d1268e7017","Type":"ContainerDied","Data":"255549cb98e0147de675a6d1cdc639cee7125ae491d428023326f83074cb2aa9"} Feb 26 22:12:47 crc kubenswrapper[4910]: I0226 22:12:47.270013 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ck9pm" event={"ID":"da851e84-0584-42a9-bbee-c3d1268e7017","Type":"ContainerStarted","Data":"ce1d4909f612e2769c7540df5851ed4969d2b61eae77557776503e90d797d372"} Feb 26 22:12:47 crc kubenswrapper[4910]: I0226 22:12:47.293910 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ck9pm" podStartSLOduration=2.880334386 podStartE2EDuration="5.293887147s" podCreationTimestamp="2026-02-26 22:12:42 +0000 UTC" firstStartedPulling="2026-02-26 22:12:44.24327472 +0000 UTC m=+1049.322765291" lastFinishedPulling="2026-02-26 22:12:46.656827501 +0000 UTC m=+1051.736318052" observedRunningTime="2026-02-26 22:12:47.286906988 +0000 UTC m=+1052.366397569" watchObservedRunningTime="2026-02-26 22:12:47.293887147 +0000 UTC m=+1052.373377728" Feb 26 22:12:53 crc kubenswrapper[4910]: I0226 22:12:53.286327 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ck9pm" Feb 26 22:12:53 crc kubenswrapper[4910]: I0226 22:12:53.287047 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ck9pm" Feb 26 22:12:53 crc kubenswrapper[4910]: I0226 22:12:53.332838 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ck9pm" Feb 
26 22:12:53 crc kubenswrapper[4910]: I0226 22:12:53.450215 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ck9pm" Feb 26 22:12:53 crc kubenswrapper[4910]: I0226 22:12:53.592471 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ck9pm"] Feb 26 22:12:55 crc kubenswrapper[4910]: I0226 22:12:55.402094 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ck9pm" podUID="da851e84-0584-42a9-bbee-c3d1268e7017" containerName="registry-server" containerID="cri-o://ce1d4909f612e2769c7540df5851ed4969d2b61eae77557776503e90d797d372" gracePeriod=2 Feb 26 22:12:55 crc kubenswrapper[4910]: I0226 22:12:55.846206 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ck9pm" Feb 26 22:12:55 crc kubenswrapper[4910]: I0226 22:12:55.987603 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqc7x\" (UniqueName: \"kubernetes.io/projected/da851e84-0584-42a9-bbee-c3d1268e7017-kube-api-access-tqc7x\") pod \"da851e84-0584-42a9-bbee-c3d1268e7017\" (UID: \"da851e84-0584-42a9-bbee-c3d1268e7017\") " Feb 26 22:12:55 crc kubenswrapper[4910]: I0226 22:12:55.987677 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da851e84-0584-42a9-bbee-c3d1268e7017-utilities\") pod \"da851e84-0584-42a9-bbee-c3d1268e7017\" (UID: \"da851e84-0584-42a9-bbee-c3d1268e7017\") " Feb 26 22:12:55 crc kubenswrapper[4910]: I0226 22:12:55.987759 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da851e84-0584-42a9-bbee-c3d1268e7017-catalog-content\") pod \"da851e84-0584-42a9-bbee-c3d1268e7017\" (UID: \"da851e84-0584-42a9-bbee-c3d1268e7017\") " 
Feb 26 22:12:55 crc kubenswrapper[4910]: I0226 22:12:55.989202 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da851e84-0584-42a9-bbee-c3d1268e7017-utilities" (OuterVolumeSpecName: "utilities") pod "da851e84-0584-42a9-bbee-c3d1268e7017" (UID: "da851e84-0584-42a9-bbee-c3d1268e7017"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:12:55 crc kubenswrapper[4910]: I0226 22:12:55.993999 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da851e84-0584-42a9-bbee-c3d1268e7017-kube-api-access-tqc7x" (OuterVolumeSpecName: "kube-api-access-tqc7x") pod "da851e84-0584-42a9-bbee-c3d1268e7017" (UID: "da851e84-0584-42a9-bbee-c3d1268e7017"). InnerVolumeSpecName "kube-api-access-tqc7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:12:56 crc kubenswrapper[4910]: I0226 22:12:56.089561 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqc7x\" (UniqueName: \"kubernetes.io/projected/da851e84-0584-42a9-bbee-c3d1268e7017-kube-api-access-tqc7x\") on node \"crc\" DevicePath \"\"" Feb 26 22:12:56 crc kubenswrapper[4910]: I0226 22:12:56.089897 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da851e84-0584-42a9-bbee-c3d1268e7017-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 22:12:56 crc kubenswrapper[4910]: I0226 22:12:56.411537 4910 generic.go:334] "Generic (PLEG): container finished" podID="da851e84-0584-42a9-bbee-c3d1268e7017" containerID="ce1d4909f612e2769c7540df5851ed4969d2b61eae77557776503e90d797d372" exitCode=0 Feb 26 22:12:56 crc kubenswrapper[4910]: I0226 22:12:56.411589 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ck9pm" 
event={"ID":"da851e84-0584-42a9-bbee-c3d1268e7017","Type":"ContainerDied","Data":"ce1d4909f612e2769c7540df5851ed4969d2b61eae77557776503e90d797d372"} Feb 26 22:12:56 crc kubenswrapper[4910]: I0226 22:12:56.411603 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ck9pm" Feb 26 22:12:56 crc kubenswrapper[4910]: I0226 22:12:56.411621 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ck9pm" event={"ID":"da851e84-0584-42a9-bbee-c3d1268e7017","Type":"ContainerDied","Data":"517dd58ff31fddcba520c579f3fba4888973aa46864b21296df9cb7d4db38905"} Feb 26 22:12:56 crc kubenswrapper[4910]: I0226 22:12:56.411645 4910 scope.go:117] "RemoveContainer" containerID="ce1d4909f612e2769c7540df5851ed4969d2b61eae77557776503e90d797d372" Feb 26 22:12:56 crc kubenswrapper[4910]: I0226 22:12:56.441342 4910 scope.go:117] "RemoveContainer" containerID="255549cb98e0147de675a6d1cdc639cee7125ae491d428023326f83074cb2aa9" Feb 26 22:12:56 crc kubenswrapper[4910]: I0226 22:12:56.462082 4910 scope.go:117] "RemoveContainer" containerID="d0a1438de4c935ef37f1bf6d63bf3209d203ebbf80e5f35708fcc6f627f99110" Feb 26 22:12:56 crc kubenswrapper[4910]: I0226 22:12:56.483675 4910 scope.go:117] "RemoveContainer" containerID="ce1d4909f612e2769c7540df5851ed4969d2b61eae77557776503e90d797d372" Feb 26 22:12:56 crc kubenswrapper[4910]: E0226 22:12:56.484315 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce1d4909f612e2769c7540df5851ed4969d2b61eae77557776503e90d797d372\": container with ID starting with ce1d4909f612e2769c7540df5851ed4969d2b61eae77557776503e90d797d372 not found: ID does not exist" containerID="ce1d4909f612e2769c7540df5851ed4969d2b61eae77557776503e90d797d372" Feb 26 22:12:56 crc kubenswrapper[4910]: I0226 22:12:56.484346 4910 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ce1d4909f612e2769c7540df5851ed4969d2b61eae77557776503e90d797d372"} err="failed to get container status \"ce1d4909f612e2769c7540df5851ed4969d2b61eae77557776503e90d797d372\": rpc error: code = NotFound desc = could not find container \"ce1d4909f612e2769c7540df5851ed4969d2b61eae77557776503e90d797d372\": container with ID starting with ce1d4909f612e2769c7540df5851ed4969d2b61eae77557776503e90d797d372 not found: ID does not exist" Feb 26 22:12:56 crc kubenswrapper[4910]: I0226 22:12:56.484372 4910 scope.go:117] "RemoveContainer" containerID="255549cb98e0147de675a6d1cdc639cee7125ae491d428023326f83074cb2aa9" Feb 26 22:12:56 crc kubenswrapper[4910]: E0226 22:12:56.489253 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"255549cb98e0147de675a6d1cdc639cee7125ae491d428023326f83074cb2aa9\": container with ID starting with 255549cb98e0147de675a6d1cdc639cee7125ae491d428023326f83074cb2aa9 not found: ID does not exist" containerID="255549cb98e0147de675a6d1cdc639cee7125ae491d428023326f83074cb2aa9" Feb 26 22:12:56 crc kubenswrapper[4910]: I0226 22:12:56.489311 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"255549cb98e0147de675a6d1cdc639cee7125ae491d428023326f83074cb2aa9"} err="failed to get container status \"255549cb98e0147de675a6d1cdc639cee7125ae491d428023326f83074cb2aa9\": rpc error: code = NotFound desc = could not find container \"255549cb98e0147de675a6d1cdc639cee7125ae491d428023326f83074cb2aa9\": container with ID starting with 255549cb98e0147de675a6d1cdc639cee7125ae491d428023326f83074cb2aa9 not found: ID does not exist" Feb 26 22:12:56 crc kubenswrapper[4910]: I0226 22:12:56.489346 4910 scope.go:117] "RemoveContainer" containerID="d0a1438de4c935ef37f1bf6d63bf3209d203ebbf80e5f35708fcc6f627f99110" Feb 26 22:12:56 crc kubenswrapper[4910]: E0226 22:12:56.490354 4910 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d0a1438de4c935ef37f1bf6d63bf3209d203ebbf80e5f35708fcc6f627f99110\": container with ID starting with d0a1438de4c935ef37f1bf6d63bf3209d203ebbf80e5f35708fcc6f627f99110 not found: ID does not exist" containerID="d0a1438de4c935ef37f1bf6d63bf3209d203ebbf80e5f35708fcc6f627f99110" Feb 26 22:12:56 crc kubenswrapper[4910]: I0226 22:12:56.490624 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0a1438de4c935ef37f1bf6d63bf3209d203ebbf80e5f35708fcc6f627f99110"} err="failed to get container status \"d0a1438de4c935ef37f1bf6d63bf3209d203ebbf80e5f35708fcc6f627f99110\": rpc error: code = NotFound desc = could not find container \"d0a1438de4c935ef37f1bf6d63bf3209d203ebbf80e5f35708fcc6f627f99110\": container with ID starting with d0a1438de4c935ef37f1bf6d63bf3209d203ebbf80e5f35708fcc6f627f99110 not found: ID does not exist" Feb 26 22:12:56 crc kubenswrapper[4910]: I0226 22:12:56.508690 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da851e84-0584-42a9-bbee-c3d1268e7017-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da851e84-0584-42a9-bbee-c3d1268e7017" (UID: "da851e84-0584-42a9-bbee-c3d1268e7017"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:12:56 crc kubenswrapper[4910]: I0226 22:12:56.608940 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da851e84-0584-42a9-bbee-c3d1268e7017-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 22:12:56 crc kubenswrapper[4910]: I0226 22:12:56.735774 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ck9pm"] Feb 26 22:12:56 crc kubenswrapper[4910]: I0226 22:12:56.739675 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ck9pm"] Feb 26 22:12:57 crc kubenswrapper[4910]: I0226 22:12:57.909963 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da851e84-0584-42a9-bbee-c3d1268e7017" path="/var/lib/kubelet/pods/da851e84-0584-42a9-bbee-c3d1268e7017/volumes" Feb 26 22:13:00 crc kubenswrapper[4910]: I0226 22:13:00.601363 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-67dbfc649f-tk2gp" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.396109 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-f57cb"] Feb 26 22:13:01 crc kubenswrapper[4910]: E0226 22:13:01.396469 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da851e84-0584-42a9-bbee-c3d1268e7017" containerName="registry-server" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.396492 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="da851e84-0584-42a9-bbee-c3d1268e7017" containerName="registry-server" Feb 26 22:13:01 crc kubenswrapper[4910]: E0226 22:13:01.396510 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da851e84-0584-42a9-bbee-c3d1268e7017" containerName="extract-utilities" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.396518 4910 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="da851e84-0584-42a9-bbee-c3d1268e7017" containerName="extract-utilities" Feb 26 22:13:01 crc kubenswrapper[4910]: E0226 22:13:01.396535 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da851e84-0584-42a9-bbee-c3d1268e7017" containerName="extract-content" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.396544 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="da851e84-0584-42a9-bbee-c3d1268e7017" containerName="extract-content" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.396681 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="da851e84-0584-42a9-bbee-c3d1268e7017" containerName="registry-server" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.397270 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-f57cb" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.401232 4910 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-nmgr7" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.401845 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-l59qt"] Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.404975 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-l59qt" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.408273 4910 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.412004 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-f57cb"] Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.412790 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.416459 4910 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.485997 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-24r95"] Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.486951 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-24r95" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.494276 4910 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.494498 4910 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-vpkf8" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.494521 4910 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.494650 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.498706 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-86ddb6bd46-bvlbc"] Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.499614 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-86ddb6bd46-bvlbc" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.503494 4910 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.513586 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-bvlbc"] Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.518761 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz7bl\" (UniqueName: \"kubernetes.io/projected/75f0b1e1-c3be-4785-8c4f-fc063d622444-kube-api-access-bz7bl\") pod \"frr-k8s-webhook-server-7f989f654f-f57cb\" (UID: \"75f0b1e1-c3be-4785-8c4f-fc063d622444\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-f57cb" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.518802 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a63789b4-9f3a-4ee0-ab34-8f79337060e2-frr-sockets\") pod \"frr-k8s-l59qt\" (UID: \"a63789b4-9f3a-4ee0-ab34-8f79337060e2\") " pod="metallb-system/frr-k8s-l59qt" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.518826 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bzqx\" (UniqueName: \"kubernetes.io/projected/a63789b4-9f3a-4ee0-ab34-8f79337060e2-kube-api-access-6bzqx\") pod \"frr-k8s-l59qt\" (UID: \"a63789b4-9f3a-4ee0-ab34-8f79337060e2\") " pod="metallb-system/frr-k8s-l59qt" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.518848 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a63789b4-9f3a-4ee0-ab34-8f79337060e2-frr-conf\") pod \"frr-k8s-l59qt\" (UID: \"a63789b4-9f3a-4ee0-ab34-8f79337060e2\") " 
pod="metallb-system/frr-k8s-l59qt" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.518863 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a63789b4-9f3a-4ee0-ab34-8f79337060e2-metrics\") pod \"frr-k8s-l59qt\" (UID: \"a63789b4-9f3a-4ee0-ab34-8f79337060e2\") " pod="metallb-system/frr-k8s-l59qt" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.518882 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a63789b4-9f3a-4ee0-ab34-8f79337060e2-frr-startup\") pod \"frr-k8s-l59qt\" (UID: \"a63789b4-9f3a-4ee0-ab34-8f79337060e2\") " pod="metallb-system/frr-k8s-l59qt" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.518901 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a63789b4-9f3a-4ee0-ab34-8f79337060e2-metrics-certs\") pod \"frr-k8s-l59qt\" (UID: \"a63789b4-9f3a-4ee0-ab34-8f79337060e2\") " pod="metallb-system/frr-k8s-l59qt" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.518958 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a63789b4-9f3a-4ee0-ab34-8f79337060e2-reloader\") pod \"frr-k8s-l59qt\" (UID: \"a63789b4-9f3a-4ee0-ab34-8f79337060e2\") " pod="metallb-system/frr-k8s-l59qt" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.519001 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/75f0b1e1-c3be-4785-8c4f-fc063d622444-cert\") pod \"frr-k8s-webhook-server-7f989f654f-f57cb\" (UID: \"75f0b1e1-c3be-4785-8c4f-fc063d622444\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-f57cb" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 
22:13:01.620105 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz7bl\" (UniqueName: \"kubernetes.io/projected/75f0b1e1-c3be-4785-8c4f-fc063d622444-kube-api-access-bz7bl\") pod \"frr-k8s-webhook-server-7f989f654f-f57cb\" (UID: \"75f0b1e1-c3be-4785-8c4f-fc063d622444\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-f57cb" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.620150 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a63789b4-9f3a-4ee0-ab34-8f79337060e2-frr-sockets\") pod \"frr-k8s-l59qt\" (UID: \"a63789b4-9f3a-4ee0-ab34-8f79337060e2\") " pod="metallb-system/frr-k8s-l59qt" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.620200 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bzqx\" (UniqueName: \"kubernetes.io/projected/a63789b4-9f3a-4ee0-ab34-8f79337060e2-kube-api-access-6bzqx\") pod \"frr-k8s-l59qt\" (UID: \"a63789b4-9f3a-4ee0-ab34-8f79337060e2\") " pod="metallb-system/frr-k8s-l59qt" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.620221 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a63789b4-9f3a-4ee0-ab34-8f79337060e2-frr-conf\") pod \"frr-k8s-l59qt\" (UID: \"a63789b4-9f3a-4ee0-ab34-8f79337060e2\") " pod="metallb-system/frr-k8s-l59qt" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.620237 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a63789b4-9f3a-4ee0-ab34-8f79337060e2-metrics\") pod \"frr-k8s-l59qt\" (UID: \"a63789b4-9f3a-4ee0-ab34-8f79337060e2\") " pod="metallb-system/frr-k8s-l59qt" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.620276 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/a63789b4-9f3a-4ee0-ab34-8f79337060e2-frr-startup\") pod \"frr-k8s-l59qt\" (UID: \"a63789b4-9f3a-4ee0-ab34-8f79337060e2\") " pod="metallb-system/frr-k8s-l59qt" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.620292 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a63789b4-9f3a-4ee0-ab34-8f79337060e2-metrics-certs\") pod \"frr-k8s-l59qt\" (UID: \"a63789b4-9f3a-4ee0-ab34-8f79337060e2\") " pod="metallb-system/frr-k8s-l59qt" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.620311 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b4cb88b-545d-40a3-98e2-1ea2a46a7dc1-cert\") pod \"controller-86ddb6bd46-bvlbc\" (UID: \"8b4cb88b-545d-40a3-98e2-1ea2a46a7dc1\") " pod="metallb-system/controller-86ddb6bd46-bvlbc" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.620343 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcd76\" (UniqueName: \"kubernetes.io/projected/8b4cb88b-545d-40a3-98e2-1ea2a46a7dc1-kube-api-access-bcd76\") pod \"controller-86ddb6bd46-bvlbc\" (UID: \"8b4cb88b-545d-40a3-98e2-1ea2a46a7dc1\") " pod="metallb-system/controller-86ddb6bd46-bvlbc" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.620371 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/957ecb93-f3f6-4860-bea4-5977bf0ff619-metallb-excludel2\") pod \"speaker-24r95\" (UID: \"957ecb93-f3f6-4860-bea4-5977bf0ff619\") " pod="metallb-system/speaker-24r95" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.620389 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a63789b4-9f3a-4ee0-ab34-8f79337060e2-reloader\") pod 
\"frr-k8s-l59qt\" (UID: \"a63789b4-9f3a-4ee0-ab34-8f79337060e2\") " pod="metallb-system/frr-k8s-l59qt" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.620435 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q2cd\" (UniqueName: \"kubernetes.io/projected/957ecb93-f3f6-4860-bea4-5977bf0ff619-kube-api-access-6q2cd\") pod \"speaker-24r95\" (UID: \"957ecb93-f3f6-4860-bea4-5977bf0ff619\") " pod="metallb-system/speaker-24r95" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.620457 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b4cb88b-545d-40a3-98e2-1ea2a46a7dc1-metrics-certs\") pod \"controller-86ddb6bd46-bvlbc\" (UID: \"8b4cb88b-545d-40a3-98e2-1ea2a46a7dc1\") " pod="metallb-system/controller-86ddb6bd46-bvlbc" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.620480 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/75f0b1e1-c3be-4785-8c4f-fc063d622444-cert\") pod \"frr-k8s-webhook-server-7f989f654f-f57cb\" (UID: \"75f0b1e1-c3be-4785-8c4f-fc063d622444\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-f57cb" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.620527 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/957ecb93-f3f6-4860-bea4-5977bf0ff619-metrics-certs\") pod \"speaker-24r95\" (UID: \"957ecb93-f3f6-4860-bea4-5977bf0ff619\") " pod="metallb-system/speaker-24r95" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.620547 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/957ecb93-f3f6-4860-bea4-5977bf0ff619-memberlist\") pod \"speaker-24r95\" (UID: 
\"957ecb93-f3f6-4860-bea4-5977bf0ff619\") " pod="metallb-system/speaker-24r95" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.621232 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a63789b4-9f3a-4ee0-ab34-8f79337060e2-frr-sockets\") pod \"frr-k8s-l59qt\" (UID: \"a63789b4-9f3a-4ee0-ab34-8f79337060e2\") " pod="metallb-system/frr-k8s-l59qt" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.621549 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a63789b4-9f3a-4ee0-ab34-8f79337060e2-frr-conf\") pod \"frr-k8s-l59qt\" (UID: \"a63789b4-9f3a-4ee0-ab34-8f79337060e2\") " pod="metallb-system/frr-k8s-l59qt" Feb 26 22:13:01 crc kubenswrapper[4910]: E0226 22:13:01.621614 4910 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 26 22:13:01 crc kubenswrapper[4910]: E0226 22:13:01.621653 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75f0b1e1-c3be-4785-8c4f-fc063d622444-cert podName:75f0b1e1-c3be-4785-8c4f-fc063d622444 nodeName:}" failed. No retries permitted until 2026-02-26 22:13:02.121639283 +0000 UTC m=+1067.201129824 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/75f0b1e1-c3be-4785-8c4f-fc063d622444-cert") pod "frr-k8s-webhook-server-7f989f654f-f57cb" (UID: "75f0b1e1-c3be-4785-8c4f-fc063d622444") : secret "frr-k8s-webhook-server-cert" not found Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.621944 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a63789b4-9f3a-4ee0-ab34-8f79337060e2-reloader\") pod \"frr-k8s-l59qt\" (UID: \"a63789b4-9f3a-4ee0-ab34-8f79337060e2\") " pod="metallb-system/frr-k8s-l59qt" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.622098 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a63789b4-9f3a-4ee0-ab34-8f79337060e2-metrics\") pod \"frr-k8s-l59qt\" (UID: \"a63789b4-9f3a-4ee0-ab34-8f79337060e2\") " pod="metallb-system/frr-k8s-l59qt" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.623064 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a63789b4-9f3a-4ee0-ab34-8f79337060e2-frr-startup\") pod \"frr-k8s-l59qt\" (UID: \"a63789b4-9f3a-4ee0-ab34-8f79337060e2\") " pod="metallb-system/frr-k8s-l59qt" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.629197 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a63789b4-9f3a-4ee0-ab34-8f79337060e2-metrics-certs\") pod \"frr-k8s-l59qt\" (UID: \"a63789b4-9f3a-4ee0-ab34-8f79337060e2\") " pod="metallb-system/frr-k8s-l59qt" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.644669 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz7bl\" (UniqueName: \"kubernetes.io/projected/75f0b1e1-c3be-4785-8c4f-fc063d622444-kube-api-access-bz7bl\") pod \"frr-k8s-webhook-server-7f989f654f-f57cb\" (UID: 
\"75f0b1e1-c3be-4785-8c4f-fc063d622444\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-f57cb" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.650925 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bzqx\" (UniqueName: \"kubernetes.io/projected/a63789b4-9f3a-4ee0-ab34-8f79337060e2-kube-api-access-6bzqx\") pod \"frr-k8s-l59qt\" (UID: \"a63789b4-9f3a-4ee0-ab34-8f79337060e2\") " pod="metallb-system/frr-k8s-l59qt" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.722049 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q2cd\" (UniqueName: \"kubernetes.io/projected/957ecb93-f3f6-4860-bea4-5977bf0ff619-kube-api-access-6q2cd\") pod \"speaker-24r95\" (UID: \"957ecb93-f3f6-4860-bea4-5977bf0ff619\") " pod="metallb-system/speaker-24r95" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.722379 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b4cb88b-545d-40a3-98e2-1ea2a46a7dc1-metrics-certs\") pod \"controller-86ddb6bd46-bvlbc\" (UID: \"8b4cb88b-545d-40a3-98e2-1ea2a46a7dc1\") " pod="metallb-system/controller-86ddb6bd46-bvlbc" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.722522 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/957ecb93-f3f6-4860-bea4-5977bf0ff619-metrics-certs\") pod \"speaker-24r95\" (UID: \"957ecb93-f3f6-4860-bea4-5977bf0ff619\") " pod="metallb-system/speaker-24r95" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.722603 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/957ecb93-f3f6-4860-bea4-5977bf0ff619-memberlist\") pod \"speaker-24r95\" (UID: \"957ecb93-f3f6-4860-bea4-5977bf0ff619\") " pod="metallb-system/speaker-24r95" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 
22:13:01.722711 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b4cb88b-545d-40a3-98e2-1ea2a46a7dc1-cert\") pod \"controller-86ddb6bd46-bvlbc\" (UID: \"8b4cb88b-545d-40a3-98e2-1ea2a46a7dc1\") " pod="metallb-system/controller-86ddb6bd46-bvlbc" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.722787 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcd76\" (UniqueName: \"kubernetes.io/projected/8b4cb88b-545d-40a3-98e2-1ea2a46a7dc1-kube-api-access-bcd76\") pod \"controller-86ddb6bd46-bvlbc\" (UID: \"8b4cb88b-545d-40a3-98e2-1ea2a46a7dc1\") " pod="metallb-system/controller-86ddb6bd46-bvlbc" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.722908 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/957ecb93-f3f6-4860-bea4-5977bf0ff619-metallb-excludel2\") pod \"speaker-24r95\" (UID: \"957ecb93-f3f6-4860-bea4-5977bf0ff619\") " pod="metallb-system/speaker-24r95" Feb 26 22:13:01 crc kubenswrapper[4910]: E0226 22:13:01.722978 4910 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 26 22:13:01 crc kubenswrapper[4910]: E0226 22:13:01.723859 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/957ecb93-f3f6-4860-bea4-5977bf0ff619-memberlist podName:957ecb93-f3f6-4860-bea4-5977bf0ff619 nodeName:}" failed. No retries permitted until 2026-02-26 22:13:02.22384378 +0000 UTC m=+1067.303334321 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/957ecb93-f3f6-4860-bea4-5977bf0ff619-memberlist") pod "speaker-24r95" (UID: "957ecb93-f3f6-4860-bea4-5977bf0ff619") : secret "metallb-memberlist" not found Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.723686 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/957ecb93-f3f6-4860-bea4-5977bf0ff619-metallb-excludel2\") pod \"speaker-24r95\" (UID: \"957ecb93-f3f6-4860-bea4-5977bf0ff619\") " pod="metallb-system/speaker-24r95" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.728557 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b4cb88b-545d-40a3-98e2-1ea2a46a7dc1-cert\") pod \"controller-86ddb6bd46-bvlbc\" (UID: \"8b4cb88b-545d-40a3-98e2-1ea2a46a7dc1\") " pod="metallb-system/controller-86ddb6bd46-bvlbc" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.729237 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b4cb88b-545d-40a3-98e2-1ea2a46a7dc1-metrics-certs\") pod \"controller-86ddb6bd46-bvlbc\" (UID: \"8b4cb88b-545d-40a3-98e2-1ea2a46a7dc1\") " pod="metallb-system/controller-86ddb6bd46-bvlbc" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.729651 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/957ecb93-f3f6-4860-bea4-5977bf0ff619-metrics-certs\") pod \"speaker-24r95\" (UID: \"957ecb93-f3f6-4860-bea4-5977bf0ff619\") " pod="metallb-system/speaker-24r95" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.731754 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-l59qt" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.738657 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcd76\" (UniqueName: \"kubernetes.io/projected/8b4cb88b-545d-40a3-98e2-1ea2a46a7dc1-kube-api-access-bcd76\") pod \"controller-86ddb6bd46-bvlbc\" (UID: \"8b4cb88b-545d-40a3-98e2-1ea2a46a7dc1\") " pod="metallb-system/controller-86ddb6bd46-bvlbc" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.747514 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q2cd\" (UniqueName: \"kubernetes.io/projected/957ecb93-f3f6-4860-bea4-5977bf0ff619-kube-api-access-6q2cd\") pod \"speaker-24r95\" (UID: \"957ecb93-f3f6-4860-bea4-5977bf0ff619\") " pod="metallb-system/speaker-24r95" Feb 26 22:13:01 crc kubenswrapper[4910]: I0226 22:13:01.818509 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-86ddb6bd46-bvlbc" Feb 26 22:13:02 crc kubenswrapper[4910]: I0226 22:13:02.128580 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/75f0b1e1-c3be-4785-8c4f-fc063d622444-cert\") pod \"frr-k8s-webhook-server-7f989f654f-f57cb\" (UID: \"75f0b1e1-c3be-4785-8c4f-fc063d622444\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-f57cb" Feb 26 22:13:02 crc kubenswrapper[4910]: I0226 22:13:02.134666 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/75f0b1e1-c3be-4785-8c4f-fc063d622444-cert\") pod \"frr-k8s-webhook-server-7f989f654f-f57cb\" (UID: \"75f0b1e1-c3be-4785-8c4f-fc063d622444\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-f57cb" Feb 26 22:13:02 crc kubenswrapper[4910]: I0226 22:13:02.230657 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/957ecb93-f3f6-4860-bea4-5977bf0ff619-memberlist\") pod \"speaker-24r95\" (UID: \"957ecb93-f3f6-4860-bea4-5977bf0ff619\") " pod="metallb-system/speaker-24r95" Feb 26 22:13:02 crc kubenswrapper[4910]: E0226 22:13:02.230973 4910 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 26 22:13:02 crc kubenswrapper[4910]: E0226 22:13:02.231083 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/957ecb93-f3f6-4860-bea4-5977bf0ff619-memberlist podName:957ecb93-f3f6-4860-bea4-5977bf0ff619 nodeName:}" failed. No retries permitted until 2026-02-26 22:13:03.231057681 +0000 UTC m=+1068.310548232 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/957ecb93-f3f6-4860-bea4-5977bf0ff619-memberlist") pod "speaker-24r95" (UID: "957ecb93-f3f6-4860-bea4-5977bf0ff619") : secret "metallb-memberlist" not found Feb 26 22:13:02 crc kubenswrapper[4910]: I0226 22:13:02.295730 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-bvlbc"] Feb 26 22:13:02 crc kubenswrapper[4910]: I0226 22:13:02.321759 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-f57cb" Feb 26 22:13:02 crc kubenswrapper[4910]: I0226 22:13:02.468957 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-bvlbc" event={"ID":"8b4cb88b-545d-40a3-98e2-1ea2a46a7dc1","Type":"ContainerStarted","Data":"441d8e632b8c325be8cdfb737b6f4c420fe7b1f3fcacab7053d413b5f844327b"} Feb 26 22:13:02 crc kubenswrapper[4910]: I0226 22:13:02.470012 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l59qt" event={"ID":"a63789b4-9f3a-4ee0-ab34-8f79337060e2","Type":"ContainerStarted","Data":"af6ab4eb280c5e899877cfd4c58e80cad6438876426ed5d24bdbd23879acf0e0"} Feb 26 22:13:02 crc kubenswrapper[4910]: I0226 22:13:02.796003 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-f57cb"] Feb 26 22:13:02 crc kubenswrapper[4910]: W0226 22:13:02.812341 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75f0b1e1_c3be_4785_8c4f_fc063d622444.slice/crio-d42fa1f56b90f24c824d0554a58d11579f5c1f6f0f39e04d3f009620fa2918ca WatchSource:0}: Error finding container d42fa1f56b90f24c824d0554a58d11579f5c1f6f0f39e04d3f009620fa2918ca: Status 404 returned error can't find the container with id d42fa1f56b90f24c824d0554a58d11579f5c1f6f0f39e04d3f009620fa2918ca Feb 26 22:13:03 crc kubenswrapper[4910]: I0226 22:13:03.245077 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/957ecb93-f3f6-4860-bea4-5977bf0ff619-memberlist\") pod \"speaker-24r95\" (UID: \"957ecb93-f3f6-4860-bea4-5977bf0ff619\") " pod="metallb-system/speaker-24r95" Feb 26 22:13:03 crc kubenswrapper[4910]: I0226 22:13:03.250477 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/957ecb93-f3f6-4860-bea4-5977bf0ff619-memberlist\") pod \"speaker-24r95\" (UID: \"957ecb93-f3f6-4860-bea4-5977bf0ff619\") " pod="metallb-system/speaker-24r95" Feb 26 22:13:03 crc kubenswrapper[4910]: I0226 22:13:03.309021 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-24r95" Feb 26 22:13:03 crc kubenswrapper[4910]: W0226 22:13:03.330353 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod957ecb93_f3f6_4860_bea4_5977bf0ff619.slice/crio-909743ba0a46796a03d7cf67536a9b437b975fee37e8d95393abfb3c06c2d5d1 WatchSource:0}: Error finding container 909743ba0a46796a03d7cf67536a9b437b975fee37e8d95393abfb3c06c2d5d1: Status 404 returned error can't find the container with id 909743ba0a46796a03d7cf67536a9b437b975fee37e8d95393abfb3c06c2d5d1 Feb 26 22:13:03 crc kubenswrapper[4910]: I0226 22:13:03.479283 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-24r95" event={"ID":"957ecb93-f3f6-4860-bea4-5977bf0ff619","Type":"ContainerStarted","Data":"909743ba0a46796a03d7cf67536a9b437b975fee37e8d95393abfb3c06c2d5d1"} Feb 26 22:13:03 crc kubenswrapper[4910]: I0226 22:13:03.482391 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-bvlbc" event={"ID":"8b4cb88b-545d-40a3-98e2-1ea2a46a7dc1","Type":"ContainerStarted","Data":"12917284613d3ec6bbdcd73f515779541948b1b04f1a40822bcd9b1b5ac789eb"} Feb 26 22:13:03 crc kubenswrapper[4910]: I0226 22:13:03.482440 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-bvlbc" event={"ID":"8b4cb88b-545d-40a3-98e2-1ea2a46a7dc1","Type":"ContainerStarted","Data":"8d8c31f6c240fa960126ae067503c7cd5018284fe71b5ab8b0ad0aa4b702332f"} Feb 26 22:13:03 crc kubenswrapper[4910]: I0226 22:13:03.482568 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/controller-86ddb6bd46-bvlbc" Feb 26 22:13:03 crc kubenswrapper[4910]: I0226 22:13:03.483990 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-f57cb" event={"ID":"75f0b1e1-c3be-4785-8c4f-fc063d622444","Type":"ContainerStarted","Data":"d42fa1f56b90f24c824d0554a58d11579f5c1f6f0f39e04d3f009620fa2918ca"} Feb 26 22:13:03 crc kubenswrapper[4910]: I0226 22:13:03.513675 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-86ddb6bd46-bvlbc" podStartSLOduration=2.5136284829999997 podStartE2EDuration="2.513628483s" podCreationTimestamp="2026-02-26 22:13:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:13:03.507752474 +0000 UTC m=+1068.587243055" watchObservedRunningTime="2026-02-26 22:13:03.513628483 +0000 UTC m=+1068.593119034" Feb 26 22:13:04 crc kubenswrapper[4910]: I0226 22:13:04.502817 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-24r95" event={"ID":"957ecb93-f3f6-4860-bea4-5977bf0ff619","Type":"ContainerStarted","Data":"5753a99785554841034834ddf77783a1ee28ef20ac466ddb7396ec066dfddae9"} Feb 26 22:13:04 crc kubenswrapper[4910]: I0226 22:13:04.502869 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-24r95" event={"ID":"957ecb93-f3f6-4860-bea4-5977bf0ff619","Type":"ContainerStarted","Data":"e22d2912ae6e2dc0c67f069fb44e10410b512b84d19f1a5170d09cffa3837e2f"} Feb 26 22:13:04 crc kubenswrapper[4910]: I0226 22:13:04.529544 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-24r95" podStartSLOduration=3.529521286 podStartE2EDuration="3.529521286s" podCreationTimestamp="2026-02-26 22:13:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 
22:13:04.528494148 +0000 UTC m=+1069.607984689" watchObservedRunningTime="2026-02-26 22:13:04.529521286 +0000 UTC m=+1069.609011827" Feb 26 22:13:05 crc kubenswrapper[4910]: I0226 22:13:05.512493 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-24r95" Feb 26 22:13:10 crc kubenswrapper[4910]: I0226 22:13:10.569085 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-f57cb" event={"ID":"75f0b1e1-c3be-4785-8c4f-fc063d622444","Type":"ContainerStarted","Data":"9b6c2c587707c9f5fc196a88975d0a43423e4bb8d219be5497151f3d6a61e414"} Feb 26 22:13:10 crc kubenswrapper[4910]: I0226 22:13:10.570139 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-f57cb" Feb 26 22:13:10 crc kubenswrapper[4910]: I0226 22:13:10.574683 4910 generic.go:334] "Generic (PLEG): container finished" podID="a63789b4-9f3a-4ee0-ab34-8f79337060e2" containerID="458e48ef5e1b7425a958f81bb439ea67c775044ffc410d4e58f15398d9f67e1d" exitCode=0 Feb 26 22:13:10 crc kubenswrapper[4910]: I0226 22:13:10.574750 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l59qt" event={"ID":"a63789b4-9f3a-4ee0-ab34-8f79337060e2","Type":"ContainerDied","Data":"458e48ef5e1b7425a958f81bb439ea67c775044ffc410d4e58f15398d9f67e1d"} Feb 26 22:13:10 crc kubenswrapper[4910]: I0226 22:13:10.594865 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-f57cb" podStartSLOduration=2.672273417 podStartE2EDuration="9.594841027s" podCreationTimestamp="2026-02-26 22:13:01 +0000 UTC" firstStartedPulling="2026-02-26 22:13:02.819052579 +0000 UTC m=+1067.898543120" lastFinishedPulling="2026-02-26 22:13:09.741620149 +0000 UTC m=+1074.821110730" observedRunningTime="2026-02-26 22:13:10.594486308 +0000 UTC m=+1075.673976879" watchObservedRunningTime="2026-02-26 22:13:10.594841027 
+0000 UTC m=+1075.674331578" Feb 26 22:13:11 crc kubenswrapper[4910]: I0226 22:13:11.586797 4910 generic.go:334] "Generic (PLEG): container finished" podID="a63789b4-9f3a-4ee0-ab34-8f79337060e2" containerID="7514051d942471cd5990f206935943323a8493ea76a5a62c2df07a0ee7d96a3f" exitCode=0 Feb 26 22:13:11 crc kubenswrapper[4910]: I0226 22:13:11.586873 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l59qt" event={"ID":"a63789b4-9f3a-4ee0-ab34-8f79337060e2","Type":"ContainerDied","Data":"7514051d942471cd5990f206935943323a8493ea76a5a62c2df07a0ee7d96a3f"} Feb 26 22:13:12 crc kubenswrapper[4910]: I0226 22:13:12.600418 4910 generic.go:334] "Generic (PLEG): container finished" podID="a63789b4-9f3a-4ee0-ab34-8f79337060e2" containerID="464cd5bec63fa7159b9c34701da0022b522b1fbd8e054fa5747471d8db7e8fcf" exitCode=0 Feb 26 22:13:12 crc kubenswrapper[4910]: I0226 22:13:12.600548 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l59qt" event={"ID":"a63789b4-9f3a-4ee0-ab34-8f79337060e2","Type":"ContainerDied","Data":"464cd5bec63fa7159b9c34701da0022b522b1fbd8e054fa5747471d8db7e8fcf"} Feb 26 22:13:13 crc kubenswrapper[4910]: I0226 22:13:13.313155 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-24r95" Feb 26 22:13:13 crc kubenswrapper[4910]: I0226 22:13:13.617475 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l59qt" event={"ID":"a63789b4-9f3a-4ee0-ab34-8f79337060e2","Type":"ContainerStarted","Data":"9780268074de700878fdb69614869503fda862f97ef9b501e062e1dea198d775"} Feb 26 22:13:13 crc kubenswrapper[4910]: I0226 22:13:13.617515 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l59qt" event={"ID":"a63789b4-9f3a-4ee0-ab34-8f79337060e2","Type":"ContainerStarted","Data":"a46d3a452988e9b7fdb260cfd2b920ab164344968b409349da1f2441e15e8ed5"} Feb 26 22:13:13 crc kubenswrapper[4910]: I0226 22:13:13.617525 4910 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l59qt" event={"ID":"a63789b4-9f3a-4ee0-ab34-8f79337060e2","Type":"ContainerStarted","Data":"04ad52fc7853355b8905635ae1b8792ab1e395ba0c98a38324e6d9c64d76daf9"} Feb 26 22:13:13 crc kubenswrapper[4910]: I0226 22:13:13.617534 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l59qt" event={"ID":"a63789b4-9f3a-4ee0-ab34-8f79337060e2","Type":"ContainerStarted","Data":"04ae4d84c5e397917da2783aa992db6b56d16e93616c95fde337d69d33fc945f"} Feb 26 22:13:13 crc kubenswrapper[4910]: I0226 22:13:13.617542 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l59qt" event={"ID":"a63789b4-9f3a-4ee0-ab34-8f79337060e2","Type":"ContainerStarted","Data":"cc9124f53a762f802dfbccde7daec4a5d8ead358e8c72acec052833144baae87"} Feb 26 22:13:14 crc kubenswrapper[4910]: I0226 22:13:14.634101 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l59qt" event={"ID":"a63789b4-9f3a-4ee0-ab34-8f79337060e2","Type":"ContainerStarted","Data":"3df7fd85d8818eef2a63ed130cb8d07cc57f1e569d2cc1013b4dfacb135f6d25"} Feb 26 22:13:14 crc kubenswrapper[4910]: I0226 22:13:14.678908 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-l59qt" podStartSLOduration=5.835854134 podStartE2EDuration="13.678888632s" podCreationTimestamp="2026-02-26 22:13:01 +0000 UTC" firstStartedPulling="2026-02-26 22:13:01.875262489 +0000 UTC m=+1066.954753040" lastFinishedPulling="2026-02-26 22:13:09.718296987 +0000 UTC m=+1074.797787538" observedRunningTime="2026-02-26 22:13:14.672421097 +0000 UTC m=+1079.751911678" watchObservedRunningTime="2026-02-26 22:13:14.678888632 +0000 UTC m=+1079.758379183" Feb 26 22:13:15 crc kubenswrapper[4910]: I0226 22:13:15.643033 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-l59qt" Feb 26 22:13:16 crc kubenswrapper[4910]: I0226 22:13:16.203989 
4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-pbfmt"] Feb 26 22:13:16 crc kubenswrapper[4910]: I0226 22:13:16.204961 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pbfmt" Feb 26 22:13:16 crc kubenswrapper[4910]: I0226 22:13:16.208490 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-8zhgh" Feb 26 22:13:16 crc kubenswrapper[4910]: I0226 22:13:16.210347 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 26 22:13:16 crc kubenswrapper[4910]: I0226 22:13:16.211572 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 26 22:13:16 crc kubenswrapper[4910]: I0226 22:13:16.224604 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pbfmt"] Feb 26 22:13:16 crc kubenswrapper[4910]: I0226 22:13:16.332384 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh7jd\" (UniqueName: \"kubernetes.io/projected/fc0a8756-3c6f-4c29-b47e-5e47ab6b7baa-kube-api-access-wh7jd\") pod \"openstack-operator-index-pbfmt\" (UID: \"fc0a8756-3c6f-4c29-b47e-5e47ab6b7baa\") " pod="openstack-operators/openstack-operator-index-pbfmt" Feb 26 22:13:16 crc kubenswrapper[4910]: I0226 22:13:16.433517 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh7jd\" (UniqueName: \"kubernetes.io/projected/fc0a8756-3c6f-4c29-b47e-5e47ab6b7baa-kube-api-access-wh7jd\") pod \"openstack-operator-index-pbfmt\" (UID: \"fc0a8756-3c6f-4c29-b47e-5e47ab6b7baa\") " pod="openstack-operators/openstack-operator-index-pbfmt" Feb 26 22:13:16 crc kubenswrapper[4910]: I0226 22:13:16.477041 4910 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wh7jd\" (UniqueName: \"kubernetes.io/projected/fc0a8756-3c6f-4c29-b47e-5e47ab6b7baa-kube-api-access-wh7jd\") pod \"openstack-operator-index-pbfmt\" (UID: \"fc0a8756-3c6f-4c29-b47e-5e47ab6b7baa\") " pod="openstack-operators/openstack-operator-index-pbfmt" Feb 26 22:13:16 crc kubenswrapper[4910]: I0226 22:13:16.524901 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pbfmt" Feb 26 22:13:16 crc kubenswrapper[4910]: I0226 22:13:16.734601 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-l59qt" Feb 26 22:13:16 crc kubenswrapper[4910]: I0226 22:13:16.771878 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-l59qt" Feb 26 22:13:17 crc kubenswrapper[4910]: I0226 22:13:17.065066 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pbfmt"] Feb 26 22:13:17 crc kubenswrapper[4910]: W0226 22:13:17.074050 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc0a8756_3c6f_4c29_b47e_5e47ab6b7baa.slice/crio-6c04abc2c5a9304030857d428629ace9ca50e51a193258d5f0facbe76b9cafd2 WatchSource:0}: Error finding container 6c04abc2c5a9304030857d428629ace9ca50e51a193258d5f0facbe76b9cafd2: Status 404 returned error can't find the container with id 6c04abc2c5a9304030857d428629ace9ca50e51a193258d5f0facbe76b9cafd2 Feb 26 22:13:17 crc kubenswrapper[4910]: I0226 22:13:17.667146 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pbfmt" event={"ID":"fc0a8756-3c6f-4c29-b47e-5e47ab6b7baa","Type":"ContainerStarted","Data":"6c04abc2c5a9304030857d428629ace9ca50e51a193258d5f0facbe76b9cafd2"} Feb 26 22:13:19 crc kubenswrapper[4910]: I0226 22:13:19.580434 4910 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack-operators/openstack-operator-index-pbfmt"] Feb 26 22:13:20 crc kubenswrapper[4910]: I0226 22:13:20.190985 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-2wxf5"] Feb 26 22:13:20 crc kubenswrapper[4910]: I0226 22:13:20.192275 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2wxf5" Feb 26 22:13:20 crc kubenswrapper[4910]: I0226 22:13:20.214395 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2wxf5"] Feb 26 22:13:20 crc kubenswrapper[4910]: I0226 22:13:20.291201 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2b7v\" (UniqueName: \"kubernetes.io/projected/a9343243-68ce-4316-bac0-563847d32436-kube-api-access-v2b7v\") pod \"openstack-operator-index-2wxf5\" (UID: \"a9343243-68ce-4316-bac0-563847d32436\") " pod="openstack-operators/openstack-operator-index-2wxf5" Feb 26 22:13:20 crc kubenswrapper[4910]: I0226 22:13:20.393026 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2b7v\" (UniqueName: \"kubernetes.io/projected/a9343243-68ce-4316-bac0-563847d32436-kube-api-access-v2b7v\") pod \"openstack-operator-index-2wxf5\" (UID: \"a9343243-68ce-4316-bac0-563847d32436\") " pod="openstack-operators/openstack-operator-index-2wxf5" Feb 26 22:13:20 crc kubenswrapper[4910]: I0226 22:13:20.416357 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2b7v\" (UniqueName: \"kubernetes.io/projected/a9343243-68ce-4316-bac0-563847d32436-kube-api-access-v2b7v\") pod \"openstack-operator-index-2wxf5\" (UID: \"a9343243-68ce-4316-bac0-563847d32436\") " pod="openstack-operators/openstack-operator-index-2wxf5" Feb 26 22:13:20 crc kubenswrapper[4910]: I0226 22:13:20.528319 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-2wxf5" Feb 26 22:13:21 crc kubenswrapper[4910]: I0226 22:13:21.300546 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2wxf5"] Feb 26 22:13:21 crc kubenswrapper[4910]: I0226 22:13:21.708082 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pbfmt" event={"ID":"fc0a8756-3c6f-4c29-b47e-5e47ab6b7baa","Type":"ContainerStarted","Data":"0e17c5bef37538709498eb7942a5f7485d38065b0fed9259a34dbcf851317e85"} Feb 26 22:13:21 crc kubenswrapper[4910]: I0226 22:13:21.708283 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-pbfmt" podUID="fc0a8756-3c6f-4c29-b47e-5e47ab6b7baa" containerName="registry-server" containerID="cri-o://0e17c5bef37538709498eb7942a5f7485d38065b0fed9259a34dbcf851317e85" gracePeriod=2 Feb 26 22:13:21 crc kubenswrapper[4910]: I0226 22:13:21.709812 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2wxf5" event={"ID":"a9343243-68ce-4316-bac0-563847d32436","Type":"ContainerStarted","Data":"4498845168b0f5486eeff7b11aedfc711cd35ba3b6f4e10208581d9ca6e98437"} Feb 26 22:13:21 crc kubenswrapper[4910]: I0226 22:13:21.709854 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2wxf5" event={"ID":"a9343243-68ce-4316-bac0-563847d32436","Type":"ContainerStarted","Data":"53068a310801e61345d1878f3eae1f6f76a2e6ab68d6aa2c7e9af1af9f452500"} Feb 26 22:13:21 crc kubenswrapper[4910]: I0226 22:13:21.731886 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-pbfmt" podStartSLOduration=1.6899707290000001 podStartE2EDuration="5.731859473s" podCreationTimestamp="2026-02-26 22:13:16 +0000 UTC" firstStartedPulling="2026-02-26 22:13:17.077508318 +0000 UTC 
m=+1082.156998859" lastFinishedPulling="2026-02-26 22:13:21.119397062 +0000 UTC m=+1086.198887603" observedRunningTime="2026-02-26 22:13:21.726260891 +0000 UTC m=+1086.805751442" watchObservedRunningTime="2026-02-26 22:13:21.731859473 +0000 UTC m=+1086.811350004" Feb 26 22:13:21 crc kubenswrapper[4910]: I0226 22:13:21.832355 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-86ddb6bd46-bvlbc" Feb 26 22:13:21 crc kubenswrapper[4910]: I0226 22:13:21.854677 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-2wxf5" podStartSLOduration=1.799889903 podStartE2EDuration="1.854662907s" podCreationTimestamp="2026-02-26 22:13:20 +0000 UTC" firstStartedPulling="2026-02-26 22:13:21.315688805 +0000 UTC m=+1086.395179356" lastFinishedPulling="2026-02-26 22:13:21.370461819 +0000 UTC m=+1086.449952360" observedRunningTime="2026-02-26 22:13:21.751361881 +0000 UTC m=+1086.830852422" watchObservedRunningTime="2026-02-26 22:13:21.854662907 +0000 UTC m=+1086.934153448" Feb 26 22:13:22 crc kubenswrapper[4910]: I0226 22:13:22.154315 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-pbfmt" Feb 26 22:13:22 crc kubenswrapper[4910]: I0226 22:13:22.225888 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh7jd\" (UniqueName: \"kubernetes.io/projected/fc0a8756-3c6f-4c29-b47e-5e47ab6b7baa-kube-api-access-wh7jd\") pod \"fc0a8756-3c6f-4c29-b47e-5e47ab6b7baa\" (UID: \"fc0a8756-3c6f-4c29-b47e-5e47ab6b7baa\") " Feb 26 22:13:22 crc kubenswrapper[4910]: I0226 22:13:22.233236 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc0a8756-3c6f-4c29-b47e-5e47ab6b7baa-kube-api-access-wh7jd" (OuterVolumeSpecName: "kube-api-access-wh7jd") pod "fc0a8756-3c6f-4c29-b47e-5e47ab6b7baa" (UID: "fc0a8756-3c6f-4c29-b47e-5e47ab6b7baa"). InnerVolumeSpecName "kube-api-access-wh7jd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:13:22 crc kubenswrapper[4910]: I0226 22:13:22.327696 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-f57cb" Feb 26 22:13:22 crc kubenswrapper[4910]: I0226 22:13:22.327809 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh7jd\" (UniqueName: \"kubernetes.io/projected/fc0a8756-3c6f-4c29-b47e-5e47ab6b7baa-kube-api-access-wh7jd\") on node \"crc\" DevicePath \"\"" Feb 26 22:13:22 crc kubenswrapper[4910]: I0226 22:13:22.719921 4910 generic.go:334] "Generic (PLEG): container finished" podID="fc0a8756-3c6f-4c29-b47e-5e47ab6b7baa" containerID="0e17c5bef37538709498eb7942a5f7485d38065b0fed9259a34dbcf851317e85" exitCode=0 Feb 26 22:13:22 crc kubenswrapper[4910]: I0226 22:13:22.719986 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pbfmt" event={"ID":"fc0a8756-3c6f-4c29-b47e-5e47ab6b7baa","Type":"ContainerDied","Data":"0e17c5bef37538709498eb7942a5f7485d38065b0fed9259a34dbcf851317e85"} Feb 26 22:13:22 crc 
kubenswrapper[4910]: I0226 22:13:22.720022 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pbfmt" Feb 26 22:13:22 crc kubenswrapper[4910]: I0226 22:13:22.720655 4910 scope.go:117] "RemoveContainer" containerID="0e17c5bef37538709498eb7942a5f7485d38065b0fed9259a34dbcf851317e85" Feb 26 22:13:22 crc kubenswrapper[4910]: I0226 22:13:22.720556 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pbfmt" event={"ID":"fc0a8756-3c6f-4c29-b47e-5e47ab6b7baa","Type":"ContainerDied","Data":"6c04abc2c5a9304030857d428629ace9ca50e51a193258d5f0facbe76b9cafd2"} Feb 26 22:13:22 crc kubenswrapper[4910]: I0226 22:13:22.753476 4910 scope.go:117] "RemoveContainer" containerID="0e17c5bef37538709498eb7942a5f7485d38065b0fed9259a34dbcf851317e85" Feb 26 22:13:22 crc kubenswrapper[4910]: E0226 22:13:22.753880 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e17c5bef37538709498eb7942a5f7485d38065b0fed9259a34dbcf851317e85\": container with ID starting with 0e17c5bef37538709498eb7942a5f7485d38065b0fed9259a34dbcf851317e85 not found: ID does not exist" containerID="0e17c5bef37538709498eb7942a5f7485d38065b0fed9259a34dbcf851317e85" Feb 26 22:13:22 crc kubenswrapper[4910]: I0226 22:13:22.753908 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e17c5bef37538709498eb7942a5f7485d38065b0fed9259a34dbcf851317e85"} err="failed to get container status \"0e17c5bef37538709498eb7942a5f7485d38065b0fed9259a34dbcf851317e85\": rpc error: code = NotFound desc = could not find container \"0e17c5bef37538709498eb7942a5f7485d38065b0fed9259a34dbcf851317e85\": container with ID starting with 0e17c5bef37538709498eb7942a5f7485d38065b0fed9259a34dbcf851317e85 not found: ID does not exist" Feb 26 22:13:22 crc kubenswrapper[4910]: I0226 22:13:22.764132 4910 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-pbfmt"] Feb 26 22:13:22 crc kubenswrapper[4910]: I0226 22:13:22.774749 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-pbfmt"] Feb 26 22:13:23 crc kubenswrapper[4910]: I0226 22:13:23.930632 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc0a8756-3c6f-4c29-b47e-5e47ab6b7baa" path="/var/lib/kubelet/pods/fc0a8756-3c6f-4c29-b47e-5e47ab6b7baa/volumes" Feb 26 22:13:25 crc kubenswrapper[4910]: I0226 22:13:25.728072 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:13:25 crc kubenswrapper[4910]: I0226 22:13:25.729362 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:13:30 crc kubenswrapper[4910]: I0226 22:13:30.529492 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-2wxf5" Feb 26 22:13:30 crc kubenswrapper[4910]: I0226 22:13:30.530192 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-2wxf5" Feb 26 22:13:30 crc kubenswrapper[4910]: I0226 22:13:30.573584 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-2wxf5" Feb 26 22:13:30 crc kubenswrapper[4910]: I0226 22:13:30.820342 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-index-2wxf5" Feb 26 22:13:31 crc kubenswrapper[4910]: I0226 22:13:31.744857 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-l59qt" Feb 26 22:13:32 crc kubenswrapper[4910]: I0226 22:13:32.682642 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf"] Feb 26 22:13:32 crc kubenswrapper[4910]: E0226 22:13:32.682910 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc0a8756-3c6f-4c29-b47e-5e47ab6b7baa" containerName="registry-server" Feb 26 22:13:32 crc kubenswrapper[4910]: I0226 22:13:32.682926 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc0a8756-3c6f-4c29-b47e-5e47ab6b7baa" containerName="registry-server" Feb 26 22:13:32 crc kubenswrapper[4910]: I0226 22:13:32.683068 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc0a8756-3c6f-4c29-b47e-5e47ab6b7baa" containerName="registry-server" Feb 26 22:13:32 crc kubenswrapper[4910]: I0226 22:13:32.684103 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf" Feb 26 22:13:32 crc kubenswrapper[4910]: I0226 22:13:32.687207 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-xkhw6" Feb 26 22:13:32 crc kubenswrapper[4910]: I0226 22:13:32.693012 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf"] Feb 26 22:13:32 crc kubenswrapper[4910]: I0226 22:13:32.798281 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e9b2f54c-fb64-4558-8e94-a42502a023f4-util\") pod \"2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf\" (UID: \"e9b2f54c-fb64-4558-8e94-a42502a023f4\") " pod="openstack-operators/2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf" Feb 26 22:13:32 crc kubenswrapper[4910]: I0226 22:13:32.798335 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e9b2f54c-fb64-4558-8e94-a42502a023f4-bundle\") pod \"2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf\" (UID: \"e9b2f54c-fb64-4558-8e94-a42502a023f4\") " pod="openstack-operators/2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf" Feb 26 22:13:32 crc kubenswrapper[4910]: I0226 22:13:32.798482 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prn6c\" (UniqueName: \"kubernetes.io/projected/e9b2f54c-fb64-4558-8e94-a42502a023f4-kube-api-access-prn6c\") pod \"2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf\" (UID: \"e9b2f54c-fb64-4558-8e94-a42502a023f4\") " pod="openstack-operators/2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf" Feb 26 22:13:32 crc kubenswrapper[4910]: I0226 
22:13:32.899874 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e9b2f54c-fb64-4558-8e94-a42502a023f4-util\") pod \"2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf\" (UID: \"e9b2f54c-fb64-4558-8e94-a42502a023f4\") " pod="openstack-operators/2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf" Feb 26 22:13:32 crc kubenswrapper[4910]: I0226 22:13:32.899956 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e9b2f54c-fb64-4558-8e94-a42502a023f4-bundle\") pod \"2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf\" (UID: \"e9b2f54c-fb64-4558-8e94-a42502a023f4\") " pod="openstack-operators/2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf" Feb 26 22:13:32 crc kubenswrapper[4910]: I0226 22:13:32.900061 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prn6c\" (UniqueName: \"kubernetes.io/projected/e9b2f54c-fb64-4558-8e94-a42502a023f4-kube-api-access-prn6c\") pod \"2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf\" (UID: \"e9b2f54c-fb64-4558-8e94-a42502a023f4\") " pod="openstack-operators/2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf" Feb 26 22:13:32 crc kubenswrapper[4910]: I0226 22:13:32.900634 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e9b2f54c-fb64-4558-8e94-a42502a023f4-util\") pod \"2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf\" (UID: \"e9b2f54c-fb64-4558-8e94-a42502a023f4\") " pod="openstack-operators/2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf" Feb 26 22:13:32 crc kubenswrapper[4910]: I0226 22:13:32.900773 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/e9b2f54c-fb64-4558-8e94-a42502a023f4-bundle\") pod \"2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf\" (UID: \"e9b2f54c-fb64-4558-8e94-a42502a023f4\") " pod="openstack-operators/2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf" Feb 26 22:13:32 crc kubenswrapper[4910]: I0226 22:13:32.931305 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prn6c\" (UniqueName: \"kubernetes.io/projected/e9b2f54c-fb64-4558-8e94-a42502a023f4-kube-api-access-prn6c\") pod \"2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf\" (UID: \"e9b2f54c-fb64-4558-8e94-a42502a023f4\") " pod="openstack-operators/2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf" Feb 26 22:13:33 crc kubenswrapper[4910]: I0226 22:13:33.005642 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf" Feb 26 22:13:33 crc kubenswrapper[4910]: I0226 22:13:33.293334 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf"] Feb 26 22:13:33 crc kubenswrapper[4910]: W0226 22:13:33.300239 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9b2f54c_fb64_4558_8e94_a42502a023f4.slice/crio-e2ef1ee362abeb3477cba1ef2971e096b388259491bfcb1753caddd0ed817942 WatchSource:0}: Error finding container e2ef1ee362abeb3477cba1ef2971e096b388259491bfcb1753caddd0ed817942: Status 404 returned error can't find the container with id e2ef1ee362abeb3477cba1ef2971e096b388259491bfcb1753caddd0ed817942 Feb 26 22:13:33 crc kubenswrapper[4910]: I0226 22:13:33.814395 4910 generic.go:334] "Generic (PLEG): container finished" podID="e9b2f54c-fb64-4558-8e94-a42502a023f4" containerID="a72318e270ea64795fe9eddcd66421259edc692291c3ea3bcd0d06718870abab" exitCode=0 Feb 26 
22:13:33 crc kubenswrapper[4910]: I0226 22:13:33.814716 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf" event={"ID":"e9b2f54c-fb64-4558-8e94-a42502a023f4","Type":"ContainerDied","Data":"a72318e270ea64795fe9eddcd66421259edc692291c3ea3bcd0d06718870abab"} Feb 26 22:13:33 crc kubenswrapper[4910]: I0226 22:13:33.814746 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf" event={"ID":"e9b2f54c-fb64-4558-8e94-a42502a023f4","Type":"ContainerStarted","Data":"e2ef1ee362abeb3477cba1ef2971e096b388259491bfcb1753caddd0ed817942"} Feb 26 22:13:34 crc kubenswrapper[4910]: I0226 22:13:34.825967 4910 generic.go:334] "Generic (PLEG): container finished" podID="e9b2f54c-fb64-4558-8e94-a42502a023f4" containerID="78161de6e52d7bae5414b7f80a03a4748f1cb2d97d6479f550f3cb895faaa34b" exitCode=0 Feb 26 22:13:34 crc kubenswrapper[4910]: I0226 22:13:34.826111 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf" event={"ID":"e9b2f54c-fb64-4558-8e94-a42502a023f4","Type":"ContainerDied","Data":"78161de6e52d7bae5414b7f80a03a4748f1cb2d97d6479f550f3cb895faaa34b"} Feb 26 22:13:35 crc kubenswrapper[4910]: I0226 22:13:35.840343 4910 generic.go:334] "Generic (PLEG): container finished" podID="e9b2f54c-fb64-4558-8e94-a42502a023f4" containerID="8677a7589a756c4a304b3f33b1a6b39132839f56bd4db9a024a25007fe0b73e5" exitCode=0 Feb 26 22:13:35 crc kubenswrapper[4910]: I0226 22:13:35.840691 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf" event={"ID":"e9b2f54c-fb64-4558-8e94-a42502a023f4","Type":"ContainerDied","Data":"8677a7589a756c4a304b3f33b1a6b39132839f56bd4db9a024a25007fe0b73e5"} Feb 26 22:13:37 crc kubenswrapper[4910]: I0226 22:13:37.134113 
4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf" Feb 26 22:13:37 crc kubenswrapper[4910]: I0226 22:13:37.156071 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e9b2f54c-fb64-4558-8e94-a42502a023f4-util\") pod \"e9b2f54c-fb64-4558-8e94-a42502a023f4\" (UID: \"e9b2f54c-fb64-4558-8e94-a42502a023f4\") " Feb 26 22:13:37 crc kubenswrapper[4910]: I0226 22:13:37.156131 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prn6c\" (UniqueName: \"kubernetes.io/projected/e9b2f54c-fb64-4558-8e94-a42502a023f4-kube-api-access-prn6c\") pod \"e9b2f54c-fb64-4558-8e94-a42502a023f4\" (UID: \"e9b2f54c-fb64-4558-8e94-a42502a023f4\") " Feb 26 22:13:37 crc kubenswrapper[4910]: I0226 22:13:37.156338 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e9b2f54c-fb64-4558-8e94-a42502a023f4-bundle\") pod \"e9b2f54c-fb64-4558-8e94-a42502a023f4\" (UID: \"e9b2f54c-fb64-4558-8e94-a42502a023f4\") " Feb 26 22:13:37 crc kubenswrapper[4910]: I0226 22:13:37.156766 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9b2f54c-fb64-4558-8e94-a42502a023f4-bundle" (OuterVolumeSpecName: "bundle") pod "e9b2f54c-fb64-4558-8e94-a42502a023f4" (UID: "e9b2f54c-fb64-4558-8e94-a42502a023f4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:13:37 crc kubenswrapper[4910]: I0226 22:13:37.169137 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9b2f54c-fb64-4558-8e94-a42502a023f4-util" (OuterVolumeSpecName: "util") pod "e9b2f54c-fb64-4558-8e94-a42502a023f4" (UID: "e9b2f54c-fb64-4558-8e94-a42502a023f4"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:13:37 crc kubenswrapper[4910]: I0226 22:13:37.173485 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9b2f54c-fb64-4558-8e94-a42502a023f4-kube-api-access-prn6c" (OuterVolumeSpecName: "kube-api-access-prn6c") pod "e9b2f54c-fb64-4558-8e94-a42502a023f4" (UID: "e9b2f54c-fb64-4558-8e94-a42502a023f4"). InnerVolumeSpecName "kube-api-access-prn6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:13:37 crc kubenswrapper[4910]: I0226 22:13:37.258236 4910 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e9b2f54c-fb64-4558-8e94-a42502a023f4-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:13:37 crc kubenswrapper[4910]: I0226 22:13:37.258278 4910 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e9b2f54c-fb64-4558-8e94-a42502a023f4-util\") on node \"crc\" DevicePath \"\"" Feb 26 22:13:37 crc kubenswrapper[4910]: I0226 22:13:37.258293 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prn6c\" (UniqueName: \"kubernetes.io/projected/e9b2f54c-fb64-4558-8e94-a42502a023f4-kube-api-access-prn6c\") on node \"crc\" DevicePath \"\"" Feb 26 22:13:37 crc kubenswrapper[4910]: I0226 22:13:37.866862 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf" event={"ID":"e9b2f54c-fb64-4558-8e94-a42502a023f4","Type":"ContainerDied","Data":"e2ef1ee362abeb3477cba1ef2971e096b388259491bfcb1753caddd0ed817942"} Feb 26 22:13:37 crc kubenswrapper[4910]: I0226 22:13:37.866922 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2ef1ee362abeb3477cba1ef2971e096b388259491bfcb1753caddd0ed817942" Feb 26 22:13:37 crc kubenswrapper[4910]: I0226 22:13:37.866980 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf" Feb 26 22:13:43 crc kubenswrapper[4910]: I0226 22:13:43.579838 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6644c86975-7wcgz"] Feb 26 22:13:43 crc kubenswrapper[4910]: E0226 22:13:43.580594 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9b2f54c-fb64-4558-8e94-a42502a023f4" containerName="util" Feb 26 22:13:43 crc kubenswrapper[4910]: I0226 22:13:43.580637 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b2f54c-fb64-4558-8e94-a42502a023f4" containerName="util" Feb 26 22:13:43 crc kubenswrapper[4910]: E0226 22:13:43.580659 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9b2f54c-fb64-4558-8e94-a42502a023f4" containerName="pull" Feb 26 22:13:43 crc kubenswrapper[4910]: I0226 22:13:43.580669 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b2f54c-fb64-4558-8e94-a42502a023f4" containerName="pull" Feb 26 22:13:43 crc kubenswrapper[4910]: E0226 22:13:43.580682 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9b2f54c-fb64-4558-8e94-a42502a023f4" containerName="extract" Feb 26 22:13:43 crc kubenswrapper[4910]: I0226 22:13:43.580690 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b2f54c-fb64-4558-8e94-a42502a023f4" containerName="extract" Feb 26 22:13:43 crc kubenswrapper[4910]: I0226 22:13:43.580849 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9b2f54c-fb64-4558-8e94-a42502a023f4" containerName="extract" Feb 26 22:13:43 crc kubenswrapper[4910]: I0226 22:13:43.581408 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6644c86975-7wcgz" Feb 26 22:13:43 crc kubenswrapper[4910]: I0226 22:13:43.585447 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-6jqlf" Feb 26 22:13:43 crc kubenswrapper[4910]: I0226 22:13:43.620893 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6644c86975-7wcgz"] Feb 26 22:13:43 crc kubenswrapper[4910]: I0226 22:13:43.743011 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkfbf\" (UniqueName: \"kubernetes.io/projected/4d146c5d-99f9-4731-a825-620f150b91e5-kube-api-access-vkfbf\") pod \"openstack-operator-controller-init-6644c86975-7wcgz\" (UID: \"4d146c5d-99f9-4731-a825-620f150b91e5\") " pod="openstack-operators/openstack-operator-controller-init-6644c86975-7wcgz" Feb 26 22:13:43 crc kubenswrapper[4910]: I0226 22:13:43.843876 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkfbf\" (UniqueName: \"kubernetes.io/projected/4d146c5d-99f9-4731-a825-620f150b91e5-kube-api-access-vkfbf\") pod \"openstack-operator-controller-init-6644c86975-7wcgz\" (UID: \"4d146c5d-99f9-4731-a825-620f150b91e5\") " pod="openstack-operators/openstack-operator-controller-init-6644c86975-7wcgz" Feb 26 22:13:43 crc kubenswrapper[4910]: I0226 22:13:43.880272 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkfbf\" (UniqueName: \"kubernetes.io/projected/4d146c5d-99f9-4731-a825-620f150b91e5-kube-api-access-vkfbf\") pod \"openstack-operator-controller-init-6644c86975-7wcgz\" (UID: \"4d146c5d-99f9-4731-a825-620f150b91e5\") " pod="openstack-operators/openstack-operator-controller-init-6644c86975-7wcgz" Feb 26 22:13:43 crc kubenswrapper[4910]: I0226 22:13:43.901441 4910 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6644c86975-7wcgz" Feb 26 22:13:44 crc kubenswrapper[4910]: I0226 22:13:44.386785 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6644c86975-7wcgz"] Feb 26 22:13:44 crc kubenswrapper[4910]: I0226 22:13:44.926146 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6644c86975-7wcgz" event={"ID":"4d146c5d-99f9-4731-a825-620f150b91e5","Type":"ContainerStarted","Data":"161137e2610a17a2de5053332bc447d29b9407a0c7c2ab8ca0b997da14a16ee5"} Feb 26 22:13:47 crc kubenswrapper[4910]: I0226 22:13:47.986541 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6644c86975-7wcgz" event={"ID":"4d146c5d-99f9-4731-a825-620f150b91e5","Type":"ContainerStarted","Data":"6f6c43cf3d8581aa3851af42980032bed2c591dc47178f476a88e1383a6c4bff"} Feb 26 22:13:47 crc kubenswrapper[4910]: I0226 22:13:47.986914 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6644c86975-7wcgz" Feb 26 22:13:53 crc kubenswrapper[4910]: I0226 22:13:53.913380 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6644c86975-7wcgz" Feb 26 22:13:53 crc kubenswrapper[4910]: I0226 22:13:53.972041 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6644c86975-7wcgz" podStartSLOduration=7.879603589 podStartE2EDuration="10.972025016s" podCreationTimestamp="2026-02-26 22:13:43 +0000 UTC" firstStartedPulling="2026-02-26 22:13:44.420142848 +0000 UTC m=+1109.499633389" lastFinishedPulling="2026-02-26 22:13:47.512564265 +0000 UTC m=+1112.592054816" observedRunningTime="2026-02-26 22:13:48.026719681 +0000 UTC 
m=+1113.106210232" watchObservedRunningTime="2026-02-26 22:13:53.972025016 +0000 UTC m=+1119.051515567" Feb 26 22:13:55 crc kubenswrapper[4910]: I0226 22:13:55.727852 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:13:55 crc kubenswrapper[4910]: I0226 22:13:55.727940 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:14:00 crc kubenswrapper[4910]: I0226 22:14:00.178348 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535734-smxt4"] Feb 26 22:14:00 crc kubenswrapper[4910]: I0226 22:14:00.179597 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535734-smxt4" Feb 26 22:14:00 crc kubenswrapper[4910]: I0226 22:14:00.181993 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 22:14:00 crc kubenswrapper[4910]: I0226 22:14:00.182412 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 22:14:00 crc kubenswrapper[4910]: I0226 22:14:00.183987 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 22:14:00 crc kubenswrapper[4910]: I0226 22:14:00.197192 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535734-smxt4"] Feb 26 22:14:00 crc kubenswrapper[4910]: I0226 22:14:00.317805 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnkfj\" (UniqueName: \"kubernetes.io/projected/e52b70ce-11c2-4ff2-a4da-18f7f20d6712-kube-api-access-cnkfj\") pod \"auto-csr-approver-29535734-smxt4\" (UID: \"e52b70ce-11c2-4ff2-a4da-18f7f20d6712\") " pod="openshift-infra/auto-csr-approver-29535734-smxt4" Feb 26 22:14:00 crc kubenswrapper[4910]: I0226 22:14:00.419707 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnkfj\" (UniqueName: \"kubernetes.io/projected/e52b70ce-11c2-4ff2-a4da-18f7f20d6712-kube-api-access-cnkfj\") pod \"auto-csr-approver-29535734-smxt4\" (UID: \"e52b70ce-11c2-4ff2-a4da-18f7f20d6712\") " pod="openshift-infra/auto-csr-approver-29535734-smxt4" Feb 26 22:14:00 crc kubenswrapper[4910]: I0226 22:14:00.449787 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnkfj\" (UniqueName: \"kubernetes.io/projected/e52b70ce-11c2-4ff2-a4da-18f7f20d6712-kube-api-access-cnkfj\") pod \"auto-csr-approver-29535734-smxt4\" (UID: \"e52b70ce-11c2-4ff2-a4da-18f7f20d6712\") " 
pod="openshift-infra/auto-csr-approver-29535734-smxt4" Feb 26 22:14:00 crc kubenswrapper[4910]: I0226 22:14:00.493994 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535734-smxt4" Feb 26 22:14:00 crc kubenswrapper[4910]: I0226 22:14:00.729730 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535734-smxt4"] Feb 26 22:14:00 crc kubenswrapper[4910]: W0226 22:14:00.735481 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode52b70ce_11c2_4ff2_a4da_18f7f20d6712.slice/crio-fa1075be651fb6ed737c95d95963ae8a9c96c5a009703f8d0aa52c1b0006bded WatchSource:0}: Error finding container fa1075be651fb6ed737c95d95963ae8a9c96c5a009703f8d0aa52c1b0006bded: Status 404 returned error can't find the container with id fa1075be651fb6ed737c95d95963ae8a9c96c5a009703f8d0aa52c1b0006bded Feb 26 22:14:01 crc kubenswrapper[4910]: I0226 22:14:01.081623 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535734-smxt4" event={"ID":"e52b70ce-11c2-4ff2-a4da-18f7f20d6712","Type":"ContainerStarted","Data":"fa1075be651fb6ed737c95d95963ae8a9c96c5a009703f8d0aa52c1b0006bded"} Feb 26 22:14:03 crc kubenswrapper[4910]: I0226 22:14:03.095788 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535734-smxt4" event={"ID":"e52b70ce-11c2-4ff2-a4da-18f7f20d6712","Type":"ContainerStarted","Data":"7b8430bef34af7693be6574a606502a50becaac975bd436b68fae844775e5231"} Feb 26 22:14:03 crc kubenswrapper[4910]: I0226 22:14:03.123202 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535734-smxt4" podStartSLOduration=1.068959241 podStartE2EDuration="3.123185462s" podCreationTimestamp="2026-02-26 22:14:00 +0000 UTC" firstStartedPulling="2026-02-26 22:14:00.737863775 +0000 UTC 
m=+1125.817354316" lastFinishedPulling="2026-02-26 22:14:02.792089996 +0000 UTC m=+1127.871580537" observedRunningTime="2026-02-26 22:14:03.119774178 +0000 UTC m=+1128.199264749" watchObservedRunningTime="2026-02-26 22:14:03.123185462 +0000 UTC m=+1128.202676003" Feb 26 22:14:04 crc kubenswrapper[4910]: I0226 22:14:04.107813 4910 generic.go:334] "Generic (PLEG): container finished" podID="e52b70ce-11c2-4ff2-a4da-18f7f20d6712" containerID="7b8430bef34af7693be6574a606502a50becaac975bd436b68fae844775e5231" exitCode=0 Feb 26 22:14:04 crc kubenswrapper[4910]: I0226 22:14:04.108062 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535734-smxt4" event={"ID":"e52b70ce-11c2-4ff2-a4da-18f7f20d6712","Type":"ContainerDied","Data":"7b8430bef34af7693be6574a606502a50becaac975bd436b68fae844775e5231"} Feb 26 22:14:05 crc kubenswrapper[4910]: I0226 22:14:05.413793 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535734-smxt4" Feb 26 22:14:05 crc kubenswrapper[4910]: I0226 22:14:05.589075 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnkfj\" (UniqueName: \"kubernetes.io/projected/e52b70ce-11c2-4ff2-a4da-18f7f20d6712-kube-api-access-cnkfj\") pod \"e52b70ce-11c2-4ff2-a4da-18f7f20d6712\" (UID: \"e52b70ce-11c2-4ff2-a4da-18f7f20d6712\") " Feb 26 22:14:05 crc kubenswrapper[4910]: I0226 22:14:05.598365 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e52b70ce-11c2-4ff2-a4da-18f7f20d6712-kube-api-access-cnkfj" (OuterVolumeSpecName: "kube-api-access-cnkfj") pod "e52b70ce-11c2-4ff2-a4da-18f7f20d6712" (UID: "e52b70ce-11c2-4ff2-a4da-18f7f20d6712"). InnerVolumeSpecName "kube-api-access-cnkfj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:14:05 crc kubenswrapper[4910]: I0226 22:14:05.691022 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnkfj\" (UniqueName: \"kubernetes.io/projected/e52b70ce-11c2-4ff2-a4da-18f7f20d6712-kube-api-access-cnkfj\") on node \"crc\" DevicePath \"\"" Feb 26 22:14:06 crc kubenswrapper[4910]: I0226 22:14:06.126259 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535734-smxt4" event={"ID":"e52b70ce-11c2-4ff2-a4da-18f7f20d6712","Type":"ContainerDied","Data":"fa1075be651fb6ed737c95d95963ae8a9c96c5a009703f8d0aa52c1b0006bded"} Feb 26 22:14:06 crc kubenswrapper[4910]: I0226 22:14:06.126340 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535734-smxt4" Feb 26 22:14:06 crc kubenswrapper[4910]: I0226 22:14:06.126353 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa1075be651fb6ed737c95d95963ae8a9c96c5a009703f8d0aa52c1b0006bded" Feb 26 22:14:06 crc kubenswrapper[4910]: I0226 22:14:06.236647 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535728-5rcf6"] Feb 26 22:14:06 crc kubenswrapper[4910]: I0226 22:14:06.240394 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535728-5rcf6"] Feb 26 22:14:07 crc kubenswrapper[4910]: I0226 22:14:07.914007 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1de787e8-4b1f-457c-baa4-fa07b7b56e73" path="/var/lib/kubelet/pods/1de787e8-4b1f-457c-baa4-fa07b7b56e73/volumes" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.641193 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-7n5lg"] Feb 26 22:14:14 crc kubenswrapper[4910]: E0226 22:14:14.641657 4910 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e52b70ce-11c2-4ff2-a4da-18f7f20d6712" containerName="oc" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.641669 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="e52b70ce-11c2-4ff2-a4da-18f7f20d6712" containerName="oc" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.641787 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="e52b70ce-11c2-4ff2-a4da-18f7f20d6712" containerName="oc" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.642325 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-7n5lg" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.643975 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-mqqhw" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.645214 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-nrbns"] Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.645955 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-nrbns" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.647592 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-h95qp" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.654328 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-7n5lg"] Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.658998 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-t4bqx"] Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.659802 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-t4bqx" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.661879 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-6q6ds" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.691237 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-t4bqx"] Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.707228 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-nrbns"] Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.715911 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-l7m5c"] Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.716882 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-l7m5c" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.719535 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-42rw8" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.719808 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-l7m5c"] Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.729761 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-mqd2x"] Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.730554 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mqd2x" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.751360 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct2s6\" (UniqueName: \"kubernetes.io/projected/1326400f-df88-407f-807c-05182d879101-kube-api-access-ct2s6\") pod \"cinder-operator-controller-manager-55d77d7b5c-7n5lg\" (UID: \"1326400f-df88-407f-807c-05182d879101\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-7n5lg" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.755588 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k49r\" (UniqueName: \"kubernetes.io/projected/71e40b76-d83e-41f5-a184-5d062a8291e4-kube-api-access-2k49r\") pod \"glance-operator-controller-manager-784b5bb6c5-l7m5c\" (UID: \"71e40b76-d83e-41f5-a184-5d062a8291e4\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-l7m5c" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.757202 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-xcdns" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.780914 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-mqd2x"] Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.849014 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-lkjvb"] Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.854013 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-lkjvb" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.857608 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-j9hll"] Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.858230 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-c7g29" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.858487 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-j9hll" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.861903 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.862118 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-krw5r" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.863230 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-lkjvb"] Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.864144 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zqgp\" (UniqueName: \"kubernetes.io/projected/19d23d2a-dce7-45c4-9cbd-ae14e8205aa7-kube-api-access-9zqgp\") pod \"horizon-operator-controller-manager-5b9b8895d5-lkjvb\" (UID: \"19d23d2a-dce7-45c4-9cbd-ae14e8205aa7\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-lkjvb" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.864203 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l2c4\" (UniqueName: 
\"kubernetes.io/projected/6a6c3a70-66b0-4b20-b4cd-fa1d8fbc228e-kube-api-access-7l2c4\") pod \"barbican-operator-controller-manager-868647ff47-nrbns\" (UID: \"6a6c3a70-66b0-4b20-b4cd-fa1d8fbc228e\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-nrbns" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.864296 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct2s6\" (UniqueName: \"kubernetes.io/projected/1326400f-df88-407f-807c-05182d879101-kube-api-access-ct2s6\") pod \"cinder-operator-controller-manager-55d77d7b5c-7n5lg\" (UID: \"1326400f-df88-407f-807c-05182d879101\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-7n5lg" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.864328 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k49r\" (UniqueName: \"kubernetes.io/projected/71e40b76-d83e-41f5-a184-5d062a8291e4-kube-api-access-2k49r\") pod \"glance-operator-controller-manager-784b5bb6c5-l7m5c\" (UID: \"71e40b76-d83e-41f5-a184-5d062a8291e4\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-l7m5c" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.864398 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt9cq\" (UniqueName: \"kubernetes.io/projected/97af21d2-e5ce-468b-bbf9-0e663577a30b-kube-api-access-mt9cq\") pod \"designate-operator-controller-manager-6d8bf5c495-t4bqx\" (UID: \"97af21d2-e5ce-468b-bbf9-0e663577a30b\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-t4bqx" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.864425 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w8kf\" (UniqueName: \"kubernetes.io/projected/1f356802-8833-4a65-8e65-c9bab59c1080-kube-api-access-8w8kf\") pod 
\"heat-operator-controller-manager-69f49c598c-mqd2x\" (UID: \"1f356802-8833-4a65-8e65-c9bab59c1080\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mqd2x" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.879500 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-j9hll"] Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.888809 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-dkwl5"] Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.889919 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-dkwl5" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.898451 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-hpgrw" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.900955 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct2s6\" (UniqueName: \"kubernetes.io/projected/1326400f-df88-407f-807c-05182d879101-kube-api-access-ct2s6\") pod \"cinder-operator-controller-manager-55d77d7b5c-7n5lg\" (UID: \"1326400f-df88-407f-807c-05182d879101\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-7n5lg" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.903934 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-dt9gd"] Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.904759 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dt9gd" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.905120 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k49r\" (UniqueName: \"kubernetes.io/projected/71e40b76-d83e-41f5-a184-5d062a8291e4-kube-api-access-2k49r\") pod \"glance-operator-controller-manager-784b5bb6c5-l7m5c\" (UID: \"71e40b76-d83e-41f5-a184-5d062a8291e4\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-l7m5c" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.915320 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-tpxbr" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.926041 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-dkwl5"] Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.932123 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-dt9gd"] Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.953219 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-v9ll5"] Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.953992 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-v9ll5" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.958278 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-smv6m"] Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.958580 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-7n5lg" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.959643 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-smv6m" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.965941 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-74nph" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.966264 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-hnh25" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.968335 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zlfk\" (UniqueName: \"kubernetes.io/projected/baa339f1-af25-4789-899c-b6ffed7c4ac0-kube-api-access-8zlfk\") pod \"infra-operator-controller-manager-79d975b745-j9hll\" (UID: \"baa339f1-af25-4789-899c-b6ffed7c4ac0\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-j9hll" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.968383 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qsst\" (UniqueName: \"kubernetes.io/projected/1d3bc056-7a65-4188-b408-66892dbc6c86-kube-api-access-9qsst\") pod \"ironic-operator-controller-manager-554564d7fc-dt9gd\" (UID: \"1d3bc056-7a65-4188-b408-66892dbc6c86\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dt9gd" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.968407 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndb7f\" (UniqueName: \"kubernetes.io/projected/1e643756-1a6a-4654-af77-5b9d0f1433f2-kube-api-access-ndb7f\") pod 
\"keystone-operator-controller-manager-b4d948c87-dkwl5\" (UID: \"1e643756-1a6a-4654-af77-5b9d0f1433f2\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-dkwl5" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.968451 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/baa339f1-af25-4789-899c-b6ffed7c4ac0-cert\") pod \"infra-operator-controller-manager-79d975b745-j9hll\" (UID: \"baa339f1-af25-4789-899c-b6ffed7c4ac0\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-j9hll" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.968489 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt9cq\" (UniqueName: \"kubernetes.io/projected/97af21d2-e5ce-468b-bbf9-0e663577a30b-kube-api-access-mt9cq\") pod \"designate-operator-controller-manager-6d8bf5c495-t4bqx\" (UID: \"97af21d2-e5ce-468b-bbf9-0e663577a30b\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-t4bqx" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.968512 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w8kf\" (UniqueName: \"kubernetes.io/projected/1f356802-8833-4a65-8e65-c9bab59c1080-kube-api-access-8w8kf\") pod \"heat-operator-controller-manager-69f49c598c-mqd2x\" (UID: \"1f356802-8833-4a65-8e65-c9bab59c1080\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mqd2x" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.968538 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hlxl\" (UniqueName: \"kubernetes.io/projected/aff3f8d4-51e9-4557-bc9d-497d587f667a-kube-api-access-8hlxl\") pod \"mariadb-operator-controller-manager-6994f66f48-smv6m\" (UID: \"aff3f8d4-51e9-4557-bc9d-497d587f667a\") " 
pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-smv6m" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.968558 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zqgp\" (UniqueName: \"kubernetes.io/projected/19d23d2a-dce7-45c4-9cbd-ae14e8205aa7-kube-api-access-9zqgp\") pod \"horizon-operator-controller-manager-5b9b8895d5-lkjvb\" (UID: \"19d23d2a-dce7-45c4-9cbd-ae14e8205aa7\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-lkjvb" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.968576 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l2c4\" (UniqueName: \"kubernetes.io/projected/6a6c3a70-66b0-4b20-b4cd-fa1d8fbc228e-kube-api-access-7l2c4\") pod \"barbican-operator-controller-manager-868647ff47-nrbns\" (UID: \"6a6c3a70-66b0-4b20-b4cd-fa1d8fbc228e\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-nrbns" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.968595 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b55bw\" (UniqueName: \"kubernetes.io/projected/ee918cc9-f4d9-49b1-9d9e-1d37c4aa7946-kube-api-access-b55bw\") pod \"manila-operator-controller-manager-67d996989d-v9ll5\" (UID: \"ee918cc9-f4d9-49b1-9d9e-1d37c4aa7946\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-v9ll5" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.976353 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-v9ll5"] Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.979138 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-smv6m"] Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.988326 4910 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-csc7d"] Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.989142 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-24d89"] Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.989674 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-24d89" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.990006 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-csc7d" Feb 26 22:14:14 crc kubenswrapper[4910]: I0226 22:14:14.995327 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-csc7d"] Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.010744 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l2c4\" (UniqueName: \"kubernetes.io/projected/6a6c3a70-66b0-4b20-b4cd-fa1d8fbc228e-kube-api-access-7l2c4\") pod \"barbican-operator-controller-manager-868647ff47-nrbns\" (UID: \"6a6c3a70-66b0-4b20-b4cd-fa1d8fbc228e\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-nrbns" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.011050 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-v7kmg" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.011317 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-24d89"] Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.011568 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-qh7xq" Feb 26 22:14:15 crc 
kubenswrapper[4910]: I0226 22:14:15.015583 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w8kf\" (UniqueName: \"kubernetes.io/projected/1f356802-8833-4a65-8e65-c9bab59c1080-kube-api-access-8w8kf\") pod \"heat-operator-controller-manager-69f49c598c-mqd2x\" (UID: \"1f356802-8833-4a65-8e65-c9bab59c1080\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mqd2x" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.015646 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-r6zct"] Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.016519 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-r6zct" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.017457 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zqgp\" (UniqueName: \"kubernetes.io/projected/19d23d2a-dce7-45c4-9cbd-ae14e8205aa7-kube-api-access-9zqgp\") pod \"horizon-operator-controller-manager-5b9b8895d5-lkjvb\" (UID: \"19d23d2a-dce7-45c4-9cbd-ae14e8205aa7\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-lkjvb" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.021055 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-7dfvk" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.021858 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt9cq\" (UniqueName: \"kubernetes.io/projected/97af21d2-e5ce-468b-bbf9-0e663577a30b-kube-api-access-mt9cq\") pod \"designate-operator-controller-manager-6d8bf5c495-t4bqx\" (UID: \"97af21d2-e5ce-468b-bbf9-0e663577a30b\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-t4bqx" Feb 26 22:14:15 crc kubenswrapper[4910]: 
I0226 22:14:15.024982 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-r6zct"] Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.043782 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-vsr8s"] Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.046566 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-vsr8s" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.047012 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-l7m5c" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.047055 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2"] Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.048530 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.052532 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.052684 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-br79p" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.052711 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-6rf67" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.055335 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-czqwc"] Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.056513 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-czqwc" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.058797 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-24rk2" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.059279 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-6pf2w"] Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.060423 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6pf2w" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.062799 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-nq5wx" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.064180 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-vsr8s"] Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.069510 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qsst\" (UniqueName: \"kubernetes.io/projected/1d3bc056-7a65-4188-b408-66892dbc6c86-kube-api-access-9qsst\") pod \"ironic-operator-controller-manager-554564d7fc-dt9gd\" (UID: \"1d3bc056-7a65-4188-b408-66892dbc6c86\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dt9gd" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.069548 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndb7f\" (UniqueName: \"kubernetes.io/projected/1e643756-1a6a-4654-af77-5b9d0f1433f2-kube-api-access-ndb7f\") pod \"keystone-operator-controller-manager-b4d948c87-dkwl5\" (UID: \"1e643756-1a6a-4654-af77-5b9d0f1433f2\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-dkwl5" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.069576 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgbhf\" (UniqueName: \"kubernetes.io/projected/77b8f8f5-e1e3-4d68-80a1-fff99d000d3a-kube-api-access-zgbhf\") pod \"neutron-operator-controller-manager-6bd4687957-csc7d\" (UID: \"77b8f8f5-e1e3-4d68-80a1-fff99d000d3a\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-csc7d" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.069598 4910 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n2vc\" (UniqueName: \"kubernetes.io/projected/9a4baffc-2491-42c4-838b-1ef90d643817-kube-api-access-4n2vc\") pod \"ovn-operator-controller-manager-5955d8c787-vsr8s\" (UID: \"9a4baffc-2491-42c4-838b-1ef90d643817\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-vsr8s" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.069627 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/baa339f1-af25-4789-899c-b6ffed7c4ac0-cert\") pod \"infra-operator-controller-manager-79d975b745-j9hll\" (UID: \"baa339f1-af25-4789-899c-b6ffed7c4ac0\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-j9hll" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.069665 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/12f2404a-45bb-416e-b4d4-da70f869fbbf-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2\" (UID: \"12f2404a-45bb-416e-b4d4-da70f869fbbf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.069704 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb5x4\" (UniqueName: \"kubernetes.io/projected/12f2404a-45bb-416e-b4d4-da70f869fbbf-kube-api-access-wb5x4\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2\" (UID: \"12f2404a-45bb-416e-b4d4-da70f869fbbf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.069727 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hlxl\" (UniqueName: 
\"kubernetes.io/projected/aff3f8d4-51e9-4557-bc9d-497d587f667a-kube-api-access-8hlxl\") pod \"mariadb-operator-controller-manager-6994f66f48-smv6m\" (UID: \"aff3f8d4-51e9-4557-bc9d-497d587f667a\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-smv6m" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.069749 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bcbw\" (UniqueName: \"kubernetes.io/projected/3d9f959c-b8a6-415a-adf5-a20b0fbc3511-kube-api-access-9bcbw\") pod \"swift-operator-controller-manager-68f46476f-6pf2w\" (UID: \"3d9f959c-b8a6-415a-adf5-a20b0fbc3511\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-6pf2w" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.069764 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brlps\" (UniqueName: \"kubernetes.io/projected/d2d30287-5f0f-45fd-ae7f-23614ffab2fc-kube-api-access-brlps\") pod \"octavia-operator-controller-manager-659dc6bbfc-r6zct\" (UID: \"d2d30287-5f0f-45fd-ae7f-23614ffab2fc\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-r6zct" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.069829 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b55bw\" (UniqueName: \"kubernetes.io/projected/ee918cc9-f4d9-49b1-9d9e-1d37c4aa7946-kube-api-access-b55bw\") pod \"manila-operator-controller-manager-67d996989d-v9ll5\" (UID: \"ee918cc9-f4d9-49b1-9d9e-1d37c4aa7946\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-v9ll5" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.069984 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv4g9\" (UniqueName: \"kubernetes.io/projected/f93e1099-5db6-45f0-a344-5d05183572d1-kube-api-access-wv4g9\") pod 
\"placement-operator-controller-manager-8497b45c89-czqwc\" (UID: \"f93e1099-5db6-45f0-a344-5d05183572d1\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-czqwc" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.071828 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrnpk\" (UniqueName: \"kubernetes.io/projected/05b1662b-98cb-4867-9cf1-4272c173cf1f-kube-api-access-hrnpk\") pod \"nova-operator-controller-manager-567668f5cf-24d89\" (UID: \"05b1662b-98cb-4867-9cf1-4272c173cf1f\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-24d89" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.071909 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zlfk\" (UniqueName: \"kubernetes.io/projected/baa339f1-af25-4789-899c-b6ffed7c4ac0-kube-api-access-8zlfk\") pod \"infra-operator-controller-manager-79d975b745-j9hll\" (UID: \"baa339f1-af25-4789-899c-b6ffed7c4ac0\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-j9hll" Feb 26 22:14:15 crc kubenswrapper[4910]: E0226 22:14:15.072474 4910 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 22:14:15 crc kubenswrapper[4910]: E0226 22:14:15.072548 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/baa339f1-af25-4789-899c-b6ffed7c4ac0-cert podName:baa339f1-af25-4789-899c-b6ffed7c4ac0 nodeName:}" failed. No retries permitted until 2026-02-26 22:14:15.572526708 +0000 UTC m=+1140.652017249 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/baa339f1-af25-4789-899c-b6ffed7c4ac0-cert") pod "infra-operator-controller-manager-79d975b745-j9hll" (UID: "baa339f1-af25-4789-899c-b6ffed7c4ac0") : secret "infra-operator-webhook-server-cert" not found Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.082942 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2"] Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.087443 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qsst\" (UniqueName: \"kubernetes.io/projected/1d3bc056-7a65-4188-b408-66892dbc6c86-kube-api-access-9qsst\") pod \"ironic-operator-controller-manager-554564d7fc-dt9gd\" (UID: \"1d3bc056-7a65-4188-b408-66892dbc6c86\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dt9gd" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.089550 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mqd2x" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.090150 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b55bw\" (UniqueName: \"kubernetes.io/projected/ee918cc9-f4d9-49b1-9d9e-1d37c4aa7946-kube-api-access-b55bw\") pod \"manila-operator-controller-manager-67d996989d-v9ll5\" (UID: \"ee918cc9-f4d9-49b1-9d9e-1d37c4aa7946\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-v9ll5" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.090813 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-czqwc"] Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.092729 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hlxl\" (UniqueName: \"kubernetes.io/projected/aff3f8d4-51e9-4557-bc9d-497d587f667a-kube-api-access-8hlxl\") pod \"mariadb-operator-controller-manager-6994f66f48-smv6m\" (UID: \"aff3f8d4-51e9-4557-bc9d-497d587f667a\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-smv6m" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.093950 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndb7f\" (UniqueName: \"kubernetes.io/projected/1e643756-1a6a-4654-af77-5b9d0f1433f2-kube-api-access-ndb7f\") pod \"keystone-operator-controller-manager-b4d948c87-dkwl5\" (UID: \"1e643756-1a6a-4654-af77-5b9d0f1433f2\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-dkwl5" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.095920 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zlfk\" (UniqueName: \"kubernetes.io/projected/baa339f1-af25-4789-899c-b6ffed7c4ac0-kube-api-access-8zlfk\") pod \"infra-operator-controller-manager-79d975b745-j9hll\" (UID: 
\"baa339f1-af25-4789-899c-b6ffed7c4ac0\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-j9hll" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.103884 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-6pf2w"] Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.150527 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85bcd67d77-jrqfq"] Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.173697 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85bcd67d77-jrqfq"] Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.174098 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85bcd67d77-jrqfq" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.175652 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n2vc\" (UniqueName: \"kubernetes.io/projected/9a4baffc-2491-42c4-838b-1ef90d643817-kube-api-access-4n2vc\") pod \"ovn-operator-controller-manager-5955d8c787-vsr8s\" (UID: \"9a4baffc-2491-42c4-838b-1ef90d643817\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-vsr8s" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.175789 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/12f2404a-45bb-416e-b4d4-da70f869fbbf-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2\" (UID: \"12f2404a-45bb-416e-b4d4-da70f869fbbf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.175884 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wb5x4\" (UniqueName: \"kubernetes.io/projected/12f2404a-45bb-416e-b4d4-da70f869fbbf-kube-api-access-wb5x4\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2\" (UID: \"12f2404a-45bb-416e-b4d4-da70f869fbbf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.175976 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bcbw\" (UniqueName: \"kubernetes.io/projected/3d9f959c-b8a6-415a-adf5-a20b0fbc3511-kube-api-access-9bcbw\") pod \"swift-operator-controller-manager-68f46476f-6pf2w\" (UID: \"3d9f959c-b8a6-415a-adf5-a20b0fbc3511\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-6pf2w" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.176047 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brlps\" (UniqueName: \"kubernetes.io/projected/d2d30287-5f0f-45fd-ae7f-23614ffab2fc-kube-api-access-brlps\") pod \"octavia-operator-controller-manager-659dc6bbfc-r6zct\" (UID: \"d2d30287-5f0f-45fd-ae7f-23614ffab2fc\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-r6zct" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.176134 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv4g9\" (UniqueName: \"kubernetes.io/projected/f93e1099-5db6-45f0-a344-5d05183572d1-kube-api-access-wv4g9\") pod \"placement-operator-controller-manager-8497b45c89-czqwc\" (UID: \"f93e1099-5db6-45f0-a344-5d05183572d1\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-czqwc" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.176348 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrnpk\" (UniqueName: \"kubernetes.io/projected/05b1662b-98cb-4867-9cf1-4272c173cf1f-kube-api-access-hrnpk\") pod 
\"nova-operator-controller-manager-567668f5cf-24d89\" (UID: \"05b1662b-98cb-4867-9cf1-4272c173cf1f\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-24d89" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.176466 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgbhf\" (UniqueName: \"kubernetes.io/projected/77b8f8f5-e1e3-4d68-80a1-fff99d000d3a-kube-api-access-zgbhf\") pod \"neutron-operator-controller-manager-6bd4687957-csc7d\" (UID: \"77b8f8f5-e1e3-4d68-80a1-fff99d000d3a\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-csc7d" Feb 26 22:14:15 crc kubenswrapper[4910]: E0226 22:14:15.176790 4910 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 22:14:15 crc kubenswrapper[4910]: E0226 22:14:15.176886 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12f2404a-45bb-416e-b4d4-da70f869fbbf-cert podName:12f2404a-45bb-416e-b4d4-da70f869fbbf nodeName:}" failed. No retries permitted until 2026-02-26 22:14:15.676871986 +0000 UTC m=+1140.756362527 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/12f2404a-45bb-416e-b4d4-da70f869fbbf-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2" (UID: "12f2404a-45bb-416e-b4d4-da70f869fbbf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.178898 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-lkjvb" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.180043 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-ztkll" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.200187 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb5x4\" (UniqueName: \"kubernetes.io/projected/12f2404a-45bb-416e-b4d4-da70f869fbbf-kube-api-access-wb5x4\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2\" (UID: \"12f2404a-45bb-416e-b4d4-da70f869fbbf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.200657 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgbhf\" (UniqueName: \"kubernetes.io/projected/77b8f8f5-e1e3-4d68-80a1-fff99d000d3a-kube-api-access-zgbhf\") pod \"neutron-operator-controller-manager-6bd4687957-csc7d\" (UID: \"77b8f8f5-e1e3-4d68-80a1-fff99d000d3a\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-csc7d" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.201380 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n2vc\" (UniqueName: \"kubernetes.io/projected/9a4baffc-2491-42c4-838b-1ef90d643817-kube-api-access-4n2vc\") pod \"ovn-operator-controller-manager-5955d8c787-vsr8s\" (UID: \"9a4baffc-2491-42c4-838b-1ef90d643817\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-vsr8s" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.204897 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bcbw\" (UniqueName: \"kubernetes.io/projected/3d9f959c-b8a6-415a-adf5-a20b0fbc3511-kube-api-access-9bcbw\") pod 
\"swift-operator-controller-manager-68f46476f-6pf2w\" (UID: \"3d9f959c-b8a6-415a-adf5-a20b0fbc3511\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-6pf2w" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.207212 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv4g9\" (UniqueName: \"kubernetes.io/projected/f93e1099-5db6-45f0-a344-5d05183572d1-kube-api-access-wv4g9\") pod \"placement-operator-controller-manager-8497b45c89-czqwc\" (UID: \"f93e1099-5db6-45f0-a344-5d05183572d1\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-czqwc" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.208426 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6pf2w" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.208875 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brlps\" (UniqueName: \"kubernetes.io/projected/d2d30287-5f0f-45fd-ae7f-23614ffab2fc-kube-api-access-brlps\") pod \"octavia-operator-controller-manager-659dc6bbfc-r6zct\" (UID: \"d2d30287-5f0f-45fd-ae7f-23614ffab2fc\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-r6zct" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.223657 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrnpk\" (UniqueName: \"kubernetes.io/projected/05b1662b-98cb-4867-9cf1-4272c173cf1f-kube-api-access-hrnpk\") pod \"nova-operator-controller-manager-567668f5cf-24d89\" (UID: \"05b1662b-98cb-4867-9cf1-4272c173cf1f\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-24d89" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.254447 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-dkwl5" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.271809 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-nrbns" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.281537 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-t4bqx" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.282168 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h2x2\" (UniqueName: \"kubernetes.io/projected/4581d31c-adac-40f4-80ec-53142bc04c02-kube-api-access-4h2x2\") pod \"telemetry-operator-controller-manager-85bcd67d77-jrqfq\" (UID: \"4581d31c-adac-40f4-80ec-53142bc04c02\") " pod="openstack-operators/telemetry-operator-controller-manager-85bcd67d77-jrqfq" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.293230 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-4q6xr"] Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.295080 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4q6xr" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.298011 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dt9gd" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.299306 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-tgjrf" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.315242 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-4q6xr"] Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.340070 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-jjxzk"] Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.340906 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-jjxzk" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.344963 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-v8r7v" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.358114 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-jjxzk"] Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.362288 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-v9ll5" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.375945 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-smv6m" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.384924 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h2x2\" (UniqueName: \"kubernetes.io/projected/4581d31c-adac-40f4-80ec-53142bc04c02-kube-api-access-4h2x2\") pod \"telemetry-operator-controller-manager-85bcd67d77-jrqfq\" (UID: \"4581d31c-adac-40f4-80ec-53142bc04c02\") " pod="openstack-operators/telemetry-operator-controller-manager-85bcd67d77-jrqfq" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.385029 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98h86\" (UniqueName: \"kubernetes.io/projected/3b1b5fc5-da86-41ae-996e-9273627c5e62-kube-api-access-98h86\") pod \"watcher-operator-controller-manager-bccc79885-jjxzk\" (UID: \"3b1b5fc5-da86-41ae-996e-9273627c5e62\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-jjxzk" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.385077 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8hbq\" (UniqueName: \"kubernetes.io/projected/9fad2a9b-74b5-4bbf-a031-949aef704413-kube-api-access-p8hbq\") pod \"test-operator-controller-manager-5dc6794d5b-4q6xr\" (UID: \"9fad2a9b-74b5-4bbf-a031-949aef704413\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4q6xr" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.397506 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-24d89" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.397880 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ffc89494-n87vn"] Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.398735 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-69ffc89494-n87vn" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.401054 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-xw6nc" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.401380 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.401975 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.403374 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ffc89494-n87vn"] Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.404554 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-csc7d" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.427655 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h2x2\" (UniqueName: \"kubernetes.io/projected/4581d31c-adac-40f4-80ec-53142bc04c02-kube-api-access-4h2x2\") pod \"telemetry-operator-controller-manager-85bcd67d77-jrqfq\" (UID: \"4581d31c-adac-40f4-80ec-53142bc04c02\") " pod="openstack-operators/telemetry-operator-controller-manager-85bcd67d77-jrqfq" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.444002 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-r6zct" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.455138 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b7vkl"] Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.455969 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b7vkl" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.461383 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-d75bl" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.471502 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-vsr8s" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.478895 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b7vkl"] Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.486918 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk4p6\" (UniqueName: \"kubernetes.io/projected/f3c3500f-cc15-4b62-b13f-b99aeb97a413-kube-api-access-sk4p6\") pod \"rabbitmq-cluster-operator-manager-668c99d594-b7vkl\" (UID: \"f3c3500f-cc15-4b62-b13f-b99aeb97a413\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b7vkl" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.487008 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98h86\" (UniqueName: \"kubernetes.io/projected/3b1b5fc5-da86-41ae-996e-9273627c5e62-kube-api-access-98h86\") pod \"watcher-operator-controller-manager-bccc79885-jjxzk\" (UID: \"3b1b5fc5-da86-41ae-996e-9273627c5e62\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-jjxzk" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.487044 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-webhook-certs\") pod \"openstack-operator-controller-manager-69ffc89494-n87vn\" (UID: \"bfeb8151-7f06-4d68-a3a2-d4c267563b43\") " pod="openstack-operators/openstack-operator-controller-manager-69ffc89494-n87vn" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.487062 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-metrics-certs\") pod 
\"openstack-operator-controller-manager-69ffc89494-n87vn\" (UID: \"bfeb8151-7f06-4d68-a3a2-d4c267563b43\") " pod="openstack-operators/openstack-operator-controller-manager-69ffc89494-n87vn" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.487094 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8hbq\" (UniqueName: \"kubernetes.io/projected/9fad2a9b-74b5-4bbf-a031-949aef704413-kube-api-access-p8hbq\") pod \"test-operator-controller-manager-5dc6794d5b-4q6xr\" (UID: \"9fad2a9b-74b5-4bbf-a031-949aef704413\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4q6xr" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.487134 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-974vt\" (UniqueName: \"kubernetes.io/projected/bfeb8151-7f06-4d68-a3a2-d4c267563b43-kube-api-access-974vt\") pod \"openstack-operator-controller-manager-69ffc89494-n87vn\" (UID: \"bfeb8151-7f06-4d68-a3a2-d4c267563b43\") " pod="openstack-operators/openstack-operator-controller-manager-69ffc89494-n87vn" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.496860 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-czqwc" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.510602 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8hbq\" (UniqueName: \"kubernetes.io/projected/9fad2a9b-74b5-4bbf-a031-949aef704413-kube-api-access-p8hbq\") pod \"test-operator-controller-manager-5dc6794d5b-4q6xr\" (UID: \"9fad2a9b-74b5-4bbf-a031-949aef704413\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4q6xr" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.521011 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85bcd67d77-jrqfq" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.525696 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98h86\" (UniqueName: \"kubernetes.io/projected/3b1b5fc5-da86-41ae-996e-9273627c5e62-kube-api-access-98h86\") pod \"watcher-operator-controller-manager-bccc79885-jjxzk\" (UID: \"3b1b5fc5-da86-41ae-996e-9273627c5e62\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-jjxzk" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.575877 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-7n5lg"] Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.588914 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/baa339f1-af25-4789-899c-b6ffed7c4ac0-cert\") pod \"infra-operator-controller-manager-79d975b745-j9hll\" (UID: \"baa339f1-af25-4789-899c-b6ffed7c4ac0\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-j9hll" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.588979 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-webhook-certs\") pod \"openstack-operator-controller-manager-69ffc89494-n87vn\" (UID: \"bfeb8151-7f06-4d68-a3a2-d4c267563b43\") " pod="openstack-operators/openstack-operator-controller-manager-69ffc89494-n87vn" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.588999 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-metrics-certs\") pod \"openstack-operator-controller-manager-69ffc89494-n87vn\" (UID: \"bfeb8151-7f06-4d68-a3a2-d4c267563b43\") " 
pod="openstack-operators/openstack-operator-controller-manager-69ffc89494-n87vn" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.589059 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-974vt\" (UniqueName: \"kubernetes.io/projected/bfeb8151-7f06-4d68-a3a2-d4c267563b43-kube-api-access-974vt\") pod \"openstack-operator-controller-manager-69ffc89494-n87vn\" (UID: \"bfeb8151-7f06-4d68-a3a2-d4c267563b43\") " pod="openstack-operators/openstack-operator-controller-manager-69ffc89494-n87vn" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.589089 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk4p6\" (UniqueName: \"kubernetes.io/projected/f3c3500f-cc15-4b62-b13f-b99aeb97a413-kube-api-access-sk4p6\") pod \"rabbitmq-cluster-operator-manager-668c99d594-b7vkl\" (UID: \"f3c3500f-cc15-4b62-b13f-b99aeb97a413\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b7vkl" Feb 26 22:14:15 crc kubenswrapper[4910]: E0226 22:14:15.589260 4910 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 22:14:15 crc kubenswrapper[4910]: E0226 22:14:15.589302 4910 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 22:14:15 crc kubenswrapper[4910]: E0226 22:14:15.589332 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-webhook-certs podName:bfeb8151-7f06-4d68-a3a2-d4c267563b43 nodeName:}" failed. No retries permitted until 2026-02-26 22:14:16.089311127 +0000 UTC m=+1141.168801668 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-webhook-certs") pod "openstack-operator-controller-manager-69ffc89494-n87vn" (UID: "bfeb8151-7f06-4d68-a3a2-d4c267563b43") : secret "webhook-server-cert" not found Feb 26 22:14:15 crc kubenswrapper[4910]: E0226 22:14:15.589471 4910 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 22:14:15 crc kubenswrapper[4910]: E0226 22:14:15.589569 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/baa339f1-af25-4789-899c-b6ffed7c4ac0-cert podName:baa339f1-af25-4789-899c-b6ffed7c4ac0 nodeName:}" failed. No retries permitted until 2026-02-26 22:14:16.589546023 +0000 UTC m=+1141.669036564 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/baa339f1-af25-4789-899c-b6ffed7c4ac0-cert") pod "infra-operator-controller-manager-79d975b745-j9hll" (UID: "baa339f1-af25-4789-899c-b6ffed7c4ac0") : secret "infra-operator-webhook-server-cert" not found Feb 26 22:14:15 crc kubenswrapper[4910]: E0226 22:14:15.589586 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-metrics-certs podName:bfeb8151-7f06-4d68-a3a2-d4c267563b43 nodeName:}" failed. No retries permitted until 2026-02-26 22:14:16.089579854 +0000 UTC m=+1141.169070395 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-metrics-certs") pod "openstack-operator-controller-manager-69ffc89494-n87vn" (UID: "bfeb8151-7f06-4d68-a3a2-d4c267563b43") : secret "metrics-server-cert" not found Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.617242 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk4p6\" (UniqueName: \"kubernetes.io/projected/f3c3500f-cc15-4b62-b13f-b99aeb97a413-kube-api-access-sk4p6\") pod \"rabbitmq-cluster-operator-manager-668c99d594-b7vkl\" (UID: \"f3c3500f-cc15-4b62-b13f-b99aeb97a413\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b7vkl" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.618524 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4q6xr" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.622676 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-974vt\" (UniqueName: \"kubernetes.io/projected/bfeb8151-7f06-4d68-a3a2-d4c267563b43-kube-api-access-974vt\") pod \"openstack-operator-controller-manager-69ffc89494-n87vn\" (UID: \"bfeb8151-7f06-4d68-a3a2-d4c267563b43\") " pod="openstack-operators/openstack-operator-controller-manager-69ffc89494-n87vn" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.701861 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/12f2404a-45bb-416e-b4d4-da70f869fbbf-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2\" (UID: \"12f2404a-45bb-416e-b4d4-da70f869fbbf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2" Feb 26 22:14:15 crc kubenswrapper[4910]: E0226 22:14:15.702362 4910 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 22:14:15 crc kubenswrapper[4910]: E0226 22:14:15.702410 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12f2404a-45bb-416e-b4d4-da70f869fbbf-cert podName:12f2404a-45bb-416e-b4d4-da70f869fbbf nodeName:}" failed. No retries permitted until 2026-02-26 22:14:16.702396093 +0000 UTC m=+1141.781886634 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/12f2404a-45bb-416e-b4d4-da70f869fbbf-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2" (UID: "12f2404a-45bb-416e-b4d4-da70f869fbbf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.714629 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-l7m5c"] Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.719488 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-jjxzk" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.791991 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b7vkl" Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.877746 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-mqd2x"] Feb 26 22:14:15 crc kubenswrapper[4910]: I0226 22:14:15.994646 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-lkjvb"] Feb 26 22:14:16 crc kubenswrapper[4910]: I0226 22:14:16.141062 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-webhook-certs\") pod \"openstack-operator-controller-manager-69ffc89494-n87vn\" (UID: \"bfeb8151-7f06-4d68-a3a2-d4c267563b43\") " pod="openstack-operators/openstack-operator-controller-manager-69ffc89494-n87vn" Feb 26 22:14:16 crc kubenswrapper[4910]: I0226 22:14:16.141103 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-metrics-certs\") pod \"openstack-operator-controller-manager-69ffc89494-n87vn\" (UID: \"bfeb8151-7f06-4d68-a3a2-d4c267563b43\") " pod="openstack-operators/openstack-operator-controller-manager-69ffc89494-n87vn" Feb 26 22:14:16 crc kubenswrapper[4910]: E0226 22:14:16.141276 4910 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 22:14:16 crc kubenswrapper[4910]: E0226 22:14:16.141324 4910 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 22:14:16 crc kubenswrapper[4910]: E0226 22:14:16.141375 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-webhook-certs podName:bfeb8151-7f06-4d68-a3a2-d4c267563b43 
nodeName:}" failed. No retries permitted until 2026-02-26 22:14:17.141355122 +0000 UTC m=+1142.220845663 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-webhook-certs") pod "openstack-operator-controller-manager-69ffc89494-n87vn" (UID: "bfeb8151-7f06-4d68-a3a2-d4c267563b43") : secret "webhook-server-cert" not found Feb 26 22:14:16 crc kubenswrapper[4910]: E0226 22:14:16.141393 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-metrics-certs podName:bfeb8151-7f06-4d68-a3a2-d4c267563b43 nodeName:}" failed. No retries permitted until 2026-02-26 22:14:17.141386423 +0000 UTC m=+1142.220876964 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-metrics-certs") pod "openstack-operator-controller-manager-69ffc89494-n87vn" (UID: "bfeb8151-7f06-4d68-a3a2-d4c267563b43") : secret "metrics-server-cert" not found Feb 26 22:14:16 crc kubenswrapper[4910]: I0226 22:14:16.218386 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-lkjvb" event={"ID":"19d23d2a-dce7-45c4-9cbd-ae14e8205aa7","Type":"ContainerStarted","Data":"e40d864ac7ebc2e048b02ac75b09246cbcf611b9a03a9e0a3ad8fcbcdd5e705e"} Feb 26 22:14:16 crc kubenswrapper[4910]: I0226 22:14:16.219449 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mqd2x" event={"ID":"1f356802-8833-4a65-8e65-c9bab59c1080","Type":"ContainerStarted","Data":"27a7032ac46c99528520afc39b681726f6751fac14fce01f154410ac12bd1f35"} Feb 26 22:14:16 crc kubenswrapper[4910]: I0226 22:14:16.220689 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-l7m5c" 
event={"ID":"71e40b76-d83e-41f5-a184-5d062a8291e4","Type":"ContainerStarted","Data":"46cc1de1057b17300e2e69c08633b9fa8b8c3c9cd4a7e8fad02ca87792b7ebc4"} Feb 26 22:14:16 crc kubenswrapper[4910]: I0226 22:14:16.222512 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-7n5lg" event={"ID":"1326400f-df88-407f-807c-05182d879101","Type":"ContainerStarted","Data":"feda39a230b36d153ddd4244a2efee401df8523225294bc502b66865859aad4d"} Feb 26 22:14:16 crc kubenswrapper[4910]: I0226 22:14:16.349748 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-dkwl5"] Feb 26 22:14:16 crc kubenswrapper[4910]: I0226 22:14:16.356835 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-6pf2w"] Feb 26 22:14:16 crc kubenswrapper[4910]: I0226 22:14:16.412596 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-v9ll5"] Feb 26 22:14:16 crc kubenswrapper[4910]: I0226 22:14:16.418184 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-dt9gd"] Feb 26 22:14:16 crc kubenswrapper[4910]: I0226 22:14:16.425367 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-nrbns"] Feb 26 22:14:16 crc kubenswrapper[4910]: W0226 22:14:16.427377 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a6c3a70_66b0_4b20_b4cd_fa1d8fbc228e.slice/crio-f7314ad478440ce6de9f9e44bbeee362e1fdceafa8e13233a98dc3d9a8bb820f WatchSource:0}: Error finding container f7314ad478440ce6de9f9e44bbeee362e1fdceafa8e13233a98dc3d9a8bb820f: Status 404 returned error can't find the container with id 
f7314ad478440ce6de9f9e44bbeee362e1fdceafa8e13233a98dc3d9a8bb820f Feb 26 22:14:16 crc kubenswrapper[4910]: I0226 22:14:16.430142 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-t4bqx"] Feb 26 22:14:16 crc kubenswrapper[4910]: W0226 22:14:16.440030 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d3bc056_7a65_4188_b408_66892dbc6c86.slice/crio-8b01145de04f029cd6e2423f388a8e59a96f613e4d0c6b1362e3e1ee3a7351b3 WatchSource:0}: Error finding container 8b01145de04f029cd6e2423f388a8e59a96f613e4d0c6b1362e3e1ee3a7351b3: Status 404 returned error can't find the container with id 8b01145de04f029cd6e2423f388a8e59a96f613e4d0c6b1362e3e1ee3a7351b3 Feb 26 22:14:16 crc kubenswrapper[4910]: I0226 22:14:16.615418 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-czqwc"] Feb 26 22:14:16 crc kubenswrapper[4910]: I0226 22:14:16.629208 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-4q6xr"] Feb 26 22:14:16 crc kubenswrapper[4910]: I0226 22:14:16.639327 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-24d89"] Feb 26 22:14:16 crc kubenswrapper[4910]: I0226 22:14:16.639798 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-r6zct"] Feb 26 22:14:16 crc kubenswrapper[4910]: I0226 22:14:16.655967 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/baa339f1-af25-4789-899c-b6ffed7c4ac0-cert\") pod \"infra-operator-controller-manager-79d975b745-j9hll\" (UID: \"baa339f1-af25-4789-899c-b6ffed7c4ac0\") " 
pod="openstack-operators/infra-operator-controller-manager-79d975b745-j9hll" Feb 26 22:14:16 crc kubenswrapper[4910]: E0226 22:14:16.656372 4910 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 22:14:16 crc kubenswrapper[4910]: E0226 22:14:16.656427 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/baa339f1-af25-4789-899c-b6ffed7c4ac0-cert podName:baa339f1-af25-4789-899c-b6ffed7c4ac0 nodeName:}" failed. No retries permitted until 2026-02-26 22:14:18.656412643 +0000 UTC m=+1143.735903184 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/baa339f1-af25-4789-899c-b6ffed7c4ac0-cert") pod "infra-operator-controller-manager-79d975b745-j9hll" (UID: "baa339f1-af25-4789-899c-b6ffed7c4ac0") : secret "infra-operator-webhook-server-cert" not found Feb 26 22:14:16 crc kubenswrapper[4910]: I0226 22:14:16.657215 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-smv6m"] Feb 26 22:14:16 crc kubenswrapper[4910]: E0226 22:14:16.677091 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: 
{{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-brlps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-659dc6bbfc-r6zct_openstack-operators(d2d30287-5f0f-45fd-ae7f-23614ffab2fc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 26 22:14:16 crc kubenswrapper[4910]: E0226 22:14:16.678221 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-r6zct" 
podUID="d2d30287-5f0f-45fd-ae7f-23614ffab2fc" Feb 26 22:14:16 crc kubenswrapper[4910]: E0226 22:14:16.682752 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.107:5001/openstack-k8s-operators/telemetry-operator:39a4be8a175d9e84fa6ba159f906a95524540b13,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4h2x2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85bcd67d77-jrqfq_openstack-operators(4581d31c-adac-40f4-80ec-53142bc04c02): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 26 22:14:16 crc kubenswrapper[4910]: E0226 22:14:16.683843 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-85bcd67d77-jrqfq" podUID="4581d31c-adac-40f4-80ec-53142bc04c02" Feb 26 22:14:16 crc kubenswrapper[4910]: I0226 22:14:16.685356 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-vsr8s"] Feb 26 22:14:16 crc kubenswrapper[4910]: W0226 22:14:16.687466 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77b8f8f5_e1e3_4d68_80a1_fff99d000d3a.slice/crio-5248114999c4e2dcbe8b9e3a2f1f0771d4c356f6815844fb576f7dd2b2eaf721 WatchSource:0}: Error finding container 5248114999c4e2dcbe8b9e3a2f1f0771d4c356f6815844fb576f7dd2b2eaf721: Status 404 returned error can't find the container with id 
5248114999c4e2dcbe8b9e3a2f1f0771d4c356f6815844fb576f7dd2b2eaf721 Feb 26 22:14:16 crc kubenswrapper[4910]: I0226 22:14:16.697859 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85bcd67d77-jrqfq"] Feb 26 22:14:16 crc kubenswrapper[4910]: I0226 22:14:16.701507 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-csc7d"] Feb 26 22:14:16 crc kubenswrapper[4910]: E0226 22:14:16.702848 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zgbhf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-6bd4687957-csc7d_openstack-operators(77b8f8f5-e1e3-4d68-80a1-fff99d000d3a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 26 22:14:16 crc kubenswrapper[4910]: E0226 22:14:16.703941 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-csc7d" podUID="77b8f8f5-e1e3-4d68-80a1-fff99d000d3a" Feb 26 22:14:16 crc kubenswrapper[4910]: I0226 22:14:16.757890 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/12f2404a-45bb-416e-b4d4-da70f869fbbf-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2\" (UID: \"12f2404a-45bb-416e-b4d4-da70f869fbbf\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2" Feb 26 22:14:16 crc kubenswrapper[4910]: E0226 22:14:16.758057 4910 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 22:14:16 crc kubenswrapper[4910]: E0226 22:14:16.758107 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12f2404a-45bb-416e-b4d4-da70f869fbbf-cert podName:12f2404a-45bb-416e-b4d4-da70f869fbbf nodeName:}" failed. No retries permitted until 2026-02-26 22:14:18.758093617 +0000 UTC m=+1143.837584158 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/12f2404a-45bb-416e-b4d4-da70f869fbbf-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2" (UID: "12f2404a-45bb-416e-b4d4-da70f869fbbf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 22:14:16 crc kubenswrapper[4910]: I0226 22:14:16.814072 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-jjxzk"] Feb 26 22:14:16 crc kubenswrapper[4910]: I0226 22:14:16.819677 4910 scope.go:117] "RemoveContainer" containerID="2d75876cea56b32846486fadf0e58dce016c09c7703aaa9f82d1e34b6fe08f5a" Feb 26 22:14:16 crc kubenswrapper[4910]: I0226 22:14:16.824004 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b7vkl"] Feb 26 22:14:16 crc kubenswrapper[4910]: W0226 22:14:16.830114 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b1b5fc5_da86_41ae_996e_9273627c5e62.slice/crio-07ce589fecacad9c06731347851a4a2d4b38814b15ab2c1aebea99ffd5c7a3f7 WatchSource:0}: Error finding container 07ce589fecacad9c06731347851a4a2d4b38814b15ab2c1aebea99ffd5c7a3f7: 
Status 404 returned error can't find the container with id 07ce589fecacad9c06731347851a4a2d4b38814b15ab2c1aebea99ffd5c7a3f7 Feb 26 22:14:16 crc kubenswrapper[4910]: E0226 22:14:16.832582 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-98h86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-bccc79885-jjxzk_openstack-operators(3b1b5fc5-da86-41ae-996e-9273627c5e62): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 26 22:14:16 crc kubenswrapper[4910]: E0226 22:14:16.834708 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-jjxzk" podUID="3b1b5fc5-da86-41ae-996e-9273627c5e62" Feb 26 22:14:16 crc kubenswrapper[4910]: W0226 22:14:16.838441 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3c3500f_cc15_4b62_b13f_b99aeb97a413.slice/crio-c6f55d7d0e334a67982c6c6f1976e33b4c70f267ea253126adf821d42adb044d WatchSource:0}: Error finding container c6f55d7d0e334a67982c6c6f1976e33b4c70f267ea253126adf821d42adb044d: Status 404 returned error can't find the container with id c6f55d7d0e334a67982c6c6f1976e33b4c70f267ea253126adf821d42adb044d Feb 26 22:14:17 crc kubenswrapper[4910]: I0226 22:14:17.164178 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-webhook-certs\") pod \"openstack-operator-controller-manager-69ffc89494-n87vn\" (UID: \"bfeb8151-7f06-4d68-a3a2-d4c267563b43\") " pod="openstack-operators/openstack-operator-controller-manager-69ffc89494-n87vn" Feb 26 22:14:17 crc kubenswrapper[4910]: I0226 22:14:17.164214 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-metrics-certs\") pod \"openstack-operator-controller-manager-69ffc89494-n87vn\" (UID: \"bfeb8151-7f06-4d68-a3a2-d4c267563b43\") " pod="openstack-operators/openstack-operator-controller-manager-69ffc89494-n87vn" Feb 26 22:14:17 crc kubenswrapper[4910]: E0226 22:14:17.164357 4910 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 22:14:17 crc kubenswrapper[4910]: E0226 22:14:17.164401 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-metrics-certs podName:bfeb8151-7f06-4d68-a3a2-d4c267563b43 nodeName:}" failed. No retries permitted until 2026-02-26 22:14:19.164388941 +0000 UTC m=+1144.243879482 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-metrics-certs") pod "openstack-operator-controller-manager-69ffc89494-n87vn" (UID: "bfeb8151-7f06-4d68-a3a2-d4c267563b43") : secret "metrics-server-cert" not found Feb 26 22:14:17 crc kubenswrapper[4910]: E0226 22:14:17.164452 4910 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 22:14:17 crc kubenswrapper[4910]: E0226 22:14:17.164522 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-webhook-certs podName:bfeb8151-7f06-4d68-a3a2-d4c267563b43 nodeName:}" failed. No retries permitted until 2026-02-26 22:14:19.164503234 +0000 UTC m=+1144.243993775 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-webhook-certs") pod "openstack-operator-controller-manager-69ffc89494-n87vn" (UID: "bfeb8151-7f06-4d68-a3a2-d4c267563b43") : secret "webhook-server-cert" not found Feb 26 22:14:17 crc kubenswrapper[4910]: I0226 22:14:17.234708 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b7vkl" event={"ID":"f3c3500f-cc15-4b62-b13f-b99aeb97a413","Type":"ContainerStarted","Data":"c6f55d7d0e334a67982c6c6f1976e33b4c70f267ea253126adf821d42adb044d"} Feb 26 22:14:17 crc kubenswrapper[4910]: I0226 22:14:17.236046 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dt9gd" event={"ID":"1d3bc056-7a65-4188-b408-66892dbc6c86","Type":"ContainerStarted","Data":"8b01145de04f029cd6e2423f388a8e59a96f613e4d0c6b1362e3e1ee3a7351b3"} Feb 26 22:14:17 crc kubenswrapper[4910]: I0226 22:14:17.238172 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-smv6m" event={"ID":"aff3f8d4-51e9-4557-bc9d-497d587f667a","Type":"ContainerStarted","Data":"cc6da77fadf4d6e6e97742907d5037e1dfb573ced89185661a3e882905611e9f"} Feb 26 22:14:17 crc kubenswrapper[4910]: I0226 22:14:17.241323 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-t4bqx" event={"ID":"97af21d2-e5ce-468b-bbf9-0e663577a30b","Type":"ContainerStarted","Data":"2ab04d916f9fa7756db0d19aa0935773b0fa4500890e4a80de7c27b34492e3f8"} Feb 26 22:14:17 crc kubenswrapper[4910]: I0226 22:14:17.244293 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-csc7d" event={"ID":"77b8f8f5-e1e3-4d68-80a1-fff99d000d3a","Type":"ContainerStarted","Data":"5248114999c4e2dcbe8b9e3a2f1f0771d4c356f6815844fb576f7dd2b2eaf721"} Feb 26 22:14:17 crc kubenswrapper[4910]: I0226 22:14:17.245879 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-vsr8s" event={"ID":"9a4baffc-2491-42c4-838b-1ef90d643817","Type":"ContainerStarted","Data":"a0f9fac984f91ca2fac59cd1498a340ac041b38a8bde842e08578ec07afeaf06"} Feb 26 22:14:17 crc kubenswrapper[4910]: I0226 22:14:17.247018 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-v9ll5" event={"ID":"ee918cc9-f4d9-49b1-9d9e-1d37c4aa7946","Type":"ContainerStarted","Data":"b0e991af58c1f3e941b05c094920872ad66fd0b23008e0fe85acc6a08ab57732"} Feb 26 22:14:17 crc kubenswrapper[4910]: E0226 22:14:17.247441 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf\\\"\"" 
pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-csc7d" podUID="77b8f8f5-e1e3-4d68-80a1-fff99d000d3a" Feb 26 22:14:17 crc kubenswrapper[4910]: I0226 22:14:17.249543 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-nrbns" event={"ID":"6a6c3a70-66b0-4b20-b4cd-fa1d8fbc228e","Type":"ContainerStarted","Data":"f7314ad478440ce6de9f9e44bbeee362e1fdceafa8e13233a98dc3d9a8bb820f"} Feb 26 22:14:17 crc kubenswrapper[4910]: I0226 22:14:17.252461 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-r6zct" event={"ID":"d2d30287-5f0f-45fd-ae7f-23614ffab2fc","Type":"ContainerStarted","Data":"ef2952e8c0a5f4eb38b9d19c4552c6c2aeb8fefdac72bb0a00a04c207eff28c0"} Feb 26 22:14:17 crc kubenswrapper[4910]: E0226 22:14:17.253513 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-r6zct" podUID="d2d30287-5f0f-45fd-ae7f-23614ffab2fc" Feb 26 22:14:17 crc kubenswrapper[4910]: I0226 22:14:17.260487 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85bcd67d77-jrqfq" event={"ID":"4581d31c-adac-40f4-80ec-53142bc04c02","Type":"ContainerStarted","Data":"3fd01d83b1943dd4bc90b9ddc94cc6baa70f45df1d63c2acb998951674919435"} Feb 26 22:14:17 crc kubenswrapper[4910]: I0226 22:14:17.261758 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-24d89" event={"ID":"05b1662b-98cb-4867-9cf1-4272c173cf1f","Type":"ContainerStarted","Data":"1719b650358f563c4cf88772c8207827b9e0c59ba36f5cf81b2d7e62acb73e2d"} Feb 
26 22:14:17 crc kubenswrapper[4910]: I0226 22:14:17.264650 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6pf2w" event={"ID":"3d9f959c-b8a6-415a-adf5-a20b0fbc3511","Type":"ContainerStarted","Data":"044308a57fadff78ae7437d8d176f2d94fa17e6f63ee25f4b4fdcc724ccf3976"} Feb 26 22:14:17 crc kubenswrapper[4910]: I0226 22:14:17.266561 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-jjxzk" event={"ID":"3b1b5fc5-da86-41ae-996e-9273627c5e62","Type":"ContainerStarted","Data":"07ce589fecacad9c06731347851a4a2d4b38814b15ab2c1aebea99ffd5c7a3f7"} Feb 26 22:14:17 crc kubenswrapper[4910]: E0226 22:14:17.268659 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.107:5001/openstack-k8s-operators/telemetry-operator:39a4be8a175d9e84fa6ba159f906a95524540b13\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85bcd67d77-jrqfq" podUID="4581d31c-adac-40f4-80ec-53142bc04c02" Feb 26 22:14:17 crc kubenswrapper[4910]: I0226 22:14:17.268790 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-dkwl5" event={"ID":"1e643756-1a6a-4654-af77-5b9d0f1433f2","Type":"ContainerStarted","Data":"f1f6d0accfdae05013799b53c4c516f3c85ffad44162b634e50d7e72acb0d096"} Feb 26 22:14:17 crc kubenswrapper[4910]: I0226 22:14:17.269841 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4q6xr" event={"ID":"9fad2a9b-74b5-4bbf-a031-949aef704413","Type":"ContainerStarted","Data":"9dcdd23e711ff7c66d8fdf61fe66651f4e45074bd2bf2f80b5bbf2051f04f75a"} Feb 26 22:14:17 crc kubenswrapper[4910]: E0226 22:14:17.270136 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-jjxzk" podUID="3b1b5fc5-da86-41ae-996e-9273627c5e62" Feb 26 22:14:17 crc kubenswrapper[4910]: I0226 22:14:17.270910 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-czqwc" event={"ID":"f93e1099-5db6-45f0-a344-5d05183572d1","Type":"ContainerStarted","Data":"395b0cfe47a5dcb868f58ef4c82fef6109b3770bfb8aa68ed88b5a5c1961f7c5"} Feb 26 22:14:18 crc kubenswrapper[4910]: E0226 22:14:18.279209 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-jjxzk" podUID="3b1b5fc5-da86-41ae-996e-9273627c5e62" Feb 26 22:14:18 crc kubenswrapper[4910]: E0226 22:14:18.279615 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-r6zct" podUID="d2d30287-5f0f-45fd-ae7f-23614ffab2fc" Feb 26 22:14:18 crc kubenswrapper[4910]: E0226 22:14:18.279692 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.107:5001/openstack-k8s-operators/telemetry-operator:39a4be8a175d9e84fa6ba159f906a95524540b13\\\"\"" 
pod="openstack-operators/telemetry-operator-controller-manager-85bcd67d77-jrqfq" podUID="4581d31c-adac-40f4-80ec-53142bc04c02" Feb 26 22:14:18 crc kubenswrapper[4910]: E0226 22:14:18.279754 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-csc7d" podUID="77b8f8f5-e1e3-4d68-80a1-fff99d000d3a" Feb 26 22:14:18 crc kubenswrapper[4910]: I0226 22:14:18.713300 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/baa339f1-af25-4789-899c-b6ffed7c4ac0-cert\") pod \"infra-operator-controller-manager-79d975b745-j9hll\" (UID: \"baa339f1-af25-4789-899c-b6ffed7c4ac0\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-j9hll" Feb 26 22:14:18 crc kubenswrapper[4910]: E0226 22:14:18.713531 4910 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 22:14:18 crc kubenswrapper[4910]: E0226 22:14:18.713576 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/baa339f1-af25-4789-899c-b6ffed7c4ac0-cert podName:baa339f1-af25-4789-899c-b6ffed7c4ac0 nodeName:}" failed. No retries permitted until 2026-02-26 22:14:22.713562424 +0000 UTC m=+1147.793052965 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/baa339f1-af25-4789-899c-b6ffed7c4ac0-cert") pod "infra-operator-controller-manager-79d975b745-j9hll" (UID: "baa339f1-af25-4789-899c-b6ffed7c4ac0") : secret "infra-operator-webhook-server-cert" not found Feb 26 22:14:18 crc kubenswrapper[4910]: I0226 22:14:18.815267 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/12f2404a-45bb-416e-b4d4-da70f869fbbf-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2\" (UID: \"12f2404a-45bb-416e-b4d4-da70f869fbbf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2" Feb 26 22:14:18 crc kubenswrapper[4910]: E0226 22:14:18.815390 4910 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 22:14:18 crc kubenswrapper[4910]: E0226 22:14:18.815461 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12f2404a-45bb-416e-b4d4-da70f869fbbf-cert podName:12f2404a-45bb-416e-b4d4-da70f869fbbf nodeName:}" failed. No retries permitted until 2026-02-26 22:14:22.815442084 +0000 UTC m=+1147.894932625 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/12f2404a-45bb-416e-b4d4-da70f869fbbf-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2" (UID: "12f2404a-45bb-416e-b4d4-da70f869fbbf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 22:14:19 crc kubenswrapper[4910]: I0226 22:14:19.219975 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-webhook-certs\") pod \"openstack-operator-controller-manager-69ffc89494-n87vn\" (UID: \"bfeb8151-7f06-4d68-a3a2-d4c267563b43\") " pod="openstack-operators/openstack-operator-controller-manager-69ffc89494-n87vn" Feb 26 22:14:19 crc kubenswrapper[4910]: I0226 22:14:19.220017 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-metrics-certs\") pod \"openstack-operator-controller-manager-69ffc89494-n87vn\" (UID: \"bfeb8151-7f06-4d68-a3a2-d4c267563b43\") " pod="openstack-operators/openstack-operator-controller-manager-69ffc89494-n87vn" Feb 26 22:14:19 crc kubenswrapper[4910]: E0226 22:14:19.220109 4910 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 22:14:19 crc kubenswrapper[4910]: E0226 22:14:19.220176 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-webhook-certs podName:bfeb8151-7f06-4d68-a3a2-d4c267563b43 nodeName:}" failed. No retries permitted until 2026-02-26 22:14:23.220147754 +0000 UTC m=+1148.299638295 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-webhook-certs") pod "openstack-operator-controller-manager-69ffc89494-n87vn" (UID: "bfeb8151-7f06-4d68-a3a2-d4c267563b43") : secret "webhook-server-cert" not found Feb 26 22:14:19 crc kubenswrapper[4910]: E0226 22:14:19.220247 4910 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 22:14:19 crc kubenswrapper[4910]: E0226 22:14:19.220326 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-metrics-certs podName:bfeb8151-7f06-4d68-a3a2-d4c267563b43 nodeName:}" failed. No retries permitted until 2026-02-26 22:14:23.22030533 +0000 UTC m=+1148.299795871 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-metrics-certs") pod "openstack-operator-controller-manager-69ffc89494-n87vn" (UID: "bfeb8151-7f06-4d68-a3a2-d4c267563b43") : secret "metrics-server-cert" not found Feb 26 22:14:22 crc kubenswrapper[4910]: I0226 22:14:22.772425 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/baa339f1-af25-4789-899c-b6ffed7c4ac0-cert\") pod \"infra-operator-controller-manager-79d975b745-j9hll\" (UID: \"baa339f1-af25-4789-899c-b6ffed7c4ac0\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-j9hll" Feb 26 22:14:22 crc kubenswrapper[4910]: E0226 22:14:22.772964 4910 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 22:14:22 crc kubenswrapper[4910]: E0226 22:14:22.773008 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/baa339f1-af25-4789-899c-b6ffed7c4ac0-cert 
podName:baa339f1-af25-4789-899c-b6ffed7c4ac0 nodeName:}" failed. No retries permitted until 2026-02-26 22:14:30.772994467 +0000 UTC m=+1155.852485008 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/baa339f1-af25-4789-899c-b6ffed7c4ac0-cert") pod "infra-operator-controller-manager-79d975b745-j9hll" (UID: "baa339f1-af25-4789-899c-b6ffed7c4ac0") : secret "infra-operator-webhook-server-cert" not found Feb 26 22:14:22 crc kubenswrapper[4910]: I0226 22:14:22.874499 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/12f2404a-45bb-416e-b4d4-da70f869fbbf-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2\" (UID: \"12f2404a-45bb-416e-b4d4-da70f869fbbf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2" Feb 26 22:14:22 crc kubenswrapper[4910]: E0226 22:14:22.874754 4910 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 22:14:22 crc kubenswrapper[4910]: E0226 22:14:22.874844 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12f2404a-45bb-416e-b4d4-da70f869fbbf-cert podName:12f2404a-45bb-416e-b4d4-da70f869fbbf nodeName:}" failed. No retries permitted until 2026-02-26 22:14:30.874821545 +0000 UTC m=+1155.954312106 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/12f2404a-45bb-416e-b4d4-da70f869fbbf-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2" (UID: "12f2404a-45bb-416e-b4d4-da70f869fbbf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 22:14:23 crc kubenswrapper[4910]: I0226 22:14:23.280930 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-webhook-certs\") pod \"openstack-operator-controller-manager-69ffc89494-n87vn\" (UID: \"bfeb8151-7f06-4d68-a3a2-d4c267563b43\") " pod="openstack-operators/openstack-operator-controller-manager-69ffc89494-n87vn" Feb 26 22:14:23 crc kubenswrapper[4910]: I0226 22:14:23.280970 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-metrics-certs\") pod \"openstack-operator-controller-manager-69ffc89494-n87vn\" (UID: \"bfeb8151-7f06-4d68-a3a2-d4c267563b43\") " pod="openstack-operators/openstack-operator-controller-manager-69ffc89494-n87vn" Feb 26 22:14:23 crc kubenswrapper[4910]: E0226 22:14:23.281198 4910 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 22:14:23 crc kubenswrapper[4910]: E0226 22:14:23.281259 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-metrics-certs podName:bfeb8151-7f06-4d68-a3a2-d4c267563b43 nodeName:}" failed. No retries permitted until 2026-02-26 22:14:31.281229012 +0000 UTC m=+1156.360719553 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-metrics-certs") pod "openstack-operator-controller-manager-69ffc89494-n87vn" (UID: "bfeb8151-7f06-4d68-a3a2-d4c267563b43") : secret "metrics-server-cert" not found Feb 26 22:14:23 crc kubenswrapper[4910]: E0226 22:14:23.281524 4910 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 22:14:23 crc kubenswrapper[4910]: E0226 22:14:23.281590 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-webhook-certs podName:bfeb8151-7f06-4d68-a3a2-d4c267563b43 nodeName:}" failed. No retries permitted until 2026-02-26 22:14:31.281571581 +0000 UTC m=+1156.361062122 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-webhook-certs") pod "openstack-operator-controller-manager-69ffc89494-n87vn" (UID: "bfeb8151-7f06-4d68-a3a2-d4c267563b43") : secret "webhook-server-cert" not found Feb 26 22:14:25 crc kubenswrapper[4910]: I0226 22:14:25.727484 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:14:25 crc kubenswrapper[4910]: I0226 22:14:25.727879 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:14:25 crc kubenswrapper[4910]: I0226 22:14:25.727967 4910 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" Feb 26 22:14:25 crc kubenswrapper[4910]: I0226 22:14:25.728787 4910 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ea55f05369c1b8e8cc6600dec4dd7568856f1c31173a49e14886d4d1e1c338d"} pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 22:14:25 crc kubenswrapper[4910]: I0226 22:14:25.728863 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" containerID="cri-o://8ea55f05369c1b8e8cc6600dec4dd7568856f1c31173a49e14886d4d1e1c338d" gracePeriod=600 Feb 26 22:14:26 crc kubenswrapper[4910]: I0226 22:14:26.357027 4910 generic.go:334] "Generic (PLEG): container finished" podID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerID="8ea55f05369c1b8e8cc6600dec4dd7568856f1c31173a49e14886d4d1e1c338d" exitCode=0 Feb 26 22:14:26 crc kubenswrapper[4910]: I0226 22:14:26.357087 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerDied","Data":"8ea55f05369c1b8e8cc6600dec4dd7568856f1c31173a49e14886d4d1e1c338d"} Feb 26 22:14:26 crc kubenswrapper[4910]: I0226 22:14:26.357131 4910 scope.go:117] "RemoveContainer" containerID="b8aa69230a8076fd0ec023976cd59eeefa38746d90bf2e2c7d3f40e007a0afc9" Feb 26 22:14:29 crc kubenswrapper[4910]: E0226 22:14:29.619109 4910 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" Feb 26 22:14:29 crc kubenswrapper[4910]: E0226 22:14:29.619644 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8w8kf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69f49c598c-mqd2x_openstack-operators(1f356802-8833-4a65-8e65-c9bab59c1080): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 22:14:29 crc kubenswrapper[4910]: E0226 22:14:29.620832 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mqd2x" podUID="1f356802-8833-4a65-8e65-c9bab59c1080" Feb 26 22:14:30 crc kubenswrapper[4910]: E0226 22:14:30.317892 4910 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" Feb 26 22:14:30 crc kubenswrapper[4910]: E0226 22:14:30.318465 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wv4g9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-czqwc_openstack-operators(f93e1099-5db6-45f0-a344-5d05183572d1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 22:14:30 crc kubenswrapper[4910]: E0226 22:14:30.319930 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-czqwc" podUID="f93e1099-5db6-45f0-a344-5d05183572d1" Feb 26 22:14:30 crc kubenswrapper[4910]: E0226 22:14:30.391729 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-czqwc" podUID="f93e1099-5db6-45f0-a344-5d05183572d1" Feb 26 22:14:30 crc kubenswrapper[4910]: E0226 22:14:30.392288 4910 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mqd2x" podUID="1f356802-8833-4a65-8e65-c9bab59c1080" Feb 26 22:14:30 crc kubenswrapper[4910]: I0226 22:14:30.825054 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/baa339f1-af25-4789-899c-b6ffed7c4ac0-cert\") pod \"infra-operator-controller-manager-79d975b745-j9hll\" (UID: \"baa339f1-af25-4789-899c-b6ffed7c4ac0\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-j9hll" Feb 26 22:14:30 crc kubenswrapper[4910]: I0226 22:14:30.831583 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/baa339f1-af25-4789-899c-b6ffed7c4ac0-cert\") pod \"infra-operator-controller-manager-79d975b745-j9hll\" (UID: \"baa339f1-af25-4789-899c-b6ffed7c4ac0\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-j9hll" Feb 26 22:14:30 crc kubenswrapper[4910]: I0226 22:14:30.926976 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/12f2404a-45bb-416e-b4d4-da70f869fbbf-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2\" (UID: \"12f2404a-45bb-416e-b4d4-da70f869fbbf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2" Feb 26 22:14:30 crc kubenswrapper[4910]: I0226 22:14:30.932586 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/12f2404a-45bb-416e-b4d4-da70f869fbbf-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2\" (UID: \"12f2404a-45bb-416e-b4d4-da70f869fbbf\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2" Feb 26 22:14:31 crc kubenswrapper[4910]: I0226 22:14:31.079376 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-br79p" Feb 26 22:14:31 crc kubenswrapper[4910]: I0226 22:14:31.089698 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2" Feb 26 22:14:31 crc kubenswrapper[4910]: I0226 22:14:31.094629 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-krw5r" Feb 26 22:14:31 crc kubenswrapper[4910]: I0226 22:14:31.100956 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-j9hll" Feb 26 22:14:31 crc kubenswrapper[4910]: I0226 22:14:31.336889 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-webhook-certs\") pod \"openstack-operator-controller-manager-69ffc89494-n87vn\" (UID: \"bfeb8151-7f06-4d68-a3a2-d4c267563b43\") " pod="openstack-operators/openstack-operator-controller-manager-69ffc89494-n87vn" Feb 26 22:14:31 crc kubenswrapper[4910]: I0226 22:14:31.337261 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-metrics-certs\") pod \"openstack-operator-controller-manager-69ffc89494-n87vn\" (UID: \"bfeb8151-7f06-4d68-a3a2-d4c267563b43\") " pod="openstack-operators/openstack-operator-controller-manager-69ffc89494-n87vn" Feb 26 22:14:31 crc kubenswrapper[4910]: I0226 22:14:31.340941 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-metrics-certs\") pod \"openstack-operator-controller-manager-69ffc89494-n87vn\" (UID: \"bfeb8151-7f06-4d68-a3a2-d4c267563b43\") " pod="openstack-operators/openstack-operator-controller-manager-69ffc89494-n87vn" Feb 26 22:14:31 crc kubenswrapper[4910]: I0226 22:14:31.342035 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bfeb8151-7f06-4d68-a3a2-d4c267563b43-webhook-certs\") pod \"openstack-operator-controller-manager-69ffc89494-n87vn\" (UID: \"bfeb8151-7f06-4d68-a3a2-d4c267563b43\") " pod="openstack-operators/openstack-operator-controller-manager-69ffc89494-n87vn" Feb 26 22:14:31 crc kubenswrapper[4910]: I0226 22:14:31.643954 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-xw6nc" Feb 26 22:14:31 crc kubenswrapper[4910]: I0226 22:14:31.652529 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-69ffc89494-n87vn" Feb 26 22:14:32 crc kubenswrapper[4910]: E0226 22:14:32.038092 4910 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 26 22:14:32 crc kubenswrapper[4910]: E0226 22:14:32.038250 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ndb7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-dkwl5_openstack-operators(1e643756-1a6a-4654-af77-5b9d0f1433f2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 22:14:32 crc kubenswrapper[4910]: E0226 22:14:32.039513 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-dkwl5" podUID="1e643756-1a6a-4654-af77-5b9d0f1433f2" Feb 26 22:14:32 crc kubenswrapper[4910]: E0226 22:14:32.407898 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-dkwl5" podUID="1e643756-1a6a-4654-af77-5b9d0f1433f2" Feb 26 22:14:32 crc kubenswrapper[4910]: E0226 22:14:32.686422 4910 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 26 22:14:32 crc kubenswrapper[4910]: E0226 22:14:32.686634 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hrnpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-24d89_openstack-operators(05b1662b-98cb-4867-9cf1-4272c173cf1f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 22:14:32 crc kubenswrapper[4910]: E0226 22:14:32.687907 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-24d89" podUID="05b1662b-98cb-4867-9cf1-4272c173cf1f" Feb 26 22:14:33 crc kubenswrapper[4910]: E0226 22:14:33.415011 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-24d89" podUID="05b1662b-98cb-4867-9cf1-4272c173cf1f" Feb 26 22:14:38 crc kubenswrapper[4910]: E0226 22:14:38.956686 4910 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 26 22:14:38 crc kubenswrapper[4910]: E0226 22:14:38.957321 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sk4p6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-b7vkl_openstack-operators(f3c3500f-cc15-4b62-b13f-b99aeb97a413): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 22:14:38 crc kubenswrapper[4910]: E0226 22:14:38.959372 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b7vkl" podUID="f3c3500f-cc15-4b62-b13f-b99aeb97a413" Feb 26 22:14:39 crc kubenswrapper[4910]: E0226 22:14:39.468321 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b7vkl" podUID="f3c3500f-cc15-4b62-b13f-b99aeb97a413" Feb 26 22:14:40 crc 
kubenswrapper[4910]: I0226 22:14:40.160381 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-j9hll"] Feb 26 22:14:40 crc kubenswrapper[4910]: I0226 22:14:40.172431 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2"] Feb 26 22:14:40 crc kubenswrapper[4910]: I0226 22:14:40.265006 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ffc89494-n87vn"] Feb 26 22:14:40 crc kubenswrapper[4910]: I0226 22:14:40.484065 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-7n5lg" event={"ID":"1326400f-df88-407f-807c-05182d879101","Type":"ContainerStarted","Data":"c6540c5c032ec8dec7c7581fb48e87de597f9902f078f4936e7dab60dfb0189d"} Feb 26 22:14:40 crc kubenswrapper[4910]: I0226 22:14:40.486905 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-7n5lg" Feb 26 22:14:40 crc kubenswrapper[4910]: I0226 22:14:40.495580 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerStarted","Data":"86111bdf5fb42a19cad2fb6eff7efddfcb0bd79e217fa1c7fe5451bfc269072f"} Feb 26 22:14:40 crc kubenswrapper[4910]: I0226 22:14:40.516826 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-7n5lg" podStartSLOduration=10.812051532 podStartE2EDuration="26.516800578s" podCreationTimestamp="2026-02-26 22:14:14 +0000 UTC" firstStartedPulling="2026-02-26 22:14:15.656112546 +0000 UTC m=+1140.735603087" lastFinishedPulling="2026-02-26 22:14:31.360861552 +0000 UTC m=+1156.440352133" 
observedRunningTime="2026-02-26 22:14:40.50230388 +0000 UTC m=+1165.581794461" watchObservedRunningTime="2026-02-26 22:14:40.516800578 +0000 UTC m=+1165.596291159" Feb 26 22:14:41 crc kubenswrapper[4910]: I0226 22:14:41.508180 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dt9gd" event={"ID":"1d3bc056-7a65-4188-b408-66892dbc6c86","Type":"ContainerStarted","Data":"725ab3fd65a15fe055807bc41c7ceee892d5750416a64b72092cb753b34a1606"} Feb 26 22:14:41 crc kubenswrapper[4910]: I0226 22:14:41.508529 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dt9gd" Feb 26 22:14:41 crc kubenswrapper[4910]: I0226 22:14:41.512019 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-v9ll5" event={"ID":"ee918cc9-f4d9-49b1-9d9e-1d37c4aa7946","Type":"ContainerStarted","Data":"0c78723b8af9bb42fc252934b72278fb733a38b808b44dd8076935089fa142e5"} Feb 26 22:14:41 crc kubenswrapper[4910]: I0226 22:14:41.512204 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-67d996989d-v9ll5" Feb 26 22:14:41 crc kubenswrapper[4910]: I0226 22:14:41.513566 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-nrbns" event={"ID":"6a6c3a70-66b0-4b20-b4cd-fa1d8fbc228e","Type":"ContainerStarted","Data":"71c56a4913f8ea8a92f9d911075a58cf3b3bc8173cc286a20c33eb4c214fc3c8"} Feb 26 22:14:41 crc kubenswrapper[4910]: I0226 22:14:41.513686 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-nrbns" Feb 26 22:14:41 crc kubenswrapper[4910]: I0226 22:14:41.514776 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-lkjvb" event={"ID":"19d23d2a-dce7-45c4-9cbd-ae14e8205aa7","Type":"ContainerStarted","Data":"5aaa7cdfd3435d7ba1406415ffb218f84abde36b8e98a0de5b398d0266ff3a32"} Feb 26 22:14:41 crc kubenswrapper[4910]: I0226 22:14:41.514860 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-lkjvb" Feb 26 22:14:41 crc kubenswrapper[4910]: I0226 22:14:41.517890 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6pf2w" event={"ID":"3d9f959c-b8a6-415a-adf5-a20b0fbc3511","Type":"ContainerStarted","Data":"c7235796363e99803294676882e3bb33b9c0af5511f9811e1077baf42936841e"} Feb 26 22:14:41 crc kubenswrapper[4910]: I0226 22:14:41.530131 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dt9gd" podStartSLOduration=11.962347437 podStartE2EDuration="27.530112811s" podCreationTimestamp="2026-02-26 22:14:14 +0000 UTC" firstStartedPulling="2026-02-26 22:14:16.442320482 +0000 UTC m=+1141.521811023" lastFinishedPulling="2026-02-26 22:14:32.010085856 +0000 UTC m=+1157.089576397" observedRunningTime="2026-02-26 22:14:41.525890834 +0000 UTC m=+1166.605381375" watchObservedRunningTime="2026-02-26 22:14:41.530112811 +0000 UTC m=+1166.609603352" Feb 26 22:14:41 crc kubenswrapper[4910]: I0226 22:14:41.561995 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-lkjvb" podStartSLOduration=6.20439059 podStartE2EDuration="27.561980602s" podCreationTimestamp="2026-02-26 22:14:14 +0000 UTC" firstStartedPulling="2026-02-26 22:14:16.046942617 +0000 UTC m=+1141.126433158" lastFinishedPulling="2026-02-26 22:14:37.404532589 +0000 UTC m=+1162.484023170" observedRunningTime="2026-02-26 22:14:41.556350578 
+0000 UTC m=+1166.635841119" watchObservedRunningTime="2026-02-26 22:14:41.561980602 +0000 UTC m=+1166.641471143" Feb 26 22:14:41 crc kubenswrapper[4910]: I0226 22:14:41.576366 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-nrbns" podStartSLOduration=6.604553007 podStartE2EDuration="27.576340686s" podCreationTimestamp="2026-02-26 22:14:14 +0000 UTC" firstStartedPulling="2026-02-26 22:14:16.432850923 +0000 UTC m=+1141.512341464" lastFinishedPulling="2026-02-26 22:14:37.404638602 +0000 UTC m=+1162.484129143" observedRunningTime="2026-02-26 22:14:41.574851505 +0000 UTC m=+1166.654342056" watchObservedRunningTime="2026-02-26 22:14:41.576340686 +0000 UTC m=+1166.655831227" Feb 26 22:14:41 crc kubenswrapper[4910]: I0226 22:14:41.601414 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-67d996989d-v9ll5" podStartSLOduration=5.34422158 podStartE2EDuration="27.601392971s" podCreationTimestamp="2026-02-26 22:14:14 +0000 UTC" firstStartedPulling="2026-02-26 22:14:16.43531118 +0000 UTC m=+1141.514801721" lastFinishedPulling="2026-02-26 22:14:38.692482531 +0000 UTC m=+1163.771973112" observedRunningTime="2026-02-26 22:14:41.59842475 +0000 UTC m=+1166.677915301" watchObservedRunningTime="2026-02-26 22:14:41.601392971 +0000 UTC m=+1166.680883512" Feb 26 22:14:41 crc kubenswrapper[4910]: I0226 22:14:41.617251 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6pf2w" podStartSLOduration=11.985384486 podStartE2EDuration="27.617232976s" podCreationTimestamp="2026-02-26 22:14:14 +0000 UTC" firstStartedPulling="2026-02-26 22:14:16.377544618 +0000 UTC m=+1141.457035159" lastFinishedPulling="2026-02-26 22:14:32.009393108 +0000 UTC m=+1157.088883649" observedRunningTime="2026-02-26 22:14:41.614924342 +0000 UTC 
m=+1166.694414883" watchObservedRunningTime="2026-02-26 22:14:41.617232976 +0000 UTC m=+1166.696723517" Feb 26 22:14:42 crc kubenswrapper[4910]: W0226 22:14:42.057394 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfeb8151_7f06_4d68_a3a2_d4c267563b43.slice/crio-4ac7431409030176cd5fc7d56412ad0ba848a929d05450869367610f9662e924 WatchSource:0}: Error finding container 4ac7431409030176cd5fc7d56412ad0ba848a929d05450869367610f9662e924: Status 404 returned error can't find the container with id 4ac7431409030176cd5fc7d56412ad0ba848a929d05450869367610f9662e924 Feb 26 22:14:42 crc kubenswrapper[4910]: I0226 22:14:42.536973 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2" event={"ID":"12f2404a-45bb-416e-b4d4-da70f869fbbf","Type":"ContainerStarted","Data":"03ab70c41b2f4db987088c6a718b15dc47f4ae43e7ad9526c1baf25ef9791f4d"} Feb 26 22:14:42 crc kubenswrapper[4910]: I0226 22:14:42.542454 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-jjxzk" event={"ID":"3b1b5fc5-da86-41ae-996e-9273627c5e62","Type":"ContainerStarted","Data":"889185b2dfbd1ba0c6be16c8fffbe4a50433d90e11624038826318d4cf3b1781"} Feb 26 22:14:42 crc kubenswrapper[4910]: I0226 22:14:42.542705 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-jjxzk" Feb 26 22:14:42 crc kubenswrapper[4910]: I0226 22:14:42.546210 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4q6xr" event={"ID":"9fad2a9b-74b5-4bbf-a031-949aef704413","Type":"ContainerStarted","Data":"5332e780a77ebbd1f5ad15b02d45be8f4ae134d037ffc96dd5eeb5a2d6797ff5"} Feb 26 22:14:42 crc kubenswrapper[4910]: I0226 22:14:42.546456 4910 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4q6xr" Feb 26 22:14:42 crc kubenswrapper[4910]: I0226 22:14:42.547569 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-csc7d" event={"ID":"77b8f8f5-e1e3-4d68-80a1-fff99d000d3a","Type":"ContainerStarted","Data":"7b3379250c331330478762b29e0ec89c8c223d90f71f0efdad696f03487bbb31"} Feb 26 22:14:42 crc kubenswrapper[4910]: I0226 22:14:42.547696 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-csc7d" Feb 26 22:14:42 crc kubenswrapper[4910]: I0226 22:14:42.555922 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85bcd67d77-jrqfq" event={"ID":"4581d31c-adac-40f4-80ec-53142bc04c02","Type":"ContainerStarted","Data":"fbd5c143999d88ebb0ff8eeb671fe0216c624be803ebe370d99656f1fac8e994"} Feb 26 22:14:42 crc kubenswrapper[4910]: I0226 22:14:42.556195 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85bcd67d77-jrqfq" Feb 26 22:14:42 crc kubenswrapper[4910]: I0226 22:14:42.567858 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-jjxzk" podStartSLOduration=4.736340396 podStartE2EDuration="27.567842642s" podCreationTimestamp="2026-02-26 22:14:15 +0000 UTC" firstStartedPulling="2026-02-26 22:14:16.832425882 +0000 UTC m=+1141.911916423" lastFinishedPulling="2026-02-26 22:14:39.663928128 +0000 UTC m=+1164.743418669" observedRunningTime="2026-02-26 22:14:42.563307847 +0000 UTC m=+1167.642798388" watchObservedRunningTime="2026-02-26 22:14:42.567842642 +0000 UTC m=+1167.647333183" Feb 26 22:14:42 crc kubenswrapper[4910]: I0226 22:14:42.584617 4910 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-csc7d" podStartSLOduration=3.072015252 podStartE2EDuration="28.584601651s" podCreationTimestamp="2026-02-26 22:14:14 +0000 UTC" firstStartedPulling="2026-02-26 22:14:16.702730682 +0000 UTC m=+1141.782221223" lastFinishedPulling="2026-02-26 22:14:42.215317081 +0000 UTC m=+1167.294807622" observedRunningTime="2026-02-26 22:14:42.58128773 +0000 UTC m=+1167.660778271" watchObservedRunningTime="2026-02-26 22:14:42.584601651 +0000 UTC m=+1167.664092182" Feb 26 22:14:42 crc kubenswrapper[4910]: I0226 22:14:42.590400 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-69ffc89494-n87vn" event={"ID":"bfeb8151-7f06-4d68-a3a2-d4c267563b43","Type":"ContainerStarted","Data":"4ac7431409030176cd5fc7d56412ad0ba848a929d05450869367610f9662e924"} Feb 26 22:14:42 crc kubenswrapper[4910]: I0226 22:14:42.591022 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-69ffc89494-n87vn" Feb 26 22:14:42 crc kubenswrapper[4910]: I0226 22:14:42.592143 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-j9hll" event={"ID":"baa339f1-af25-4789-899c-b6ffed7c4ac0","Type":"ContainerStarted","Data":"ae3d19c46351d5384b70a709abccf192f9f2fbf94ff2d4debb62278478fd7302"} Feb 26 22:14:42 crc kubenswrapper[4910]: I0226 22:14:42.593143 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-vsr8s" event={"ID":"9a4baffc-2491-42c4-838b-1ef90d643817","Type":"ContainerStarted","Data":"30967a871dfaf4ecc2f9d54b208bb313f5f0c9e7e8546e295aa5f74732dd69a1"} Feb 26 22:14:42 crc kubenswrapper[4910]: I0226 22:14:42.593544 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-vsr8s" Feb 26 22:14:42 crc kubenswrapper[4910]: I0226 22:14:42.594533 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-smv6m" event={"ID":"aff3f8d4-51e9-4557-bc9d-497d587f667a","Type":"ContainerStarted","Data":"7d6243c01c5d5b9f26dd5ebbec4bca58e0c446ea65ef5f719fd6fbe8208a46cf"} Feb 26 22:14:42 crc kubenswrapper[4910]: I0226 22:14:42.594855 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-smv6m" Feb 26 22:14:42 crc kubenswrapper[4910]: I0226 22:14:42.599875 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85bcd67d77-jrqfq" podStartSLOduration=3.44710935 podStartE2EDuration="27.599865619s" podCreationTimestamp="2026-02-26 22:14:15 +0000 UTC" firstStartedPulling="2026-02-26 22:14:16.682648291 +0000 UTC m=+1141.762138832" lastFinishedPulling="2026-02-26 22:14:40.83540456 +0000 UTC m=+1165.914895101" observedRunningTime="2026-02-26 22:14:42.595420747 +0000 UTC m=+1167.674911288" watchObservedRunningTime="2026-02-26 22:14:42.599865619 +0000 UTC m=+1167.679356150" Feb 26 22:14:42 crc kubenswrapper[4910]: I0226 22:14:42.656738 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-t4bqx" event={"ID":"97af21d2-e5ce-468b-bbf9-0e663577a30b","Type":"ContainerStarted","Data":"582295217bedad1b9595d9ff2fe4f15e3e8dc52a69abbf50e76bc8b56f5d35dd"} Feb 26 22:14:42 crc kubenswrapper[4910]: I0226 22:14:42.657047 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-t4bqx" Feb 26 22:14:42 crc kubenswrapper[4910]: I0226 22:14:42.670587 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-l7m5c" event={"ID":"71e40b76-d83e-41f5-a184-5d062a8291e4","Type":"ContainerStarted","Data":"7c05d5507f425d69909951f0d2d206cb539c5f1c2b49406ce4556c0a8c9640c7"} Feb 26 22:14:42 crc kubenswrapper[4910]: I0226 22:14:42.671603 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6pf2w" Feb 26 22:14:42 crc kubenswrapper[4910]: I0226 22:14:42.672372 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4q6xr" podStartSLOduration=4.799206617 podStartE2EDuration="27.672347993s" podCreationTimestamp="2026-02-26 22:14:15 +0000 UTC" firstStartedPulling="2026-02-26 22:14:16.675816024 +0000 UTC m=+1141.755306565" lastFinishedPulling="2026-02-26 22:14:39.5489574 +0000 UTC m=+1164.628447941" observedRunningTime="2026-02-26 22:14:42.647458762 +0000 UTC m=+1167.726949323" watchObservedRunningTime="2026-02-26 22:14:42.672347993 +0000 UTC m=+1167.751838534" Feb 26 22:14:42 crc kubenswrapper[4910]: I0226 22:14:42.681778 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-vsr8s" podStartSLOduration=6.367564639 podStartE2EDuration="28.681763831s" podCreationTimestamp="2026-02-26 22:14:14 +0000 UTC" firstStartedPulling="2026-02-26 22:14:16.676663368 +0000 UTC m=+1141.756153909" lastFinishedPulling="2026-02-26 22:14:38.99086256 +0000 UTC m=+1164.070353101" observedRunningTime="2026-02-26 22:14:42.680056554 +0000 UTC m=+1167.759547115" watchObservedRunningTime="2026-02-26 22:14:42.681763831 +0000 UTC m=+1167.761254372" Feb 26 22:14:42 crc kubenswrapper[4910]: I0226 22:14:42.748442 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-69ffc89494-n87vn" podStartSLOduration=27.748415926 
podStartE2EDuration="27.748415926s" podCreationTimestamp="2026-02-26 22:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:14:42.712505103 +0000 UTC m=+1167.791995644" watchObservedRunningTime="2026-02-26 22:14:42.748415926 +0000 UTC m=+1167.827906467" Feb 26 22:14:42 crc kubenswrapper[4910]: I0226 22:14:42.750463 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-smv6m" podStartSLOduration=6.435484739 podStartE2EDuration="28.750456032s" podCreationTimestamp="2026-02-26 22:14:14 +0000 UTC" firstStartedPulling="2026-02-26 22:14:16.676593296 +0000 UTC m=+1141.756083837" lastFinishedPulling="2026-02-26 22:14:38.991564559 +0000 UTC m=+1164.071055130" observedRunningTime="2026-02-26 22:14:42.74563351 +0000 UTC m=+1167.825124051" watchObservedRunningTime="2026-02-26 22:14:42.750456032 +0000 UTC m=+1167.829946573" Feb 26 22:14:42 crc kubenswrapper[4910]: I0226 22:14:42.818426 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-t4bqx" podStartSLOduration=6.56702351 podStartE2EDuration="28.818411262s" podCreationTimestamp="2026-02-26 22:14:14 +0000 UTC" firstStartedPulling="2026-02-26 22:14:16.441098109 +0000 UTC m=+1141.520588650" lastFinishedPulling="2026-02-26 22:14:38.692485821 +0000 UTC m=+1163.771976402" observedRunningTime="2026-02-26 22:14:42.781194163 +0000 UTC m=+1167.860684704" watchObservedRunningTime="2026-02-26 22:14:42.818411262 +0000 UTC m=+1167.897901803" Feb 26 22:14:42 crc kubenswrapper[4910]: I0226 22:14:42.818810 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-l7m5c" podStartSLOduration=7.258162411 podStartE2EDuration="28.818805873s" podCreationTimestamp="2026-02-26 
22:14:14 +0000 UTC" firstStartedPulling="2026-02-26 22:14:15.843820505 +0000 UTC m=+1140.923311046" lastFinishedPulling="2026-02-26 22:14:37.404463927 +0000 UTC m=+1162.483954508" observedRunningTime="2026-02-26 22:14:42.816424348 +0000 UTC m=+1167.895914889" watchObservedRunningTime="2026-02-26 22:14:42.818805873 +0000 UTC m=+1167.898296414" Feb 26 22:14:43 crc kubenswrapper[4910]: I0226 22:14:43.688134 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mqd2x" event={"ID":"1f356802-8833-4a65-8e65-c9bab59c1080","Type":"ContainerStarted","Data":"235c7891a10ec02e7a70b0a31b5885185e963700f72a68feba876f63329bc771"} Feb 26 22:14:43 crc kubenswrapper[4910]: I0226 22:14:43.689045 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mqd2x" Feb 26 22:14:43 crc kubenswrapper[4910]: I0226 22:14:43.698458 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-69ffc89494-n87vn" event={"ID":"bfeb8151-7f06-4d68-a3a2-d4c267563b43","Type":"ContainerStarted","Data":"cb9c6046e6d7e089c1a2ae19f6ca7c4e2d313b74b80d7d4a794daad3810e16dd"} Feb 26 22:14:43 crc kubenswrapper[4910]: I0226 22:14:43.704069 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-r6zct" event={"ID":"d2d30287-5f0f-45fd-ae7f-23614ffab2fc","Type":"ContainerStarted","Data":"9c1ac12f41548bd284a551d1e286a329dcd1b7e5082260b2d4b59dc82b9647c6"} Feb 26 22:14:43 crc kubenswrapper[4910]: I0226 22:14:43.704416 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-r6zct" Feb 26 22:14:43 crc kubenswrapper[4910]: I0226 22:14:43.705649 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-l7m5c" Feb 26 22:14:43 crc kubenswrapper[4910]: I0226 22:14:43.718205 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mqd2x" podStartSLOduration=3.363732488 podStartE2EDuration="29.718192727s" podCreationTimestamp="2026-02-26 22:14:14 +0000 UTC" firstStartedPulling="2026-02-26 22:14:15.973128505 +0000 UTC m=+1141.052619046" lastFinishedPulling="2026-02-26 22:14:42.327588734 +0000 UTC m=+1167.407079285" observedRunningTime="2026-02-26 22:14:43.716723277 +0000 UTC m=+1168.796213818" watchObservedRunningTime="2026-02-26 22:14:43.718192727 +0000 UTC m=+1168.797683268" Feb 26 22:14:43 crc kubenswrapper[4910]: I0226 22:14:43.740282 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-r6zct" podStartSLOduration=4.200499417 podStartE2EDuration="29.740261901s" podCreationTimestamp="2026-02-26 22:14:14 +0000 UTC" firstStartedPulling="2026-02-26 22:14:16.676904524 +0000 UTC m=+1141.756395055" lastFinishedPulling="2026-02-26 22:14:42.216666998 +0000 UTC m=+1167.296157539" observedRunningTime="2026-02-26 22:14:43.737094005 +0000 UTC m=+1168.816584546" watchObservedRunningTime="2026-02-26 22:14:43.740261901 +0000 UTC m=+1168.819752442" Feb 26 22:14:44 crc kubenswrapper[4910]: I0226 22:14:44.967669 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-7n5lg" Feb 26 22:14:46 crc kubenswrapper[4910]: I0226 22:14:46.725112 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2" event={"ID":"12f2404a-45bb-416e-b4d4-da70f869fbbf","Type":"ContainerStarted","Data":"4a4eda925d0aa900c4073b6b1f725ecf38b8663f481084125ae5d4b5ddada463"} Feb 26 22:14:46 crc 
kubenswrapper[4910]: I0226 22:14:46.725623 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2" Feb 26 22:14:46 crc kubenswrapper[4910]: I0226 22:14:46.726499 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-dkwl5" event={"ID":"1e643756-1a6a-4654-af77-5b9d0f1433f2","Type":"ContainerStarted","Data":"407d92fcfaab9351eb3f0828cc84c42d0c6a73d76f0704af4c86d5752c69da02"} Feb 26 22:14:46 crc kubenswrapper[4910]: I0226 22:14:46.726686 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-dkwl5" Feb 26 22:14:46 crc kubenswrapper[4910]: I0226 22:14:46.728513 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-j9hll" event={"ID":"baa339f1-af25-4789-899c-b6ffed7c4ac0","Type":"ContainerStarted","Data":"b396508dcbba5c4392bbbb86903c0e6b4ef579134fb666a28fbccadd1d1aae5f"} Feb 26 22:14:46 crc kubenswrapper[4910]: I0226 22:14:46.728623 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-j9hll" Feb 26 22:14:46 crc kubenswrapper[4910]: I0226 22:14:46.729839 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-czqwc" event={"ID":"f93e1099-5db6-45f0-a344-5d05183572d1","Type":"ContainerStarted","Data":"725b8914132590fff549d66458a0bdb66fc9f5573188c1b678779be36e6c4b44"} Feb 26 22:14:46 crc kubenswrapper[4910]: I0226 22:14:46.729984 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-czqwc" Feb 26 22:14:46 crc kubenswrapper[4910]: I0226 22:14:46.770550 4910 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2" podStartSLOduration=28.813487496 podStartE2EDuration="32.770536156s" podCreationTimestamp="2026-02-26 22:14:14 +0000 UTC" firstStartedPulling="2026-02-26 22:14:42.08857931 +0000 UTC m=+1167.168069851" lastFinishedPulling="2026-02-26 22:14:46.04562796 +0000 UTC m=+1171.125118511" observedRunningTime="2026-02-26 22:14:46.762790594 +0000 UTC m=+1171.842281135" watchObservedRunningTime="2026-02-26 22:14:46.770536156 +0000 UTC m=+1171.850026697" Feb 26 22:14:46 crc kubenswrapper[4910]: I0226 22:14:46.789805 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-czqwc" podStartSLOduration=3.029471107 podStartE2EDuration="32.789786523s" podCreationTimestamp="2026-02-26 22:14:14 +0000 UTC" firstStartedPulling="2026-02-26 22:14:16.627368558 +0000 UTC m=+1141.706859099" lastFinishedPulling="2026-02-26 22:14:46.387683974 +0000 UTC m=+1171.467174515" observedRunningTime="2026-02-26 22:14:46.783652436 +0000 UTC m=+1171.863142977" watchObservedRunningTime="2026-02-26 22:14:46.789786523 +0000 UTC m=+1171.869277064" Feb 26 22:14:46 crc kubenswrapper[4910]: I0226 22:14:46.809613 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-j9hll" podStartSLOduration=28.837516655 podStartE2EDuration="32.809597065s" podCreationTimestamp="2026-02-26 22:14:14 +0000 UTC" firstStartedPulling="2026-02-26 22:14:42.088054486 +0000 UTC m=+1167.167545027" lastFinishedPulling="2026-02-26 22:14:46.060134906 +0000 UTC m=+1171.139625437" observedRunningTime="2026-02-26 22:14:46.802255345 +0000 UTC m=+1171.881745876" watchObservedRunningTime="2026-02-26 22:14:46.809597065 +0000 UTC m=+1171.889087606" Feb 26 22:14:46 crc kubenswrapper[4910]: I0226 22:14:46.824350 4910 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-dkwl5" podStartSLOduration=3.149687938 podStartE2EDuration="32.824330549s" podCreationTimestamp="2026-02-26 22:14:14 +0000 UTC" firstStartedPulling="2026-02-26 22:14:16.370988849 +0000 UTC m=+1141.450479390" lastFinishedPulling="2026-02-26 22:14:46.04563142 +0000 UTC m=+1171.125122001" observedRunningTime="2026-02-26 22:14:46.821349918 +0000 UTC m=+1171.900840459" watchObservedRunningTime="2026-02-26 22:14:46.824330549 +0000 UTC m=+1171.903821080" Feb 26 22:14:47 crc kubenswrapper[4910]: I0226 22:14:47.737596 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-24d89" event={"ID":"05b1662b-98cb-4867-9cf1-4272c173cf1f","Type":"ContainerStarted","Data":"7fbe23beef0fdc021fec03c587d267e1cf2902571f6988cc66fb2e459d94445a"} Feb 26 22:14:47 crc kubenswrapper[4910]: I0226 22:14:47.738092 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-24d89" Feb 26 22:14:47 crc kubenswrapper[4910]: I0226 22:14:47.752855 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-24d89" podStartSLOduration=3.069734979 podStartE2EDuration="33.75283653s" podCreationTimestamp="2026-02-26 22:14:14 +0000 UTC" firstStartedPulling="2026-02-26 22:14:16.675546897 +0000 UTC m=+1141.755037438" lastFinishedPulling="2026-02-26 22:14:47.358648438 +0000 UTC m=+1172.438138989" observedRunningTime="2026-02-26 22:14:47.749840539 +0000 UTC m=+1172.829331090" watchObservedRunningTime="2026-02-26 22:14:47.75283653 +0000 UTC m=+1172.832327091" Feb 26 22:14:51 crc kubenswrapper[4910]: I0226 22:14:51.098852 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2" Feb 26 22:14:51 crc 
kubenswrapper[4910]: I0226 22:14:51.108421 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-j9hll" Feb 26 22:14:51 crc kubenswrapper[4910]: I0226 22:14:51.662859 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-69ffc89494-n87vn" Feb 26 22:14:55 crc kubenswrapper[4910]: I0226 22:14:55.044049 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b7vkl" event={"ID":"f3c3500f-cc15-4b62-b13f-b99aeb97a413","Type":"ContainerStarted","Data":"13228554c32a44a1f9b62a0720b9a4c6d8c086a08c91af998291eadafc3eb30f"} Feb 26 22:14:55 crc kubenswrapper[4910]: I0226 22:14:55.049773 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-l7m5c" Feb 26 22:14:55 crc kubenswrapper[4910]: I0226 22:14:55.072306 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b7vkl" podStartSLOduration=2.607074481 podStartE2EDuration="40.072288977s" podCreationTimestamp="2026-02-26 22:14:15 +0000 UTC" firstStartedPulling="2026-02-26 22:14:16.858659571 +0000 UTC m=+1141.938150112" lastFinishedPulling="2026-02-26 22:14:54.323874027 +0000 UTC m=+1179.403364608" observedRunningTime="2026-02-26 22:14:55.071598849 +0000 UTC m=+1180.151089420" watchObservedRunningTime="2026-02-26 22:14:55.072288977 +0000 UTC m=+1180.151779518" Feb 26 22:14:55 crc kubenswrapper[4910]: I0226 22:14:55.094385 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mqd2x" Feb 26 22:14:55 crc kubenswrapper[4910]: I0226 22:14:55.181795 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-lkjvb" Feb 26 22:14:55 crc kubenswrapper[4910]: I0226 22:14:55.212822 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6pf2w" Feb 26 22:14:55 crc kubenswrapper[4910]: I0226 22:14:55.257414 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-dkwl5" Feb 26 22:14:55 crc kubenswrapper[4910]: I0226 22:14:55.274378 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-nrbns" Feb 26 22:14:55 crc kubenswrapper[4910]: I0226 22:14:55.285448 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-t4bqx" Feb 26 22:14:55 crc kubenswrapper[4910]: I0226 22:14:55.315583 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dt9gd" Feb 26 22:14:55 crc kubenswrapper[4910]: I0226 22:14:55.365249 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-v9ll5" Feb 26 22:14:55 crc kubenswrapper[4910]: I0226 22:14:55.378917 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-smv6m" Feb 26 22:14:55 crc kubenswrapper[4910]: I0226 22:14:55.401611 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-24d89" Feb 26 22:14:55 crc kubenswrapper[4910]: I0226 22:14:55.410436 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-csc7d" Feb 26 22:14:55 crc kubenswrapper[4910]: I0226 22:14:55.448094 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-r6zct" Feb 26 22:14:55 crc kubenswrapper[4910]: I0226 22:14:55.474643 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-vsr8s" Feb 26 22:14:55 crc kubenswrapper[4910]: I0226 22:14:55.500253 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-czqwc" Feb 26 22:14:55 crc kubenswrapper[4910]: I0226 22:14:55.524513 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85bcd67d77-jrqfq" Feb 26 22:14:55 crc kubenswrapper[4910]: I0226 22:14:55.621373 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4q6xr" Feb 26 22:14:55 crc kubenswrapper[4910]: I0226 22:14:55.721999 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-jjxzk" Feb 26 22:15:00 crc kubenswrapper[4910]: I0226 22:15:00.149604 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535735-6z7zw"] Feb 26 22:15:00 crc kubenswrapper[4910]: I0226 22:15:00.151666 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535735-6z7zw" Feb 26 22:15:00 crc kubenswrapper[4910]: I0226 22:15:00.156224 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 22:15:00 crc kubenswrapper[4910]: I0226 22:15:00.157456 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 22:15:00 crc kubenswrapper[4910]: I0226 22:15:00.171916 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535735-6z7zw"] Feb 26 22:15:00 crc kubenswrapper[4910]: I0226 22:15:00.238406 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e84cab4-c7ab-4e27-92f7-d908fca3c538-secret-volume\") pod \"collect-profiles-29535735-6z7zw\" (UID: \"8e84cab4-c7ab-4e27-92f7-d908fca3c538\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535735-6z7zw" Feb 26 22:15:00 crc kubenswrapper[4910]: I0226 22:15:00.238637 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv4hq\" (UniqueName: \"kubernetes.io/projected/8e84cab4-c7ab-4e27-92f7-d908fca3c538-kube-api-access-qv4hq\") pod \"collect-profiles-29535735-6z7zw\" (UID: \"8e84cab4-c7ab-4e27-92f7-d908fca3c538\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535735-6z7zw" Feb 26 22:15:00 crc kubenswrapper[4910]: I0226 22:15:00.238697 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e84cab4-c7ab-4e27-92f7-d908fca3c538-config-volume\") pod \"collect-profiles-29535735-6z7zw\" (UID: \"8e84cab4-c7ab-4e27-92f7-d908fca3c538\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29535735-6z7zw" Feb 26 22:15:00 crc kubenswrapper[4910]: I0226 22:15:00.340501 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e84cab4-c7ab-4e27-92f7-d908fca3c538-secret-volume\") pod \"collect-profiles-29535735-6z7zw\" (UID: \"8e84cab4-c7ab-4e27-92f7-d908fca3c538\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535735-6z7zw" Feb 26 22:15:00 crc kubenswrapper[4910]: I0226 22:15:00.340604 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv4hq\" (UniqueName: \"kubernetes.io/projected/8e84cab4-c7ab-4e27-92f7-d908fca3c538-kube-api-access-qv4hq\") pod \"collect-profiles-29535735-6z7zw\" (UID: \"8e84cab4-c7ab-4e27-92f7-d908fca3c538\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535735-6z7zw" Feb 26 22:15:00 crc kubenswrapper[4910]: I0226 22:15:00.340636 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e84cab4-c7ab-4e27-92f7-d908fca3c538-config-volume\") pod \"collect-profiles-29535735-6z7zw\" (UID: \"8e84cab4-c7ab-4e27-92f7-d908fca3c538\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535735-6z7zw" Feb 26 22:15:00 crc kubenswrapper[4910]: I0226 22:15:00.341645 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e84cab4-c7ab-4e27-92f7-d908fca3c538-config-volume\") pod \"collect-profiles-29535735-6z7zw\" (UID: \"8e84cab4-c7ab-4e27-92f7-d908fca3c538\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535735-6z7zw" Feb 26 22:15:00 crc kubenswrapper[4910]: I0226 22:15:00.356471 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/8e84cab4-c7ab-4e27-92f7-d908fca3c538-secret-volume\") pod \"collect-profiles-29535735-6z7zw\" (UID: \"8e84cab4-c7ab-4e27-92f7-d908fca3c538\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535735-6z7zw" Feb 26 22:15:00 crc kubenswrapper[4910]: I0226 22:15:00.364387 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv4hq\" (UniqueName: \"kubernetes.io/projected/8e84cab4-c7ab-4e27-92f7-d908fca3c538-kube-api-access-qv4hq\") pod \"collect-profiles-29535735-6z7zw\" (UID: \"8e84cab4-c7ab-4e27-92f7-d908fca3c538\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535735-6z7zw" Feb 26 22:15:00 crc kubenswrapper[4910]: I0226 22:15:00.488103 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535735-6z7zw" Feb 26 22:15:00 crc kubenswrapper[4910]: I0226 22:15:00.957259 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535735-6z7zw"] Feb 26 22:15:00 crc kubenswrapper[4910]: W0226 22:15:00.969508 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e84cab4_c7ab_4e27_92f7_d908fca3c538.slice/crio-738175e004ff97596830553f4c3a9540c6f8f49f97cb8c734bf0b9482935038e WatchSource:0}: Error finding container 738175e004ff97596830553f4c3a9540c6f8f49f97cb8c734bf0b9482935038e: Status 404 returned error can't find the container with id 738175e004ff97596830553f4c3a9540c6f8f49f97cb8c734bf0b9482935038e Feb 26 22:15:01 crc kubenswrapper[4910]: I0226 22:15:01.109327 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535735-6z7zw" event={"ID":"8e84cab4-c7ab-4e27-92f7-d908fca3c538","Type":"ContainerStarted","Data":"738175e004ff97596830553f4c3a9540c6f8f49f97cb8c734bf0b9482935038e"} Feb 26 22:15:02 crc 
kubenswrapper[4910]: I0226 22:15:02.121614 4910 generic.go:334] "Generic (PLEG): container finished" podID="8e84cab4-c7ab-4e27-92f7-d908fca3c538" containerID="e934e2987550b4a3a8eb155b5161985be24f9607a4b3c5f3a3c67e56f8da9634" exitCode=0 Feb 26 22:15:02 crc kubenswrapper[4910]: I0226 22:15:02.121755 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535735-6z7zw" event={"ID":"8e84cab4-c7ab-4e27-92f7-d908fca3c538","Type":"ContainerDied","Data":"e934e2987550b4a3a8eb155b5161985be24f9607a4b3c5f3a3c67e56f8da9634"} Feb 26 22:15:03 crc kubenswrapper[4910]: I0226 22:15:03.483583 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535735-6z7zw" Feb 26 22:15:03 crc kubenswrapper[4910]: I0226 22:15:03.595875 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e84cab4-c7ab-4e27-92f7-d908fca3c538-secret-volume\") pod \"8e84cab4-c7ab-4e27-92f7-d908fca3c538\" (UID: \"8e84cab4-c7ab-4e27-92f7-d908fca3c538\") " Feb 26 22:15:03 crc kubenswrapper[4910]: I0226 22:15:03.596006 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv4hq\" (UniqueName: \"kubernetes.io/projected/8e84cab4-c7ab-4e27-92f7-d908fca3c538-kube-api-access-qv4hq\") pod \"8e84cab4-c7ab-4e27-92f7-d908fca3c538\" (UID: \"8e84cab4-c7ab-4e27-92f7-d908fca3c538\") " Feb 26 22:15:03 crc kubenswrapper[4910]: I0226 22:15:03.596299 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e84cab4-c7ab-4e27-92f7-d908fca3c538-config-volume\") pod \"8e84cab4-c7ab-4e27-92f7-d908fca3c538\" (UID: \"8e84cab4-c7ab-4e27-92f7-d908fca3c538\") " Feb 26 22:15:03 crc kubenswrapper[4910]: I0226 22:15:03.597332 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/8e84cab4-c7ab-4e27-92f7-d908fca3c538-config-volume" (OuterVolumeSpecName: "config-volume") pod "8e84cab4-c7ab-4e27-92f7-d908fca3c538" (UID: "8e84cab4-c7ab-4e27-92f7-d908fca3c538"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:15:03 crc kubenswrapper[4910]: I0226 22:15:03.602230 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e84cab4-c7ab-4e27-92f7-d908fca3c538-kube-api-access-qv4hq" (OuterVolumeSpecName: "kube-api-access-qv4hq") pod "8e84cab4-c7ab-4e27-92f7-d908fca3c538" (UID: "8e84cab4-c7ab-4e27-92f7-d908fca3c538"). InnerVolumeSpecName "kube-api-access-qv4hq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:15:03 crc kubenswrapper[4910]: I0226 22:15:03.602383 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e84cab4-c7ab-4e27-92f7-d908fca3c538-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8e84cab4-c7ab-4e27-92f7-d908fca3c538" (UID: "8e84cab4-c7ab-4e27-92f7-d908fca3c538"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:15:03 crc kubenswrapper[4910]: I0226 22:15:03.698268 4910 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e84cab4-c7ab-4e27-92f7-d908fca3c538-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 22:15:03 crc kubenswrapper[4910]: I0226 22:15:03.698330 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv4hq\" (UniqueName: \"kubernetes.io/projected/8e84cab4-c7ab-4e27-92f7-d908fca3c538-kube-api-access-qv4hq\") on node \"crc\" DevicePath \"\"" Feb 26 22:15:03 crc kubenswrapper[4910]: I0226 22:15:03.698350 4910 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e84cab4-c7ab-4e27-92f7-d908fca3c538-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 22:15:04 crc kubenswrapper[4910]: I0226 22:15:04.144737 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535735-6z7zw" event={"ID":"8e84cab4-c7ab-4e27-92f7-d908fca3c538","Type":"ContainerDied","Data":"738175e004ff97596830553f4c3a9540c6f8f49f97cb8c734bf0b9482935038e"} Feb 26 22:15:04 crc kubenswrapper[4910]: I0226 22:15:04.144809 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="738175e004ff97596830553f4c3a9540c6f8f49f97cb8c734bf0b9482935038e" Feb 26 22:15:04 crc kubenswrapper[4910]: I0226 22:15:04.144829 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535735-6z7zw" Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.081028 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tkzht"] Feb 26 22:15:14 crc kubenswrapper[4910]: E0226 22:15:14.081992 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e84cab4-c7ab-4e27-92f7-d908fca3c538" containerName="collect-profiles" Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.082011 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e84cab4-c7ab-4e27-92f7-d908fca3c538" containerName="collect-profiles" Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.082214 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e84cab4-c7ab-4e27-92f7-d908fca3c538" containerName="collect-profiles" Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.083179 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tkzht" Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.087318 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.087520 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.087724 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.087920 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-fr8zk" Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.093903 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/648c2ee7-6abc-404d-b181-c9f6047ef56e-config\") pod 
\"dnsmasq-dns-675f4bcbfc-tkzht\" (UID: \"648c2ee7-6abc-404d-b181-c9f6047ef56e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tkzht" Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.094055 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s6gz\" (UniqueName: \"kubernetes.io/projected/648c2ee7-6abc-404d-b181-c9f6047ef56e-kube-api-access-4s6gz\") pod \"dnsmasq-dns-675f4bcbfc-tkzht\" (UID: \"648c2ee7-6abc-404d-b181-c9f6047ef56e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tkzht" Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.095456 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tkzht"] Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.161651 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-bkf76"] Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.164723 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-bkf76" Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.166541 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.177301 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-bkf76"] Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.197309 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3feed117-9b62-4ea3-8c99-b588473c5042-config\") pod \"dnsmasq-dns-78dd6ddcc-bkf76\" (UID: \"3feed117-9b62-4ea3-8c99-b588473c5042\") " pod="openstack/dnsmasq-dns-78dd6ddcc-bkf76" Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.197479 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dltr\" (UniqueName: 
\"kubernetes.io/projected/3feed117-9b62-4ea3-8c99-b588473c5042-kube-api-access-9dltr\") pod \"dnsmasq-dns-78dd6ddcc-bkf76\" (UID: \"3feed117-9b62-4ea3-8c99-b588473c5042\") " pod="openstack/dnsmasq-dns-78dd6ddcc-bkf76" Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.197513 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3feed117-9b62-4ea3-8c99-b588473c5042-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-bkf76\" (UID: \"3feed117-9b62-4ea3-8c99-b588473c5042\") " pod="openstack/dnsmasq-dns-78dd6ddcc-bkf76" Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.197738 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s6gz\" (UniqueName: \"kubernetes.io/projected/648c2ee7-6abc-404d-b181-c9f6047ef56e-kube-api-access-4s6gz\") pod \"dnsmasq-dns-675f4bcbfc-tkzht\" (UID: \"648c2ee7-6abc-404d-b181-c9f6047ef56e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tkzht" Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.197792 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/648c2ee7-6abc-404d-b181-c9f6047ef56e-config\") pod \"dnsmasq-dns-675f4bcbfc-tkzht\" (UID: \"648c2ee7-6abc-404d-b181-c9f6047ef56e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tkzht" Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.198810 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/648c2ee7-6abc-404d-b181-c9f6047ef56e-config\") pod \"dnsmasq-dns-675f4bcbfc-tkzht\" (UID: \"648c2ee7-6abc-404d-b181-c9f6047ef56e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tkzht" Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.218297 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s6gz\" (UniqueName: 
\"kubernetes.io/projected/648c2ee7-6abc-404d-b181-c9f6047ef56e-kube-api-access-4s6gz\") pod \"dnsmasq-dns-675f4bcbfc-tkzht\" (UID: \"648c2ee7-6abc-404d-b181-c9f6047ef56e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tkzht" Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.298879 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3feed117-9b62-4ea3-8c99-b588473c5042-config\") pod \"dnsmasq-dns-78dd6ddcc-bkf76\" (UID: \"3feed117-9b62-4ea3-8c99-b588473c5042\") " pod="openstack/dnsmasq-dns-78dd6ddcc-bkf76" Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.298939 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dltr\" (UniqueName: \"kubernetes.io/projected/3feed117-9b62-4ea3-8c99-b588473c5042-kube-api-access-9dltr\") pod \"dnsmasq-dns-78dd6ddcc-bkf76\" (UID: \"3feed117-9b62-4ea3-8c99-b588473c5042\") " pod="openstack/dnsmasq-dns-78dd6ddcc-bkf76" Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.298960 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3feed117-9b62-4ea3-8c99-b588473c5042-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-bkf76\" (UID: \"3feed117-9b62-4ea3-8c99-b588473c5042\") " pod="openstack/dnsmasq-dns-78dd6ddcc-bkf76" Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.299785 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3feed117-9b62-4ea3-8c99-b588473c5042-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-bkf76\" (UID: \"3feed117-9b62-4ea3-8c99-b588473c5042\") " pod="openstack/dnsmasq-dns-78dd6ddcc-bkf76" Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.300289 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3feed117-9b62-4ea3-8c99-b588473c5042-config\") pod \"dnsmasq-dns-78dd6ddcc-bkf76\" 
(UID: \"3feed117-9b62-4ea3-8c99-b588473c5042\") " pod="openstack/dnsmasq-dns-78dd6ddcc-bkf76" Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.326791 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dltr\" (UniqueName: \"kubernetes.io/projected/3feed117-9b62-4ea3-8c99-b588473c5042-kube-api-access-9dltr\") pod \"dnsmasq-dns-78dd6ddcc-bkf76\" (UID: \"3feed117-9b62-4ea3-8c99-b588473c5042\") " pod="openstack/dnsmasq-dns-78dd6ddcc-bkf76" Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.413762 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tkzht" Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.492387 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-bkf76" Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.772995 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-bkf76"] Feb 26 22:15:14 crc kubenswrapper[4910]: I0226 22:15:14.945055 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tkzht"] Feb 26 22:15:15 crc kubenswrapper[4910]: I0226 22:15:15.256591 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-bkf76" event={"ID":"3feed117-9b62-4ea3-8c99-b588473c5042","Type":"ContainerStarted","Data":"3fe1ce8e35fa13cd88a78e15be8ec9ddc07e8a0f21f6bbb34ed850de9be514ac"} Feb 26 22:15:15 crc kubenswrapper[4910]: I0226 22:15:15.259031 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-tkzht" event={"ID":"648c2ee7-6abc-404d-b181-c9f6047ef56e","Type":"ContainerStarted","Data":"797dd7276a85540ba1df5ee760138064484c0005a2fce36f50bdc25ca4c2e883"} Feb 26 22:15:16 crc kubenswrapper[4910]: I0226 22:15:16.898840 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tkzht"] Feb 26 22:15:16 crc 
kubenswrapper[4910]: I0226 22:15:16.928918 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-g79xb"] Feb 26 22:15:16 crc kubenswrapper[4910]: I0226 22:15:16.929985 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-g79xb" Feb 26 22:15:16 crc kubenswrapper[4910]: I0226 22:15:16.954896 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-g79xb"] Feb 26 22:15:17 crc kubenswrapper[4910]: I0226 22:15:17.054216 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90d460eb-2bb4-4c1b-a89d-04d1313985f4-config\") pod \"dnsmasq-dns-666b6646f7-g79xb\" (UID: \"90d460eb-2bb4-4c1b-a89d-04d1313985f4\") " pod="openstack/dnsmasq-dns-666b6646f7-g79xb" Feb 26 22:15:17 crc kubenswrapper[4910]: I0226 22:15:17.054294 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb8cv\" (UniqueName: \"kubernetes.io/projected/90d460eb-2bb4-4c1b-a89d-04d1313985f4-kube-api-access-sb8cv\") pod \"dnsmasq-dns-666b6646f7-g79xb\" (UID: \"90d460eb-2bb4-4c1b-a89d-04d1313985f4\") " pod="openstack/dnsmasq-dns-666b6646f7-g79xb" Feb 26 22:15:17 crc kubenswrapper[4910]: I0226 22:15:17.054326 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90d460eb-2bb4-4c1b-a89d-04d1313985f4-dns-svc\") pod \"dnsmasq-dns-666b6646f7-g79xb\" (UID: \"90d460eb-2bb4-4c1b-a89d-04d1313985f4\") " pod="openstack/dnsmasq-dns-666b6646f7-g79xb" Feb 26 22:15:17 crc kubenswrapper[4910]: I0226 22:15:17.156067 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb8cv\" (UniqueName: \"kubernetes.io/projected/90d460eb-2bb4-4c1b-a89d-04d1313985f4-kube-api-access-sb8cv\") pod 
\"dnsmasq-dns-666b6646f7-g79xb\" (UID: \"90d460eb-2bb4-4c1b-a89d-04d1313985f4\") " pod="openstack/dnsmasq-dns-666b6646f7-g79xb" Feb 26 22:15:17 crc kubenswrapper[4910]: I0226 22:15:17.156128 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90d460eb-2bb4-4c1b-a89d-04d1313985f4-dns-svc\") pod \"dnsmasq-dns-666b6646f7-g79xb\" (UID: \"90d460eb-2bb4-4c1b-a89d-04d1313985f4\") " pod="openstack/dnsmasq-dns-666b6646f7-g79xb" Feb 26 22:15:17 crc kubenswrapper[4910]: I0226 22:15:17.156228 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90d460eb-2bb4-4c1b-a89d-04d1313985f4-config\") pod \"dnsmasq-dns-666b6646f7-g79xb\" (UID: \"90d460eb-2bb4-4c1b-a89d-04d1313985f4\") " pod="openstack/dnsmasq-dns-666b6646f7-g79xb" Feb 26 22:15:17 crc kubenswrapper[4910]: I0226 22:15:17.157211 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90d460eb-2bb4-4c1b-a89d-04d1313985f4-config\") pod \"dnsmasq-dns-666b6646f7-g79xb\" (UID: \"90d460eb-2bb4-4c1b-a89d-04d1313985f4\") " pod="openstack/dnsmasq-dns-666b6646f7-g79xb" Feb 26 22:15:17 crc kubenswrapper[4910]: I0226 22:15:17.157396 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90d460eb-2bb4-4c1b-a89d-04d1313985f4-dns-svc\") pod \"dnsmasq-dns-666b6646f7-g79xb\" (UID: \"90d460eb-2bb4-4c1b-a89d-04d1313985f4\") " pod="openstack/dnsmasq-dns-666b6646f7-g79xb" Feb 26 22:15:17 crc kubenswrapper[4910]: I0226 22:15:17.208630 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb8cv\" (UniqueName: \"kubernetes.io/projected/90d460eb-2bb4-4c1b-a89d-04d1313985f4-kube-api-access-sb8cv\") pod \"dnsmasq-dns-666b6646f7-g79xb\" (UID: \"90d460eb-2bb4-4c1b-a89d-04d1313985f4\") " 
pod="openstack/dnsmasq-dns-666b6646f7-g79xb" Feb 26 22:15:17 crc kubenswrapper[4910]: I0226 22:15:17.256342 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-g79xb" Feb 26 22:15:17 crc kubenswrapper[4910]: I0226 22:15:17.273325 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-bkf76"] Feb 26 22:15:17 crc kubenswrapper[4910]: I0226 22:15:17.304099 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gjwx5"] Feb 26 22:15:17 crc kubenswrapper[4910]: I0226 22:15:17.305350 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gjwx5" Feb 26 22:15:17 crc kubenswrapper[4910]: I0226 22:15:17.317772 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gjwx5"] Feb 26 22:15:17 crc kubenswrapper[4910]: I0226 22:15:17.461449 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1dd3507-57fc-4def-b3ef-41ce5a23f786-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gjwx5\" (UID: \"e1dd3507-57fc-4def-b3ef-41ce5a23f786\") " pod="openstack/dnsmasq-dns-57d769cc4f-gjwx5" Feb 26 22:15:17 crc kubenswrapper[4910]: I0226 22:15:17.461510 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9lgb\" (UniqueName: \"kubernetes.io/projected/e1dd3507-57fc-4def-b3ef-41ce5a23f786-kube-api-access-k9lgb\") pod \"dnsmasq-dns-57d769cc4f-gjwx5\" (UID: \"e1dd3507-57fc-4def-b3ef-41ce5a23f786\") " pod="openstack/dnsmasq-dns-57d769cc4f-gjwx5" Feb 26 22:15:17 crc kubenswrapper[4910]: I0226 22:15:17.461570 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1dd3507-57fc-4def-b3ef-41ce5a23f786-config\") pod 
\"dnsmasq-dns-57d769cc4f-gjwx5\" (UID: \"e1dd3507-57fc-4def-b3ef-41ce5a23f786\") " pod="openstack/dnsmasq-dns-57d769cc4f-gjwx5" Feb 26 22:15:17 crc kubenswrapper[4910]: I0226 22:15:17.563960 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1dd3507-57fc-4def-b3ef-41ce5a23f786-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gjwx5\" (UID: \"e1dd3507-57fc-4def-b3ef-41ce5a23f786\") " pod="openstack/dnsmasq-dns-57d769cc4f-gjwx5" Feb 26 22:15:17 crc kubenswrapper[4910]: I0226 22:15:17.564270 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9lgb\" (UniqueName: \"kubernetes.io/projected/e1dd3507-57fc-4def-b3ef-41ce5a23f786-kube-api-access-k9lgb\") pod \"dnsmasq-dns-57d769cc4f-gjwx5\" (UID: \"e1dd3507-57fc-4def-b3ef-41ce5a23f786\") " pod="openstack/dnsmasq-dns-57d769cc4f-gjwx5" Feb 26 22:15:17 crc kubenswrapper[4910]: I0226 22:15:17.564398 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1dd3507-57fc-4def-b3ef-41ce5a23f786-config\") pod \"dnsmasq-dns-57d769cc4f-gjwx5\" (UID: \"e1dd3507-57fc-4def-b3ef-41ce5a23f786\") " pod="openstack/dnsmasq-dns-57d769cc4f-gjwx5" Feb 26 22:15:17 crc kubenswrapper[4910]: I0226 22:15:17.565298 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1dd3507-57fc-4def-b3ef-41ce5a23f786-config\") pod \"dnsmasq-dns-57d769cc4f-gjwx5\" (UID: \"e1dd3507-57fc-4def-b3ef-41ce5a23f786\") " pod="openstack/dnsmasq-dns-57d769cc4f-gjwx5" Feb 26 22:15:17 crc kubenswrapper[4910]: I0226 22:15:17.566088 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1dd3507-57fc-4def-b3ef-41ce5a23f786-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gjwx5\" (UID: \"e1dd3507-57fc-4def-b3ef-41ce5a23f786\") " 
pod="openstack/dnsmasq-dns-57d769cc4f-gjwx5" Feb 26 22:15:17 crc kubenswrapper[4910]: I0226 22:15:17.582242 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9lgb\" (UniqueName: \"kubernetes.io/projected/e1dd3507-57fc-4def-b3ef-41ce5a23f786-kube-api-access-k9lgb\") pod \"dnsmasq-dns-57d769cc4f-gjwx5\" (UID: \"e1dd3507-57fc-4def-b3ef-41ce5a23f786\") " pod="openstack/dnsmasq-dns-57d769cc4f-gjwx5" Feb 26 22:15:17 crc kubenswrapper[4910]: I0226 22:15:17.683835 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gjwx5" Feb 26 22:15:17 crc kubenswrapper[4910]: I0226 22:15:17.847675 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-g79xb"] Feb 26 22:15:17 crc kubenswrapper[4910]: W0226 22:15:17.869085 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90d460eb_2bb4_4c1b_a89d_04d1313985f4.slice/crio-4089d4b3aa04fff80383851f04ee33574f15a53b27e7adf089b309b273756e55 WatchSource:0}: Error finding container 4089d4b3aa04fff80383851f04ee33574f15a53b27e7adf089b309b273756e55: Status 404 returned error can't find the container with id 4089d4b3aa04fff80383851f04ee33574f15a53b27e7adf089b309b273756e55 Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.113291 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gjwx5"] Feb 26 22:15:18 crc kubenswrapper[4910]: W0226 22:15:18.119226 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1dd3507_57fc_4def_b3ef_41ce5a23f786.slice/crio-24d88a76c0d22a6e593c562cab1c14ca2d1ebb155b3601c0d135653b00b90bf9 WatchSource:0}: Error finding container 24d88a76c0d22a6e593c562cab1c14ca2d1ebb155b3601c0d135653b00b90bf9: Status 404 returned error can't find the container with id 
24d88a76c0d22a6e593c562cab1c14ca2d1ebb155b3601c0d135653b00b90bf9 Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.142765 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.144267 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.146507 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.146723 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.147182 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.147391 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.147887 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.148377 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.148843 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ddj2c" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.162490 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.276598 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/48cec592-3a36-46fc-813d-bf8fa5212e89-rabbitmq-tls\") pod 
\"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.276648 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/48cec592-3a36-46fc-813d-bf8fa5212e89-pod-info\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.276695 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/48cec592-3a36-46fc-813d-bf8fa5212e89-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.276719 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/48cec592-3a36-46fc-813d-bf8fa5212e89-config-data\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.276740 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/48cec592-3a36-46fc-813d-bf8fa5212e89-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.276756 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/48cec592-3a36-46fc-813d-bf8fa5212e89-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " 
pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.276963 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/48cec592-3a36-46fc-813d-bf8fa5212e89-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.277035 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-38ab41ea-5004-4a99-bb29-739bde4c1520\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38ab41ea-5004-4a99-bb29-739bde4c1520\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.277095 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/48cec592-3a36-46fc-813d-bf8fa5212e89-server-conf\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.277111 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kst6m\" (UniqueName: \"kubernetes.io/projected/48cec592-3a36-46fc-813d-bf8fa5212e89-kube-api-access-kst6m\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.277135 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/48cec592-3a36-46fc-813d-bf8fa5212e89-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " 
pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.306700 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-g79xb" event={"ID":"90d460eb-2bb4-4c1b-a89d-04d1313985f4","Type":"ContainerStarted","Data":"4089d4b3aa04fff80383851f04ee33574f15a53b27e7adf089b309b273756e55"} Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.309984 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gjwx5" event={"ID":"e1dd3507-57fc-4def-b3ef-41ce5a23f786","Type":"ContainerStarted","Data":"24d88a76c0d22a6e593c562cab1c14ca2d1ebb155b3601c0d135653b00b90bf9"} Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.378474 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/48cec592-3a36-46fc-813d-bf8fa5212e89-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.378531 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-38ab41ea-5004-4a99-bb29-739bde4c1520\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38ab41ea-5004-4a99-bb29-739bde4c1520\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.378561 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/48cec592-3a36-46fc-813d-bf8fa5212e89-server-conf\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.378666 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kst6m\" (UniqueName: 
\"kubernetes.io/projected/48cec592-3a36-46fc-813d-bf8fa5212e89-kube-api-access-kst6m\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.378691 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/48cec592-3a36-46fc-813d-bf8fa5212e89-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.378738 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/48cec592-3a36-46fc-813d-bf8fa5212e89-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.378766 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/48cec592-3a36-46fc-813d-bf8fa5212e89-pod-info\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.378805 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/48cec592-3a36-46fc-813d-bf8fa5212e89-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.378843 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/48cec592-3a36-46fc-813d-bf8fa5212e89-config-data\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " 
pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.378874 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/48cec592-3a36-46fc-813d-bf8fa5212e89-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.378894 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/48cec592-3a36-46fc-813d-bf8fa5212e89-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.378991 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/48cec592-3a36-46fc-813d-bf8fa5212e89-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.380012 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/48cec592-3a36-46fc-813d-bf8fa5212e89-config-data\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.380283 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/48cec592-3a36-46fc-813d-bf8fa5212e89-server-conf\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.382485 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" 
(UniqueName: \"kubernetes.io/configmap/48cec592-3a36-46fc-813d-bf8fa5212e89-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.384058 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/48cec592-3a36-46fc-813d-bf8fa5212e89-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.384366 4910 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.384400 4910 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-38ab41ea-5004-4a99-bb29-739bde4c1520\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38ab41ea-5004-4a99-bb29-739bde4c1520\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3035ed83486c60d032b5a441bf8b23828742cdd52b6aade92b200a795655bf3e/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.391252 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/48cec592-3a36-46fc-813d-bf8fa5212e89-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.392926 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/48cec592-3a36-46fc-813d-bf8fa5212e89-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.402048 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/48cec592-3a36-46fc-813d-bf8fa5212e89-pod-info\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.404315 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kst6m\" (UniqueName: \"kubernetes.io/projected/48cec592-3a36-46fc-813d-bf8fa5212e89-kube-api-access-kst6m\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.412026 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.415646 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.420401 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.420551 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.420628 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.420750 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.420936 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.421100 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-tnj7p" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.421251 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.423449 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/48cec592-3a36-46fc-813d-bf8fa5212e89-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.427176 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.439036 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-38ab41ea-5004-4a99-bb29-739bde4c1520\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38ab41ea-5004-4a99-bb29-739bde4c1520\") pod \"rabbitmq-server-0\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.480471 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.583217 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f98f3d3a-39ee-4b35-8653-ae334df58fca-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.583273 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f98f3d3a-39ee-4b35-8653-ae334df58fca-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.583303 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f98f3d3a-39ee-4b35-8653-ae334df58fca-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.583361 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x94w7\" (UniqueName: \"kubernetes.io/projected/f98f3d3a-39ee-4b35-8653-ae334df58fca-kube-api-access-x94w7\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 
22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.583382 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f98f3d3a-39ee-4b35-8653-ae334df58fca-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.583420 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f98f3d3a-39ee-4b35-8653-ae334df58fca-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.583434 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f98f3d3a-39ee-4b35-8653-ae334df58fca-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.583454 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f98f3d3a-39ee-4b35-8653-ae334df58fca-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.583483 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b4973784-2c8e-4725-bfc4-64e42ff04268\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4973784-2c8e-4725-bfc4-64e42ff04268\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.583507 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f98f3d3a-39ee-4b35-8653-ae334df58fca-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.583523 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f98f3d3a-39ee-4b35-8653-ae334df58fca-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.708356 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f98f3d3a-39ee-4b35-8653-ae334df58fca-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.708623 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b4973784-2c8e-4725-bfc4-64e42ff04268\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4973784-2c8e-4725-bfc4-64e42ff04268\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.708652 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f98f3d3a-39ee-4b35-8653-ae334df58fca-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.708670 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f98f3d3a-39ee-4b35-8653-ae334df58fca-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.708712 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f98f3d3a-39ee-4b35-8653-ae334df58fca-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.708738 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f98f3d3a-39ee-4b35-8653-ae334df58fca-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.708790 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f98f3d3a-39ee-4b35-8653-ae334df58fca-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.708815 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x94w7\" (UniqueName: \"kubernetes.io/projected/f98f3d3a-39ee-4b35-8653-ae334df58fca-kube-api-access-x94w7\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc 
kubenswrapper[4910]: I0226 22:15:18.708839 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f98f3d3a-39ee-4b35-8653-ae334df58fca-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.708873 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f98f3d3a-39ee-4b35-8653-ae334df58fca-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.708891 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f98f3d3a-39ee-4b35-8653-ae334df58fca-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.710715 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f98f3d3a-39ee-4b35-8653-ae334df58fca-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.711604 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f98f3d3a-39ee-4b35-8653-ae334df58fca-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.713381 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f98f3d3a-39ee-4b35-8653-ae334df58fca-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.715365 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f98f3d3a-39ee-4b35-8653-ae334df58fca-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.715604 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f98f3d3a-39ee-4b35-8653-ae334df58fca-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.715746 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f98f3d3a-39ee-4b35-8653-ae334df58fca-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.715951 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f98f3d3a-39ee-4b35-8653-ae334df58fca-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.718533 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f98f3d3a-39ee-4b35-8653-ae334df58fca-rabbitmq-tls\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.718858 4910 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.718882 4910 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b4973784-2c8e-4725-bfc4-64e42ff04268\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4973784-2c8e-4725-bfc4-64e42ff04268\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/728d4105af9584126f0fc1a781d59af315837d78da67022a7916c5e8477a32ea/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.719792 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f98f3d3a-39ee-4b35-8653-ae334df58fca-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.736096 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x94w7\" (UniqueName: \"kubernetes.io/projected/f98f3d3a-39ee-4b35-8653-ae334df58fca-kube-api-access-x94w7\") pod \"rabbitmq-cell1-server-0\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.759228 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b4973784-2c8e-4725-bfc4-64e42ff04268\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4973784-2c8e-4725-bfc4-64e42ff04268\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:18 crc kubenswrapper[4910]: I0226 22:15:18.771470 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.048379 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 22:15:19 crc kubenswrapper[4910]: W0226 22:15:19.054509 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48cec592_3a36_46fc_813d_bf8fa5212e89.slice/crio-e6a8b6c68847bb048f1665671ddd86908fcebba5217932bfbf05505ea91b7553 WatchSource:0}: Error finding container e6a8b6c68847bb048f1665671ddd86908fcebba5217932bfbf05505ea91b7553: Status 404 returned error can't find the container with id e6a8b6c68847bb048f1665671ddd86908fcebba5217932bfbf05505ea91b7553 Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.276029 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 22:15:19 crc kubenswrapper[4910]: W0226 22:15:19.306357 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf98f3d3a_39ee_4b35_8653_ae334df58fca.slice/crio-c3ee1d0ebff3456c208f3b1653cd7a56e0bc1523fb56dbe5ab2b978827841483 WatchSource:0}: Error finding container c3ee1d0ebff3456c208f3b1653cd7a56e0bc1523fb56dbe5ab2b978827841483: Status 404 returned error can't find the container with id c3ee1d0ebff3456c208f3b1653cd7a56e0bc1523fb56dbe5ab2b978827841483 Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.332203 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f98f3d3a-39ee-4b35-8653-ae334df58fca","Type":"ContainerStarted","Data":"c3ee1d0ebff3456c208f3b1653cd7a56e0bc1523fb56dbe5ab2b978827841483"} Feb 26 22:15:19 crc 
kubenswrapper[4910]: I0226 22:15:19.341767 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"48cec592-3a36-46fc-813d-bf8fa5212e89","Type":"ContainerStarted","Data":"e6a8b6c68847bb048f1665671ddd86908fcebba5217932bfbf05505ea91b7553"}
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.496876 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.499829 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.502153 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.502477 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.508196 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.520287 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.526311 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-dct4h"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.552917 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.624635 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3994e44-ac9f-4f93-97cf-9ad02cdc61e6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f3994e44-ac9f-4f93-97cf-9ad02cdc61e6\") " pod="openstack/openstack-galera-0"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.624708 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6b1c449e-1cbb-4197-915a-6f1a213a340d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b1c449e-1cbb-4197-915a-6f1a213a340d\") pod \"openstack-galera-0\" (UID: \"f3994e44-ac9f-4f93-97cf-9ad02cdc61e6\") " pod="openstack/openstack-galera-0"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.624749 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f3994e44-ac9f-4f93-97cf-9ad02cdc61e6-config-data-default\") pod \"openstack-galera-0\" (UID: \"f3994e44-ac9f-4f93-97cf-9ad02cdc61e6\") " pod="openstack/openstack-galera-0"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.624815 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f3994e44-ac9f-4f93-97cf-9ad02cdc61e6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f3994e44-ac9f-4f93-97cf-9ad02cdc61e6\") " pod="openstack/openstack-galera-0"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.624951 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f3994e44-ac9f-4f93-97cf-9ad02cdc61e6-kolla-config\") pod \"openstack-galera-0\" (UID: \"f3994e44-ac9f-4f93-97cf-9ad02cdc61e6\") " pod="openstack/openstack-galera-0"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.624999 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3994e44-ac9f-4f93-97cf-9ad02cdc61e6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f3994e44-ac9f-4f93-97cf-9ad02cdc61e6\") " pod="openstack/openstack-galera-0"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.625022 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xtq5\" (UniqueName: \"kubernetes.io/projected/f3994e44-ac9f-4f93-97cf-9ad02cdc61e6-kube-api-access-2xtq5\") pod \"openstack-galera-0\" (UID: \"f3994e44-ac9f-4f93-97cf-9ad02cdc61e6\") " pod="openstack/openstack-galera-0"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.625048 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3994e44-ac9f-4f93-97cf-9ad02cdc61e6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f3994e44-ac9f-4f93-97cf-9ad02cdc61e6\") " pod="openstack/openstack-galera-0"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.726833 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f3994e44-ac9f-4f93-97cf-9ad02cdc61e6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f3994e44-ac9f-4f93-97cf-9ad02cdc61e6\") " pod="openstack/openstack-galera-0"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.726894 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f3994e44-ac9f-4f93-97cf-9ad02cdc61e6-kolla-config\") pod \"openstack-galera-0\" (UID: \"f3994e44-ac9f-4f93-97cf-9ad02cdc61e6\") " pod="openstack/openstack-galera-0"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.726935 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3994e44-ac9f-4f93-97cf-9ad02cdc61e6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f3994e44-ac9f-4f93-97cf-9ad02cdc61e6\") " pod="openstack/openstack-galera-0"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.726949 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xtq5\" (UniqueName: \"kubernetes.io/projected/f3994e44-ac9f-4f93-97cf-9ad02cdc61e6-kube-api-access-2xtq5\") pod \"openstack-galera-0\" (UID: \"f3994e44-ac9f-4f93-97cf-9ad02cdc61e6\") " pod="openstack/openstack-galera-0"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.726968 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3994e44-ac9f-4f93-97cf-9ad02cdc61e6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f3994e44-ac9f-4f93-97cf-9ad02cdc61e6\") " pod="openstack/openstack-galera-0"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.727020 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3994e44-ac9f-4f93-97cf-9ad02cdc61e6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f3994e44-ac9f-4f93-97cf-9ad02cdc61e6\") " pod="openstack/openstack-galera-0"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.727045 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6b1c449e-1cbb-4197-915a-6f1a213a340d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b1c449e-1cbb-4197-915a-6f1a213a340d\") pod \"openstack-galera-0\" (UID: \"f3994e44-ac9f-4f93-97cf-9ad02cdc61e6\") " pod="openstack/openstack-galera-0"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.727073 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f3994e44-ac9f-4f93-97cf-9ad02cdc61e6-config-data-default\") pod \"openstack-galera-0\" (UID: \"f3994e44-ac9f-4f93-97cf-9ad02cdc61e6\") " pod="openstack/openstack-galera-0"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.728337 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f3994e44-ac9f-4f93-97cf-9ad02cdc61e6-config-data-default\") pod \"openstack-galera-0\" (UID: \"f3994e44-ac9f-4f93-97cf-9ad02cdc61e6\") " pod="openstack/openstack-galera-0"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.728472 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f3994e44-ac9f-4f93-97cf-9ad02cdc61e6-kolla-config\") pod \"openstack-galera-0\" (UID: \"f3994e44-ac9f-4f93-97cf-9ad02cdc61e6\") " pod="openstack/openstack-galera-0"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.729222 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3994e44-ac9f-4f93-97cf-9ad02cdc61e6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f3994e44-ac9f-4f93-97cf-9ad02cdc61e6\") " pod="openstack/openstack-galera-0"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.730727 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f3994e44-ac9f-4f93-97cf-9ad02cdc61e6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f3994e44-ac9f-4f93-97cf-9ad02cdc61e6\") " pod="openstack/openstack-galera-0"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.735450 4910 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.735476 4910 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6b1c449e-1cbb-4197-915a-6f1a213a340d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b1c449e-1cbb-4197-915a-6f1a213a340d\") pod \"openstack-galera-0\" (UID: \"f3994e44-ac9f-4f93-97cf-9ad02cdc61e6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3ee6f39605bf18bce93804aba9246e5b33af1c31e2ed12bc5605d658bb5b8884/globalmount\"" pod="openstack/openstack-galera-0"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.738458 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3994e44-ac9f-4f93-97cf-9ad02cdc61e6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f3994e44-ac9f-4f93-97cf-9ad02cdc61e6\") " pod="openstack/openstack-galera-0"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.741131 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3994e44-ac9f-4f93-97cf-9ad02cdc61e6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f3994e44-ac9f-4f93-97cf-9ad02cdc61e6\") " pod="openstack/openstack-galera-0"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.745930 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xtq5\" (UniqueName: \"kubernetes.io/projected/f3994e44-ac9f-4f93-97cf-9ad02cdc61e6-kube-api-access-2xtq5\") pod \"openstack-galera-0\" (UID: \"f3994e44-ac9f-4f93-97cf-9ad02cdc61e6\") " pod="openstack/openstack-galera-0"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.799593 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6b1c449e-1cbb-4197-915a-6f1a213a340d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b1c449e-1cbb-4197-915a-6f1a213a340d\") pod \"openstack-galera-0\" (UID: \"f3994e44-ac9f-4f93-97cf-9ad02cdc61e6\") " pod="openstack/openstack-galera-0"
Feb 26 22:15:19 crc kubenswrapper[4910]: I0226 22:15:19.842760 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 26 22:15:20 crc kubenswrapper[4910]: I0226 22:15:20.295708 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 26 22:15:20 crc kubenswrapper[4910]: W0226 22:15:20.303281 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3994e44_ac9f_4f93_97cf_9ad02cdc61e6.slice/crio-382c0ff72aeaf039b06d79afef547890b238585fa8cfb4859b726972a1d47a3e WatchSource:0}: Error finding container 382c0ff72aeaf039b06d79afef547890b238585fa8cfb4859b726972a1d47a3e: Status 404 returned error can't find the container with id 382c0ff72aeaf039b06d79afef547890b238585fa8cfb4859b726972a1d47a3e
Feb 26 22:15:20 crc kubenswrapper[4910]: I0226 22:15:20.351575 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f3994e44-ac9f-4f93-97cf-9ad02cdc61e6","Type":"ContainerStarted","Data":"382c0ff72aeaf039b06d79afef547890b238585fa8cfb4859b726972a1d47a3e"}
Feb 26 22:15:20 crc kubenswrapper[4910]: I0226 22:15:20.858562 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 26 22:15:20 crc kubenswrapper[4910]: I0226 22:15:20.859991 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 26 22:15:20 crc kubenswrapper[4910]: I0226 22:15:20.864656 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 26 22:15:20 crc kubenswrapper[4910]: I0226 22:15:20.864884 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 26 22:15:20 crc kubenswrapper[4910]: I0226 22:15:20.865001 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 26 22:15:20 crc kubenswrapper[4910]: I0226 22:15:20.865037 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-lzkcb"
Feb 26 22:15:20 crc kubenswrapper[4910]: I0226 22:15:20.872395 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 26 22:15:20 crc kubenswrapper[4910]: I0226 22:15:20.946307 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/25f6d388-f925-4c92-9298-a454dd536aa6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"25f6d388-f925-4c92-9298-a454dd536aa6\") " pod="openstack/openstack-cell1-galera-0"
Feb 26 22:15:20 crc kubenswrapper[4910]: I0226 22:15:20.946360 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/25f6d388-f925-4c92-9298-a454dd536aa6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"25f6d388-f925-4c92-9298-a454dd536aa6\") " pod="openstack/openstack-cell1-galera-0"
Feb 26 22:15:20 crc kubenswrapper[4910]: I0226 22:15:20.946383 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25f6d388-f925-4c92-9298-a454dd536aa6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"25f6d388-f925-4c92-9298-a454dd536aa6\") " pod="openstack/openstack-cell1-galera-0"
Feb 26 22:15:20 crc kubenswrapper[4910]: I0226 22:15:20.946417 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3b90aecd-3c47-4f37-a3c2-55df6d0c7968\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b90aecd-3c47-4f37-a3c2-55df6d0c7968\") pod \"openstack-cell1-galera-0\" (UID: \"25f6d388-f925-4c92-9298-a454dd536aa6\") " pod="openstack/openstack-cell1-galera-0"
Feb 26 22:15:20 crc kubenswrapper[4910]: I0226 22:15:20.946442 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/25f6d388-f925-4c92-9298-a454dd536aa6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"25f6d388-f925-4c92-9298-a454dd536aa6\") " pod="openstack/openstack-cell1-galera-0"
Feb 26 22:15:20 crc kubenswrapper[4910]: I0226 22:15:20.946463 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25f6d388-f925-4c92-9298-a454dd536aa6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"25f6d388-f925-4c92-9298-a454dd536aa6\") " pod="openstack/openstack-cell1-galera-0"
Feb 26 22:15:20 crc kubenswrapper[4910]: I0226 22:15:20.946485 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9vtp\" (UniqueName: \"kubernetes.io/projected/25f6d388-f925-4c92-9298-a454dd536aa6-kube-api-access-n9vtp\") pod \"openstack-cell1-galera-0\" (UID: \"25f6d388-f925-4c92-9298-a454dd536aa6\") " pod="openstack/openstack-cell1-galera-0"
Feb 26 22:15:20 crc kubenswrapper[4910]: I0226 22:15:20.946527 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/25f6d388-f925-4c92-9298-a454dd536aa6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"25f6d388-f925-4c92-9298-a454dd536aa6\") " pod="openstack/openstack-cell1-galera-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.048250 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9vtp\" (UniqueName: \"kubernetes.io/projected/25f6d388-f925-4c92-9298-a454dd536aa6-kube-api-access-n9vtp\") pod \"openstack-cell1-galera-0\" (UID: \"25f6d388-f925-4c92-9298-a454dd536aa6\") " pod="openstack/openstack-cell1-galera-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.048326 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/25f6d388-f925-4c92-9298-a454dd536aa6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"25f6d388-f925-4c92-9298-a454dd536aa6\") " pod="openstack/openstack-cell1-galera-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.048361 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/25f6d388-f925-4c92-9298-a454dd536aa6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"25f6d388-f925-4c92-9298-a454dd536aa6\") " pod="openstack/openstack-cell1-galera-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.048390 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/25f6d388-f925-4c92-9298-a454dd536aa6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"25f6d388-f925-4c92-9298-a454dd536aa6\") " pod="openstack/openstack-cell1-galera-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.048409 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25f6d388-f925-4c92-9298-a454dd536aa6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"25f6d388-f925-4c92-9298-a454dd536aa6\") " pod="openstack/openstack-cell1-galera-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.048452 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3b90aecd-3c47-4f37-a3c2-55df6d0c7968\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b90aecd-3c47-4f37-a3c2-55df6d0c7968\") pod \"openstack-cell1-galera-0\" (UID: \"25f6d388-f925-4c92-9298-a454dd536aa6\") " pod="openstack/openstack-cell1-galera-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.048476 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/25f6d388-f925-4c92-9298-a454dd536aa6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"25f6d388-f925-4c92-9298-a454dd536aa6\") " pod="openstack/openstack-cell1-galera-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.048497 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25f6d388-f925-4c92-9298-a454dd536aa6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"25f6d388-f925-4c92-9298-a454dd536aa6\") " pod="openstack/openstack-cell1-galera-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.049252 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/25f6d388-f925-4c92-9298-a454dd536aa6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"25f6d388-f925-4c92-9298-a454dd536aa6\") " pod="openstack/openstack-cell1-galera-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.050999 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/25f6d388-f925-4c92-9298-a454dd536aa6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"25f6d388-f925-4c92-9298-a454dd536aa6\") " pod="openstack/openstack-cell1-galera-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.051626 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25f6d388-f925-4c92-9298-a454dd536aa6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"25f6d388-f925-4c92-9298-a454dd536aa6\") " pod="openstack/openstack-cell1-galera-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.053603 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/25f6d388-f925-4c92-9298-a454dd536aa6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"25f6d388-f925-4c92-9298-a454dd536aa6\") " pod="openstack/openstack-cell1-galera-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.056802 4910 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.056832 4910 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3b90aecd-3c47-4f37-a3c2-55df6d0c7968\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b90aecd-3c47-4f37-a3c2-55df6d0c7968\") pod \"openstack-cell1-galera-0\" (UID: \"25f6d388-f925-4c92-9298-a454dd536aa6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0411545c807f3e81e6f296776c6c210e11da06c55fa0a621079f95cac4cb2181/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.066381 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/25f6d388-f925-4c92-9298-a454dd536aa6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"25f6d388-f925-4c92-9298-a454dd536aa6\") " pod="openstack/openstack-cell1-galera-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.067596 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25f6d388-f925-4c92-9298-a454dd536aa6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"25f6d388-f925-4c92-9298-a454dd536aa6\") " pod="openstack/openstack-cell1-galera-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.069970 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9vtp\" (UniqueName: \"kubernetes.io/projected/25f6d388-f925-4c92-9298-a454dd536aa6-kube-api-access-n9vtp\") pod \"openstack-cell1-galera-0\" (UID: \"25f6d388-f925-4c92-9298-a454dd536aa6\") " pod="openstack/openstack-cell1-galera-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.083674 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3b90aecd-3c47-4f37-a3c2-55df6d0c7968\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b90aecd-3c47-4f37-a3c2-55df6d0c7968\") pod \"openstack-cell1-galera-0\" (UID: \"25f6d388-f925-4c92-9298-a454dd536aa6\") " pod="openstack/openstack-cell1-galera-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.176176 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.177083 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.184828 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-lfx2q"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.186476 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.186634 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.197284 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.232246 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.251998 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e2f6ce8-eda1-4196-954a-3367ccc66e33-config-data\") pod \"memcached-0\" (UID: \"9e2f6ce8-eda1-4196-954a-3367ccc66e33\") " pod="openstack/memcached-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.252136 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e2f6ce8-eda1-4196-954a-3367ccc66e33-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9e2f6ce8-eda1-4196-954a-3367ccc66e33\") " pod="openstack/memcached-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.252205 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tcmt\" (UniqueName: \"kubernetes.io/projected/9e2f6ce8-eda1-4196-954a-3367ccc66e33-kube-api-access-2tcmt\") pod \"memcached-0\" (UID: \"9e2f6ce8-eda1-4196-954a-3367ccc66e33\") " pod="openstack/memcached-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.252296 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9e2f6ce8-eda1-4196-954a-3367ccc66e33-kolla-config\") pod \"memcached-0\" (UID: \"9e2f6ce8-eda1-4196-954a-3367ccc66e33\") " pod="openstack/memcached-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.252404 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e2f6ce8-eda1-4196-954a-3367ccc66e33-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9e2f6ce8-eda1-4196-954a-3367ccc66e33\") " pod="openstack/memcached-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.353477 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e2f6ce8-eda1-4196-954a-3367ccc66e33-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9e2f6ce8-eda1-4196-954a-3367ccc66e33\") " pod="openstack/memcached-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.354303 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e2f6ce8-eda1-4196-954a-3367ccc66e33-config-data\") pod \"memcached-0\" (UID: \"9e2f6ce8-eda1-4196-954a-3367ccc66e33\") " pod="openstack/memcached-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.354432 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e2f6ce8-eda1-4196-954a-3367ccc66e33-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9e2f6ce8-eda1-4196-954a-3367ccc66e33\") " pod="openstack/memcached-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.354457 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tcmt\" (UniqueName: \"kubernetes.io/projected/9e2f6ce8-eda1-4196-954a-3367ccc66e33-kube-api-access-2tcmt\") pod \"memcached-0\" (UID: \"9e2f6ce8-eda1-4196-954a-3367ccc66e33\") " pod="openstack/memcached-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.354529 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9e2f6ce8-eda1-4196-954a-3367ccc66e33-kolla-config\") pod \"memcached-0\" (UID: \"9e2f6ce8-eda1-4196-954a-3367ccc66e33\") " pod="openstack/memcached-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.355303 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9e2f6ce8-eda1-4196-954a-3367ccc66e33-kolla-config\") pod \"memcached-0\" (UID: \"9e2f6ce8-eda1-4196-954a-3367ccc66e33\") " pod="openstack/memcached-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.356957 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e2f6ce8-eda1-4196-954a-3367ccc66e33-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9e2f6ce8-eda1-4196-954a-3367ccc66e33\") " pod="openstack/memcached-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.358276 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e2f6ce8-eda1-4196-954a-3367ccc66e33-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9e2f6ce8-eda1-4196-954a-3367ccc66e33\") " pod="openstack/memcached-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.370751 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e2f6ce8-eda1-4196-954a-3367ccc66e33-config-data\") pod \"memcached-0\" (UID: \"9e2f6ce8-eda1-4196-954a-3367ccc66e33\") " pod="openstack/memcached-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.371576 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tcmt\" (UniqueName: \"kubernetes.io/projected/9e2f6ce8-eda1-4196-954a-3367ccc66e33-kube-api-access-2tcmt\") pod \"memcached-0\" (UID: \"9e2f6ce8-eda1-4196-954a-3367ccc66e33\") " pod="openstack/memcached-0"
Feb 26 22:15:21 crc kubenswrapper[4910]: I0226 22:15:21.510753 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 26 22:15:23 crc kubenswrapper[4910]: I0226 22:15:23.248871 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 26 22:15:23 crc kubenswrapper[4910]: I0226 22:15:23.249806 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 26 22:15:23 crc kubenswrapper[4910]: I0226 22:15:23.259408 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-khnqk"
Feb 26 22:15:23 crc kubenswrapper[4910]: I0226 22:15:23.287051 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jppcm\" (UniqueName: \"kubernetes.io/projected/5ef2ec31-3ae1-42ea-aaa5-5fec166df179-kube-api-access-jppcm\") pod \"kube-state-metrics-0\" (UID: \"5ef2ec31-3ae1-42ea-aaa5-5fec166df179\") " pod="openstack/kube-state-metrics-0"
Feb 26 22:15:23 crc kubenswrapper[4910]: I0226 22:15:23.306691 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 26 22:15:23 crc kubenswrapper[4910]: I0226 22:15:23.390832 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jppcm\" (UniqueName: \"kubernetes.io/projected/5ef2ec31-3ae1-42ea-aaa5-5fec166df179-kube-api-access-jppcm\") pod \"kube-state-metrics-0\" (UID: \"5ef2ec31-3ae1-42ea-aaa5-5fec166df179\") " pod="openstack/kube-state-metrics-0"
Feb 26 22:15:23 crc kubenswrapper[4910]: I0226 22:15:23.420938 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jppcm\" (UniqueName: \"kubernetes.io/projected/5ef2ec31-3ae1-42ea-aaa5-5fec166df179-kube-api-access-jppcm\") pod \"kube-state-metrics-0\" (UID: \"5ef2ec31-3ae1-42ea-aaa5-5fec166df179\") " pod="openstack/kube-state-metrics-0"
Feb 26 22:15:23 crc kubenswrapper[4910]: I0226 22:15:23.566758 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.130747 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.132449 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0"
Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.134383 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config"
Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.134574 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0"
Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.134596 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-6g97n"
Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.135713 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config"
Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.136366 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated"
Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.148071 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.202895 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/60fb0251-1bd0-4e06-a368-5aceb0afaa87-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"60fb0251-1bd0-4e06-a368-5aceb0afaa87\") " pod="openstack/alertmanager-metric-storage-0"
Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.202953 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/60fb0251-1bd0-4e06-a368-5aceb0afaa87-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"60fb0251-1bd0-4e06-a368-5aceb0afaa87\") " pod="openstack/alertmanager-metric-storage-0"
Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.202973 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/60fb0251-1bd0-4e06-a368-5aceb0afaa87-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"60fb0251-1bd0-4e06-a368-5aceb0afaa87\") " pod="openstack/alertmanager-metric-storage-0"
Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.203183 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/60fb0251-1bd0-4e06-a368-5aceb0afaa87-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"60fb0251-1bd0-4e06-a368-5aceb0afaa87\") " pod="openstack/alertmanager-metric-storage-0"
Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.203280 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/60fb0251-1bd0-4e06-a368-5aceb0afaa87-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"60fb0251-1bd0-4e06-a368-5aceb0afaa87\") " pod="openstack/alertmanager-metric-storage-0"
Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.203383 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54747\" (UniqueName: \"kubernetes.io/projected/60fb0251-1bd0-4e06-a368-5aceb0afaa87-kube-api-access-54747\") pod \"alertmanager-metric-storage-0\" (UID: \"60fb0251-1bd0-4e06-a368-5aceb0afaa87\") " pod="openstack/alertmanager-metric-storage-0"
Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.203511 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/60fb0251-1bd0-4e06-a368-5aceb0afaa87-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"60fb0251-1bd0-4e06-a368-5aceb0afaa87\") " pod="openstack/alertmanager-metric-storage-0"
Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.304664 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54747\" (UniqueName: \"kubernetes.io/projected/60fb0251-1bd0-4e06-a368-5aceb0afaa87-kube-api-access-54747\") pod \"alertmanager-metric-storage-0\" (UID: \"60fb0251-1bd0-4e06-a368-5aceb0afaa87\") " pod="openstack/alertmanager-metric-storage-0"
Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.304746 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/60fb0251-1bd0-4e06-a368-5aceb0afaa87-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"60fb0251-1bd0-4e06-a368-5aceb0afaa87\") " pod="openstack/alertmanager-metric-storage-0"
Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.304782 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/60fb0251-1bd0-4e06-a368-5aceb0afaa87-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"60fb0251-1bd0-4e06-a368-5aceb0afaa87\") " pod="openstack/alertmanager-metric-storage-0"
Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.304807 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/60fb0251-1bd0-4e06-a368-5aceb0afaa87-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"60fb0251-1bd0-4e06-a368-5aceb0afaa87\") " pod="openstack/alertmanager-metric-storage-0"
Feb 26 22:15:24 crc
kubenswrapper[4910]: I0226 22:15:24.304823 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/60fb0251-1bd0-4e06-a368-5aceb0afaa87-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"60fb0251-1bd0-4e06-a368-5aceb0afaa87\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.304867 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/60fb0251-1bd0-4e06-a368-5aceb0afaa87-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"60fb0251-1bd0-4e06-a368-5aceb0afaa87\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.304885 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/60fb0251-1bd0-4e06-a368-5aceb0afaa87-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"60fb0251-1bd0-4e06-a368-5aceb0afaa87\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.305355 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/60fb0251-1bd0-4e06-a368-5aceb0afaa87-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"60fb0251-1bd0-4e06-a368-5aceb0afaa87\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.308445 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/60fb0251-1bd0-4e06-a368-5aceb0afaa87-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"60fb0251-1bd0-4e06-a368-5aceb0afaa87\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: 
I0226 22:15:24.309487 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/60fb0251-1bd0-4e06-a368-5aceb0afaa87-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"60fb0251-1bd0-4e06-a368-5aceb0afaa87\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.310612 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/60fb0251-1bd0-4e06-a368-5aceb0afaa87-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"60fb0251-1bd0-4e06-a368-5aceb0afaa87\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.311943 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/60fb0251-1bd0-4e06-a368-5aceb0afaa87-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"60fb0251-1bd0-4e06-a368-5aceb0afaa87\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.312709 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/60fb0251-1bd0-4e06-a368-5aceb0afaa87-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"60fb0251-1bd0-4e06-a368-5aceb0afaa87\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.331808 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54747\" (UniqueName: \"kubernetes.io/projected/60fb0251-1bd0-4e06-a368-5aceb0afaa87-kube-api-access-54747\") pod \"alertmanager-metric-storage-0\" (UID: \"60fb0251-1bd0-4e06-a368-5aceb0afaa87\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.451288 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.657332 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.659733 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.666373 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.666562 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.666696 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.666829 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.669228 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.669405 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.669463 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.669488 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-v6djc" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.694262 4910 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.711015 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fee83ae0-3c6e-418a-b853-8f63917457f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fee83ae0-3c6e-418a-b853-8f63917457f0\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.711304 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2f98425b-65de-48d2-be21-2c443218eacd-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.711391 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2f98425b-65de-48d2-be21-2c443218eacd-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.711492 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22w4f\" (UniqueName: \"kubernetes.io/projected/2f98425b-65de-48d2-be21-2c443218eacd-kube-api-access-22w4f\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.711618 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/2f98425b-65de-48d2-be21-2c443218eacd-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.711705 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2f98425b-65de-48d2-be21-2c443218eacd-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.711820 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2f98425b-65de-48d2-be21-2c443218eacd-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.711901 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2f98425b-65de-48d2-be21-2c443218eacd-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.711990 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2f98425b-65de-48d2-be21-2c443218eacd-config\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.712060 4910 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2f98425b-65de-48d2-be21-2c443218eacd-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.814041 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2f98425b-65de-48d2-be21-2c443218eacd-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.814107 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2f98425b-65de-48d2-be21-2c443218eacd-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.814130 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2f98425b-65de-48d2-be21-2c443218eacd-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.814169 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2f98425b-65de-48d2-be21-2c443218eacd-config\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.814190 4910 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2f98425b-65de-48d2-be21-2c443218eacd-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.814236 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fee83ae0-3c6e-418a-b853-8f63917457f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fee83ae0-3c6e-418a-b853-8f63917457f0\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.814262 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2f98425b-65de-48d2-be21-2c443218eacd-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.814284 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2f98425b-65de-48d2-be21-2c443218eacd-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.814324 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22w4f\" (UniqueName: \"kubernetes.io/projected/2f98425b-65de-48d2-be21-2c443218eacd-kube-api-access-22w4f\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " 
pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.814385 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2f98425b-65de-48d2-be21-2c443218eacd-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.815206 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2f98425b-65de-48d2-be21-2c443218eacd-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.815813 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2f98425b-65de-48d2-be21-2c443218eacd-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.816395 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2f98425b-65de-48d2-be21-2c443218eacd-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.818397 4910 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.818449 4910 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fee83ae0-3c6e-418a-b853-8f63917457f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fee83ae0-3c6e-418a-b853-8f63917457f0\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b543beeb658b5e1c7e7ad53857c548a6d13c1d75fd19a37886a029b0ffecc6a4/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.818842 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2f98425b-65de-48d2-be21-2c443218eacd-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.820042 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2f98425b-65de-48d2-be21-2c443218eacd-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.822188 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2f98425b-65de-48d2-be21-2c443218eacd-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.824815 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2f98425b-65de-48d2-be21-2c443218eacd-thanos-prometheus-http-client-file\") 
pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.825536 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2f98425b-65de-48d2-be21-2c443218eacd-config\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.832754 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22w4f\" (UniqueName: \"kubernetes.io/projected/2f98425b-65de-48d2-be21-2c443218eacd-kube-api-access-22w4f\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.863128 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fee83ae0-3c6e-418a-b853-8f63917457f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fee83ae0-3c6e-418a-b853-8f63917457f0\") pod \"prometheus-metric-storage-0\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:24 crc kubenswrapper[4910]: I0226 22:15:24.986230 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.773345 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.775091 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.777044 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.777400 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.777478 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-54k2v" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.777939 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.779080 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.798578 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.822857 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ddsmc"] Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.824289 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ddsmc" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.829986 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-6sh6r" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.830239 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.830458 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.844677 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-6tz7l"] Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.853541 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-6tz7l" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.864707 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ddsmc"] Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.883486 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-6tz7l"] Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.970551 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/17efae9f-593e-4c9f-8803-9090fff6e616-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"17efae9f-593e-4c9f-8803-9090fff6e616\") " pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.970615 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed-var-run\") pod \"ovn-controller-ovs-6tz7l\" (UID: \"3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed\") " 
pod="openstack/ovn-controller-ovs-6tz7l" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.970654 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpx76\" (UniqueName: \"kubernetes.io/projected/334061f5-f54a-41b2-8c49-66695cb3639a-kube-api-access-lpx76\") pod \"ovn-controller-ddsmc\" (UID: \"334061f5-f54a-41b2-8c49-66695cb3639a\") " pod="openstack/ovn-controller-ddsmc" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.970686 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed-scripts\") pod \"ovn-controller-ovs-6tz7l\" (UID: \"3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed\") " pod="openstack/ovn-controller-ovs-6tz7l" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.970711 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17efae9f-593e-4c9f-8803-9090fff6e616-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"17efae9f-593e-4c9f-8803-9090fff6e616\") " pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.970744 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77n7x\" (UniqueName: \"kubernetes.io/projected/17efae9f-593e-4c9f-8803-9090fff6e616-kube-api-access-77n7x\") pod \"ovsdbserver-nb-0\" (UID: \"17efae9f-593e-4c9f-8803-9090fff6e616\") " pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.970765 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/334061f5-f54a-41b2-8c49-66695cb3639a-scripts\") pod \"ovn-controller-ddsmc\" (UID: \"334061f5-f54a-41b2-8c49-66695cb3639a\") " pod="openstack/ovn-controller-ddsmc" Feb 26 22:15:27 crc 
kubenswrapper[4910]: I0226 22:15:27.970794 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqdn4\" (UniqueName: \"kubernetes.io/projected/3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed-kube-api-access-gqdn4\") pod \"ovn-controller-ovs-6tz7l\" (UID: \"3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed\") " pod="openstack/ovn-controller-ovs-6tz7l" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.970870 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed-var-log\") pod \"ovn-controller-ovs-6tz7l\" (UID: \"3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed\") " pod="openstack/ovn-controller-ovs-6tz7l" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.970894 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17efae9f-593e-4c9f-8803-9090fff6e616-config\") pod \"ovsdbserver-nb-0\" (UID: \"17efae9f-593e-4c9f-8803-9090fff6e616\") " pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.970918 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/334061f5-f54a-41b2-8c49-66695cb3639a-var-run\") pod \"ovn-controller-ddsmc\" (UID: \"334061f5-f54a-41b2-8c49-66695cb3639a\") " pod="openstack/ovn-controller-ddsmc" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.970938 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/334061f5-f54a-41b2-8c49-66695cb3639a-var-run-ovn\") pod \"ovn-controller-ddsmc\" (UID: \"334061f5-f54a-41b2-8c49-66695cb3639a\") " pod="openstack/ovn-controller-ddsmc" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.970967 4910 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17efae9f-593e-4c9f-8803-9090fff6e616-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"17efae9f-593e-4c9f-8803-9090fff6e616\") " pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.970993 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed-var-lib\") pod \"ovn-controller-ovs-6tz7l\" (UID: \"3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed\") " pod="openstack/ovn-controller-ovs-6tz7l" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.971029 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/334061f5-f54a-41b2-8c49-66695cb3639a-combined-ca-bundle\") pod \"ovn-controller-ddsmc\" (UID: \"334061f5-f54a-41b2-8c49-66695cb3639a\") " pod="openstack/ovn-controller-ddsmc" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.971072 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/334061f5-f54a-41b2-8c49-66695cb3639a-ovn-controller-tls-certs\") pod \"ovn-controller-ddsmc\" (UID: \"334061f5-f54a-41b2-8c49-66695cb3639a\") " pod="openstack/ovn-controller-ddsmc" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.971096 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17efae9f-593e-4c9f-8803-9090fff6e616-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"17efae9f-593e-4c9f-8803-9090fff6e616\") " pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.971135 4910 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed-etc-ovs\") pod \"ovn-controller-ovs-6tz7l\" (UID: \"3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed\") " pod="openstack/ovn-controller-ovs-6tz7l" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.971211 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7ec29af9-8203-4306-b6ff-d1868a2eb919\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7ec29af9-8203-4306-b6ff-d1868a2eb919\") pod \"ovsdbserver-nb-0\" (UID: \"17efae9f-593e-4c9f-8803-9090fff6e616\") " pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.971237 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17efae9f-593e-4c9f-8803-9090fff6e616-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"17efae9f-593e-4c9f-8803-9090fff6e616\") " pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:27 crc kubenswrapper[4910]: I0226 22:15:27.971260 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/334061f5-f54a-41b2-8c49-66695cb3639a-var-log-ovn\") pod \"ovn-controller-ddsmc\" (UID: \"334061f5-f54a-41b2-8c49-66695cb3639a\") " pod="openstack/ovn-controller-ddsmc" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.072289 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpx76\" (UniqueName: \"kubernetes.io/projected/334061f5-f54a-41b2-8c49-66695cb3639a-kube-api-access-lpx76\") pod \"ovn-controller-ddsmc\" (UID: \"334061f5-f54a-41b2-8c49-66695cb3639a\") " pod="openstack/ovn-controller-ddsmc" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.072349 4910 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed-scripts\") pod \"ovn-controller-ovs-6tz7l\" (UID: \"3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed\") " pod="openstack/ovn-controller-ovs-6tz7l" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.072380 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17efae9f-593e-4c9f-8803-9090fff6e616-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"17efae9f-593e-4c9f-8803-9090fff6e616\") " pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.072427 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77n7x\" (UniqueName: \"kubernetes.io/projected/17efae9f-593e-4c9f-8803-9090fff6e616-kube-api-access-77n7x\") pod \"ovsdbserver-nb-0\" (UID: \"17efae9f-593e-4c9f-8803-9090fff6e616\") " pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.072449 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/334061f5-f54a-41b2-8c49-66695cb3639a-scripts\") pod \"ovn-controller-ddsmc\" (UID: \"334061f5-f54a-41b2-8c49-66695cb3639a\") " pod="openstack/ovn-controller-ddsmc" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.072482 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqdn4\" (UniqueName: \"kubernetes.io/projected/3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed-kube-api-access-gqdn4\") pod \"ovn-controller-ovs-6tz7l\" (UID: \"3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed\") " pod="openstack/ovn-controller-ovs-6tz7l" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.072514 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed-var-log\") 
pod \"ovn-controller-ovs-6tz7l\" (UID: \"3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed\") " pod="openstack/ovn-controller-ovs-6tz7l" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.072536 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17efae9f-593e-4c9f-8803-9090fff6e616-config\") pod \"ovsdbserver-nb-0\" (UID: \"17efae9f-593e-4c9f-8803-9090fff6e616\") " pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.072561 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/334061f5-f54a-41b2-8c49-66695cb3639a-var-run\") pod \"ovn-controller-ddsmc\" (UID: \"334061f5-f54a-41b2-8c49-66695cb3639a\") " pod="openstack/ovn-controller-ddsmc" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.072585 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/334061f5-f54a-41b2-8c49-66695cb3639a-var-run-ovn\") pod \"ovn-controller-ddsmc\" (UID: \"334061f5-f54a-41b2-8c49-66695cb3639a\") " pod="openstack/ovn-controller-ddsmc" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.072627 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17efae9f-593e-4c9f-8803-9090fff6e616-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"17efae9f-593e-4c9f-8803-9090fff6e616\") " pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.072653 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed-var-lib\") pod \"ovn-controller-ovs-6tz7l\" (UID: \"3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed\") " pod="openstack/ovn-controller-ovs-6tz7l" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 
22:15:28.072673 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/334061f5-f54a-41b2-8c49-66695cb3639a-combined-ca-bundle\") pod \"ovn-controller-ddsmc\" (UID: \"334061f5-f54a-41b2-8c49-66695cb3639a\") " pod="openstack/ovn-controller-ddsmc" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.072696 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/334061f5-f54a-41b2-8c49-66695cb3639a-ovn-controller-tls-certs\") pod \"ovn-controller-ddsmc\" (UID: \"334061f5-f54a-41b2-8c49-66695cb3639a\") " pod="openstack/ovn-controller-ddsmc" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.072717 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17efae9f-593e-4c9f-8803-9090fff6e616-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"17efae9f-593e-4c9f-8803-9090fff6e616\") " pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.072751 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed-etc-ovs\") pod \"ovn-controller-ovs-6tz7l\" (UID: \"3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed\") " pod="openstack/ovn-controller-ovs-6tz7l" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.072823 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7ec29af9-8203-4306-b6ff-d1868a2eb919\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7ec29af9-8203-4306-b6ff-d1868a2eb919\") pod \"ovsdbserver-nb-0\" (UID: \"17efae9f-593e-4c9f-8803-9090fff6e616\") " pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.072852 4910 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17efae9f-593e-4c9f-8803-9090fff6e616-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"17efae9f-593e-4c9f-8803-9090fff6e616\") " pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.072876 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/334061f5-f54a-41b2-8c49-66695cb3639a-var-log-ovn\") pod \"ovn-controller-ddsmc\" (UID: \"334061f5-f54a-41b2-8c49-66695cb3639a\") " pod="openstack/ovn-controller-ddsmc" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.072961 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/17efae9f-593e-4c9f-8803-9090fff6e616-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"17efae9f-593e-4c9f-8803-9090fff6e616\") " pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.072991 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed-var-run\") pod \"ovn-controller-ovs-6tz7l\" (UID: \"3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed\") " pod="openstack/ovn-controller-ovs-6tz7l" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.073468 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17efae9f-593e-4c9f-8803-9090fff6e616-config\") pod \"ovsdbserver-nb-0\" (UID: \"17efae9f-593e-4c9f-8803-9090fff6e616\") " pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.073619 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed-var-run\") pod \"ovn-controller-ovs-6tz7l\" (UID: \"3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed\") " 
pod="openstack/ovn-controller-ovs-6tz7l" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.073637 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/334061f5-f54a-41b2-8c49-66695cb3639a-var-run\") pod \"ovn-controller-ddsmc\" (UID: \"334061f5-f54a-41b2-8c49-66695cb3639a\") " pod="openstack/ovn-controller-ddsmc" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.073732 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/334061f5-f54a-41b2-8c49-66695cb3639a-var-run-ovn\") pod \"ovn-controller-ddsmc\" (UID: \"334061f5-f54a-41b2-8c49-66695cb3639a\") " pod="openstack/ovn-controller-ddsmc" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.073822 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed-etc-ovs\") pod \"ovn-controller-ovs-6tz7l\" (UID: \"3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed\") " pod="openstack/ovn-controller-ovs-6tz7l" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.074938 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed-var-log\") pod \"ovn-controller-ovs-6tz7l\" (UID: \"3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed\") " pod="openstack/ovn-controller-ovs-6tz7l" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.075122 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/334061f5-f54a-41b2-8c49-66695cb3639a-var-log-ovn\") pod \"ovn-controller-ddsmc\" (UID: \"334061f5-f54a-41b2-8c49-66695cb3639a\") " pod="openstack/ovn-controller-ddsmc" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.075879 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/17efae9f-593e-4c9f-8803-9090fff6e616-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"17efae9f-593e-4c9f-8803-9090fff6e616\") " pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.075950 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/334061f5-f54a-41b2-8c49-66695cb3639a-scripts\") pod \"ovn-controller-ddsmc\" (UID: \"334061f5-f54a-41b2-8c49-66695cb3639a\") " pod="openstack/ovn-controller-ddsmc" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.076195 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17efae9f-593e-4c9f-8803-9090fff6e616-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"17efae9f-593e-4c9f-8803-9090fff6e616\") " pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.076214 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed-var-lib\") pod \"ovn-controller-ovs-6tz7l\" (UID: \"3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed\") " pod="openstack/ovn-controller-ovs-6tz7l" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.076898 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed-scripts\") pod \"ovn-controller-ovs-6tz7l\" (UID: \"3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed\") " pod="openstack/ovn-controller-ovs-6tz7l" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.078511 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/334061f5-f54a-41b2-8c49-66695cb3639a-combined-ca-bundle\") pod \"ovn-controller-ddsmc\" (UID: \"334061f5-f54a-41b2-8c49-66695cb3639a\") " pod="openstack/ovn-controller-ddsmc" Feb 26 22:15:28 crc 
kubenswrapper[4910]: I0226 22:15:28.081273 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17efae9f-593e-4c9f-8803-9090fff6e616-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"17efae9f-593e-4c9f-8803-9090fff6e616\") " pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.081558 4910 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.081612 4910 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7ec29af9-8203-4306-b6ff-d1868a2eb919\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7ec29af9-8203-4306-b6ff-d1868a2eb919\") pod \"ovsdbserver-nb-0\" (UID: \"17efae9f-593e-4c9f-8803-9090fff6e616\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/882c1c459d110589906beea5cdb5d927d21671ffdbb36a9c75dbf34014c9b0ca/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.088764 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/334061f5-f54a-41b2-8c49-66695cb3639a-ovn-controller-tls-certs\") pod \"ovn-controller-ddsmc\" (UID: \"334061f5-f54a-41b2-8c49-66695cb3639a\") " pod="openstack/ovn-controller-ddsmc" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.089257 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17efae9f-593e-4c9f-8803-9090fff6e616-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"17efae9f-593e-4c9f-8803-9090fff6e616\") " pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.089264 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17efae9f-593e-4c9f-8803-9090fff6e616-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"17efae9f-593e-4c9f-8803-9090fff6e616\") " pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.099125 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpx76\" (UniqueName: \"kubernetes.io/projected/334061f5-f54a-41b2-8c49-66695cb3639a-kube-api-access-lpx76\") pod \"ovn-controller-ddsmc\" (UID: \"334061f5-f54a-41b2-8c49-66695cb3639a\") " pod="openstack/ovn-controller-ddsmc" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.101219 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqdn4\" (UniqueName: \"kubernetes.io/projected/3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed-kube-api-access-gqdn4\") pod \"ovn-controller-ovs-6tz7l\" (UID: \"3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed\") " pod="openstack/ovn-controller-ovs-6tz7l" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.101622 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77n7x\" (UniqueName: \"kubernetes.io/projected/17efae9f-593e-4c9f-8803-9090fff6e616-kube-api-access-77n7x\") pod \"ovsdbserver-nb-0\" (UID: \"17efae9f-593e-4c9f-8803-9090fff6e616\") " pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.134042 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7ec29af9-8203-4306-b6ff-d1868a2eb919\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7ec29af9-8203-4306-b6ff-d1868a2eb919\") pod \"ovsdbserver-nb-0\" (UID: \"17efae9f-593e-4c9f-8803-9090fff6e616\") " pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.144807 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ddsmc" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.215325 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-6tz7l" Feb 26 22:15:28 crc kubenswrapper[4910]: I0226 22:15:28.399085 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.515072 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.518927 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.521382 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-jrs2x" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.521449 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.523477 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.523535 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.528291 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.643943 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cb62085e-02e8-4670-8ff4-dc1a7b242eb8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"cb62085e-02e8-4670-8ff4-dc1a7b242eb8\") " pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:31 crc 
kubenswrapper[4910]: I0226 22:15:31.644002 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb62085e-02e8-4670-8ff4-dc1a7b242eb8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"cb62085e-02e8-4670-8ff4-dc1a7b242eb8\") " pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.644064 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-807a79fa-3618-4326-8cc5-d29ee0214945\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-807a79fa-3618-4326-8cc5-d29ee0214945\") pod \"ovsdbserver-sb-0\" (UID: \"cb62085e-02e8-4670-8ff4-dc1a7b242eb8\") " pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.644090 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb62085e-02e8-4670-8ff4-dc1a7b242eb8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"cb62085e-02e8-4670-8ff4-dc1a7b242eb8\") " pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.644193 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb62085e-02e8-4670-8ff4-dc1a7b242eb8-config\") pod \"ovsdbserver-sb-0\" (UID: \"cb62085e-02e8-4670-8ff4-dc1a7b242eb8\") " pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.644227 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb62085e-02e8-4670-8ff4-dc1a7b242eb8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"cb62085e-02e8-4670-8ff4-dc1a7b242eb8\") " pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 
22:15:31.644269 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jptrt\" (UniqueName: \"kubernetes.io/projected/cb62085e-02e8-4670-8ff4-dc1a7b242eb8-kube-api-access-jptrt\") pod \"ovsdbserver-sb-0\" (UID: \"cb62085e-02e8-4670-8ff4-dc1a7b242eb8\") " pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.644306 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb62085e-02e8-4670-8ff4-dc1a7b242eb8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"cb62085e-02e8-4670-8ff4-dc1a7b242eb8\") " pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.745609 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb62085e-02e8-4670-8ff4-dc1a7b242eb8-config\") pod \"ovsdbserver-sb-0\" (UID: \"cb62085e-02e8-4670-8ff4-dc1a7b242eb8\") " pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.745667 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb62085e-02e8-4670-8ff4-dc1a7b242eb8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"cb62085e-02e8-4670-8ff4-dc1a7b242eb8\") " pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.745706 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jptrt\" (UniqueName: \"kubernetes.io/projected/cb62085e-02e8-4670-8ff4-dc1a7b242eb8-kube-api-access-jptrt\") pod \"ovsdbserver-sb-0\" (UID: \"cb62085e-02e8-4670-8ff4-dc1a7b242eb8\") " pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.745745 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb62085e-02e8-4670-8ff4-dc1a7b242eb8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"cb62085e-02e8-4670-8ff4-dc1a7b242eb8\") " pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.745784 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cb62085e-02e8-4670-8ff4-dc1a7b242eb8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"cb62085e-02e8-4670-8ff4-dc1a7b242eb8\") " pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.745810 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb62085e-02e8-4670-8ff4-dc1a7b242eb8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"cb62085e-02e8-4670-8ff4-dc1a7b242eb8\") " pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.745867 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-807a79fa-3618-4326-8cc5-d29ee0214945\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-807a79fa-3618-4326-8cc5-d29ee0214945\") pod \"ovsdbserver-sb-0\" (UID: \"cb62085e-02e8-4670-8ff4-dc1a7b242eb8\") " pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.745892 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb62085e-02e8-4670-8ff4-dc1a7b242eb8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"cb62085e-02e8-4670-8ff4-dc1a7b242eb8\") " pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.746689 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb62085e-02e8-4670-8ff4-dc1a7b242eb8-config\") pod \"ovsdbserver-sb-0\" (UID: 
\"cb62085e-02e8-4670-8ff4-dc1a7b242eb8\") " pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.747599 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb62085e-02e8-4670-8ff4-dc1a7b242eb8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"cb62085e-02e8-4670-8ff4-dc1a7b242eb8\") " pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.762517 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cb62085e-02e8-4670-8ff4-dc1a7b242eb8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"cb62085e-02e8-4670-8ff4-dc1a7b242eb8\") " pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.769306 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb62085e-02e8-4670-8ff4-dc1a7b242eb8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"cb62085e-02e8-4670-8ff4-dc1a7b242eb8\") " pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.769352 4910 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.769397 4910 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-807a79fa-3618-4326-8cc5-d29ee0214945\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-807a79fa-3618-4326-8cc5-d29ee0214945\") pod \"ovsdbserver-sb-0\" (UID: \"cb62085e-02e8-4670-8ff4-dc1a7b242eb8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1e097c428314c070ba51a3cb10927cf203213d00514d6dd3033de9cf7374ae61/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.773977 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb62085e-02e8-4670-8ff4-dc1a7b242eb8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"cb62085e-02e8-4670-8ff4-dc1a7b242eb8\") " pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.780854 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jptrt\" (UniqueName: \"kubernetes.io/projected/cb62085e-02e8-4670-8ff4-dc1a7b242eb8-kube-api-access-jptrt\") pod \"ovsdbserver-sb-0\" (UID: \"cb62085e-02e8-4670-8ff4-dc1a7b242eb8\") " pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.784754 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb62085e-02e8-4670-8ff4-dc1a7b242eb8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"cb62085e-02e8-4670-8ff4-dc1a7b242eb8\") " pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.816812 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-807a79fa-3618-4326-8cc5-d29ee0214945\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-807a79fa-3618-4326-8cc5-d29ee0214945\") pod 
\"ovsdbserver-sb-0\" (UID: \"cb62085e-02e8-4670-8ff4-dc1a7b242eb8\") " pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:31 crc kubenswrapper[4910]: I0226 22:15:31.842248 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.154617 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-zps74"] Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.156012 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zps74" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.161441 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-grpc" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.161491 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-zps74"] Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.161918 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca-bundle" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.161940 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-http" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.161968 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-dockercfg-r97pz" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.162585 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-config" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.257015 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: 
\"kubernetes.io/secret/993d51de-20a2-4cee-856c-f0cbb1b0307d-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zps74\" (UID: \"993d51de-20a2-4cee-856c-f0cbb1b0307d\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zps74" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.257080 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/993d51de-20a2-4cee-856c-f0cbb1b0307d-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zps74\" (UID: \"993d51de-20a2-4cee-856c-f0cbb1b0307d\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zps74" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.257107 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8wnn\" (UniqueName: \"kubernetes.io/projected/993d51de-20a2-4cee-856c-f0cbb1b0307d-kube-api-access-l8wnn\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zps74\" (UID: \"993d51de-20a2-4cee-856c-f0cbb1b0307d\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zps74" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.257150 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/993d51de-20a2-4cee-856c-f0cbb1b0307d-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zps74\" (UID: \"993d51de-20a2-4cee-856c-f0cbb1b0307d\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zps74" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.257237 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/993d51de-20a2-4cee-856c-f0cbb1b0307d-config\") pod 
\"cloudkitty-lokistack-distributor-585d9bcbc-zps74\" (UID: \"993d51de-20a2-4cee-856c-f0cbb1b0307d\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zps74" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.305509 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-5lzsg"] Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.306505 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-5lzsg" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.309236 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-grpc" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.309254 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-loki-s3" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.309378 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-http" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.324767 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-5lzsg"] Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.360910 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/993d51de-20a2-4cee-856c-f0cbb1b0307d-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zps74\" (UID: \"993d51de-20a2-4cee-856c-f0cbb1b0307d\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zps74" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.360954 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k5l2\" (UniqueName: 
\"kubernetes.io/projected/961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7-kube-api-access-5k5l2\") pod \"cloudkitty-lokistack-querier-58c84b5844-5lzsg\" (UID: \"961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-5lzsg" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.361003 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/993d51de-20a2-4cee-856c-f0cbb1b0307d-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zps74\" (UID: \"993d51de-20a2-4cee-856c-f0cbb1b0307d\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zps74" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.361025 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8wnn\" (UniqueName: \"kubernetes.io/projected/993d51de-20a2-4cee-856c-f0cbb1b0307d-kube-api-access-l8wnn\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zps74\" (UID: \"993d51de-20a2-4cee-856c-f0cbb1b0307d\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zps74" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.361053 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-5lzsg\" (UID: \"961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-5lzsg" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.361083 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-5lzsg\" (UID: \"961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7\") " 
pod="openstack/cloudkitty-lokistack-querier-58c84b5844-5lzsg" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.361105 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/993d51de-20a2-4cee-856c-f0cbb1b0307d-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zps74\" (UID: \"993d51de-20a2-4cee-856c-f0cbb1b0307d\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zps74" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.361135 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/993d51de-20a2-4cee-856c-f0cbb1b0307d-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zps74\" (UID: \"993d51de-20a2-4cee-856c-f0cbb1b0307d\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zps74" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.361183 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-5lzsg\" (UID: \"961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-5lzsg" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.361214 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-5lzsg\" (UID: \"961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-5lzsg" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.361231 4910 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-5lzsg\" (UID: \"961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-5lzsg" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.362641 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/993d51de-20a2-4cee-856c-f0cbb1b0307d-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zps74\" (UID: \"993d51de-20a2-4cee-856c-f0cbb1b0307d\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zps74" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.362660 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/993d51de-20a2-4cee-856c-f0cbb1b0307d-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zps74\" (UID: \"993d51de-20a2-4cee-856c-f0cbb1b0307d\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zps74" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.367509 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/993d51de-20a2-4cee-856c-f0cbb1b0307d-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zps74\" (UID: \"993d51de-20a2-4cee-856c-f0cbb1b0307d\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zps74" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.377110 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc"] Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.378295 4910 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.381388 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-http" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.381565 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-grpc" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.384781 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8wnn\" (UniqueName: \"kubernetes.io/projected/993d51de-20a2-4cee-856c-f0cbb1b0307d-kube-api-access-l8wnn\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zps74\" (UID: \"993d51de-20a2-4cee-856c-f0cbb1b0307d\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zps74" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.385674 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/993d51de-20a2-4cee-856c-f0cbb1b0307d-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zps74\" (UID: \"993d51de-20a2-4cee-856c-f0cbb1b0307d\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zps74" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.413258 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc"] Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.463139 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-5lzsg\" (UID: \"961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-5lzsg" Feb 26 22:15:32 crc 
kubenswrapper[4910]: I0226 22:15:32.463483 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqbpp\" (UniqueName: \"kubernetes.io/projected/50fe0fae-cfef-4a5c-9b5c-0c09065c72ed-kube-api-access-lqbpp\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc\" (UID: \"50fe0fae-cfef-4a5c-9b5c-0c09065c72ed\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.463585 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50fe0fae-cfef-4a5c-9b5c-0c09065c72ed-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc\" (UID: \"50fe0fae-cfef-4a5c-9b5c-0c09065c72ed\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.463673 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-5lzsg\" (UID: \"961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-5lzsg" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.463768 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/50fe0fae-cfef-4a5c-9b5c-0c09065c72ed-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc\" (UID: \"50fe0fae-cfef-4a5c-9b5c-0c09065c72ed\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.463894 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-5lzsg\" (UID: \"961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-5lzsg" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.463977 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-5lzsg\" (UID: \"961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-5lzsg" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.464267 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50fe0fae-cfef-4a5c-9b5c-0c09065c72ed-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc\" (UID: \"50fe0fae-cfef-4a5c-9b5c-0c09065c72ed\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.464369 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k5l2\" (UniqueName: \"kubernetes.io/projected/961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7-kube-api-access-5k5l2\") pod \"cloudkitty-lokistack-querier-58c84b5844-5lzsg\" (UID: \"961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-5lzsg" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.464466 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: 
\"kubernetes.io/secret/50fe0fae-cfef-4a5c-9b5c-0c09065c72ed-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc\" (UID: \"50fe0fae-cfef-4a5c-9b5c-0c09065c72ed\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.464583 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-5lzsg\" (UID: \"961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-5lzsg" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.464851 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-5lzsg\" (UID: \"961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-5lzsg" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.464317 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-5lzsg\" (UID: \"961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-5lzsg" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.469960 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-5lzsg\" (UID: \"961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7\") " 
pod="openstack/cloudkitty-lokistack-querier-58c84b5844-5lzsg" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.478822 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-5lzsg\" (UID: \"961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-5lzsg" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.484111 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh"] Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.484627 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-5lzsg\" (UID: \"961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-5lzsg" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.489608 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.495485 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-dockercfg-6ktpt" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.495707 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-client-http" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.495892 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.496004 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.496165 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway-ca-bundle" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.496186 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-http" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.496286 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.506929 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k5l2\" (UniqueName: \"kubernetes.io/projected/961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7-kube-api-access-5k5l2\") pod \"cloudkitty-lokistack-querier-58c84b5844-5lzsg\" (UID: \"961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-5lzsg" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.521442 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zps74" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.522748 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh"] Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.532692 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf"] Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.533845 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.567450 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqbpp\" (UniqueName: \"kubernetes.io/projected/50fe0fae-cfef-4a5c-9b5c-0c09065c72ed-kube-api-access-lqbpp\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc\" (UID: \"50fe0fae-cfef-4a5c-9b5c-0c09065c72ed\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.567542 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb63b582-88ea-454f-96bc-c676e35dd7f7-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-ljjvh\" (UID: \"cb63b582-88ea-454f-96bc-c676e35dd7f7\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.567607 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/cb63b582-88ea-454f-96bc-c676e35dd7f7-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-ljjvh\" (UID: \"cb63b582-88ea-454f-96bc-c676e35dd7f7\") " 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.567644 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50fe0fae-cfef-4a5c-9b5c-0c09065c72ed-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc\" (UID: \"50fe0fae-cfef-4a5c-9b5c-0c09065c72ed\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.567774 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/50fe0fae-cfef-4a5c-9b5c-0c09065c72ed-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc\" (UID: \"50fe0fae-cfef-4a5c-9b5c-0c09065c72ed\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.567888 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/cb63b582-88ea-454f-96bc-c676e35dd7f7-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-ljjvh\" (UID: \"cb63b582-88ea-454f-96bc-c676e35dd7f7\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.567978 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb63b582-88ea-454f-96bc-c676e35dd7f7-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-ljjvh\" (UID: \"cb63b582-88ea-454f-96bc-c676e35dd7f7\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.568058 4910 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50fe0fae-cfef-4a5c-9b5c-0c09065c72ed-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc\" (UID: \"50fe0fae-cfef-4a5c-9b5c-0c09065c72ed\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.568142 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/50fe0fae-cfef-4a5c-9b5c-0c09065c72ed-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc\" (UID: \"50fe0fae-cfef-4a5c-9b5c-0c09065c72ed\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.568287 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/cb63b582-88ea-454f-96bc-c676e35dd7f7-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-ljjvh\" (UID: \"cb63b582-88ea-454f-96bc-c676e35dd7f7\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.568405 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb63b582-88ea-454f-96bc-c676e35dd7f7-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-ljjvh\" (UID: \"cb63b582-88ea-454f-96bc-c676e35dd7f7\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.569238 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: 
\"kubernetes.io/configmap/cb63b582-88ea-454f-96bc-c676e35dd7f7-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-ljjvh\" (UID: \"cb63b582-88ea-454f-96bc-c676e35dd7f7\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.569305 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmhzg\" (UniqueName: \"kubernetes.io/projected/cb63b582-88ea-454f-96bc-c676e35dd7f7-kube-api-access-jmhzg\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-ljjvh\" (UID: \"cb63b582-88ea-454f-96bc-c676e35dd7f7\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.569329 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/cb63b582-88ea-454f-96bc-c676e35dd7f7-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-ljjvh\" (UID: \"cb63b582-88ea-454f-96bc-c676e35dd7f7\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.573661 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50fe0fae-cfef-4a5c-9b5c-0c09065c72ed-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc\" (UID: \"50fe0fae-cfef-4a5c-9b5c-0c09065c72ed\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.577973 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50fe0fae-cfef-4a5c-9b5c-0c09065c72ed-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc\" (UID: \"50fe0fae-cfef-4a5c-9b5c-0c09065c72ed\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc" Feb 26 
22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.581831 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/50fe0fae-cfef-4a5c-9b5c-0c09065c72ed-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc\" (UID: \"50fe0fae-cfef-4a5c-9b5c-0c09065c72ed\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.586100 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/50fe0fae-cfef-4a5c-9b5c-0c09065c72ed-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc\" (UID: \"50fe0fae-cfef-4a5c-9b5c-0c09065c72ed\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.592476 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf"] Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.592992 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqbpp\" (UniqueName: \"kubernetes.io/projected/50fe0fae-cfef-4a5c-9b5c-0c09065c72ed-kube-api-access-lqbpp\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc\" (UID: \"50fe0fae-cfef-4a5c-9b5c-0c09065c72ed\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.633523 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-5lzsg" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.670810 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/cb63b582-88ea-454f-96bc-c676e35dd7f7-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-ljjvh\" (UID: \"cb63b582-88ea-454f-96bc-c676e35dd7f7\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.670849 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mnwr\" (UniqueName: \"kubernetes.io/projected/79645824-b55e-43e0-acc2-d0b64e9c7326-kube-api-access-4mnwr\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mpzzf\" (UID: \"79645824-b55e-43e0-acc2-d0b64e9c7326\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.670878 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79645824-b55e-43e0-acc2-d0b64e9c7326-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mpzzf\" (UID: \"79645824-b55e-43e0-acc2-d0b64e9c7326\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.670896 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/79645824-b55e-43e0-acc2-d0b64e9c7326-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mpzzf\" (UID: \"79645824-b55e-43e0-acc2-d0b64e9c7326\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:32 crc kubenswrapper[4910]: E0226 22:15:32.670918 4910 secret.go:188] Couldn't get secret openstack/cloudkitty-lokistack-gateway-http: secret 
"cloudkitty-lokistack-gateway-http" not found Feb 26 22:15:32 crc kubenswrapper[4910]: E0226 22:15:32.670972 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb63b582-88ea-454f-96bc-c676e35dd7f7-tls-secret podName:cb63b582-88ea-454f-96bc-c676e35dd7f7 nodeName:}" failed. No retries permitted until 2026-02-26 22:15:33.170956489 +0000 UTC m=+1218.250447030 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/cb63b582-88ea-454f-96bc-c676e35dd7f7-tls-secret") pod "cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" (UID: "cb63b582-88ea-454f-96bc-c676e35dd7f7") : secret "cloudkitty-lokistack-gateway-http" not found Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.671033 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/79645824-b55e-43e0-acc2-d0b64e9c7326-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mpzzf\" (UID: \"79645824-b55e-43e0-acc2-d0b64e9c7326\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.671120 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79645824-b55e-43e0-acc2-d0b64e9c7326-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mpzzf\" (UID: \"79645824-b55e-43e0-acc2-d0b64e9c7326\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.671195 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/cb63b582-88ea-454f-96bc-c676e35dd7f7-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-ljjvh\" (UID: \"cb63b582-88ea-454f-96bc-c676e35dd7f7\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.671258 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/cb63b582-88ea-454f-96bc-c676e35dd7f7-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-ljjvh\" (UID: \"cb63b582-88ea-454f-96bc-c676e35dd7f7\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.671292 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmhzg\" (UniqueName: \"kubernetes.io/projected/cb63b582-88ea-454f-96bc-c676e35dd7f7-kube-api-access-jmhzg\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-ljjvh\" (UID: \"cb63b582-88ea-454f-96bc-c676e35dd7f7\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.671316 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/cb63b582-88ea-454f-96bc-c676e35dd7f7-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-ljjvh\" (UID: \"cb63b582-88ea-454f-96bc-c676e35dd7f7\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.671942 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb63b582-88ea-454f-96bc-c676e35dd7f7-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-ljjvh\" (UID: \"cb63b582-88ea-454f-96bc-c676e35dd7f7\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:32 crc 
kubenswrapper[4910]: I0226 22:15:32.672255 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/cb63b582-88ea-454f-96bc-c676e35dd7f7-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-ljjvh\" (UID: \"cb63b582-88ea-454f-96bc-c676e35dd7f7\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.672269 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/79645824-b55e-43e0-acc2-d0b64e9c7326-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mpzzf\" (UID: \"79645824-b55e-43e0-acc2-d0b64e9c7326\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.672423 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb63b582-88ea-454f-96bc-c676e35dd7f7-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-ljjvh\" (UID: \"cb63b582-88ea-454f-96bc-c676e35dd7f7\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.672459 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/cb63b582-88ea-454f-96bc-c676e35dd7f7-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-ljjvh\" (UID: \"cb63b582-88ea-454f-96bc-c676e35dd7f7\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.672496 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/79645824-b55e-43e0-acc2-d0b64e9c7326-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mpzzf\" (UID: \"79645824-b55e-43e0-acc2-d0b64e9c7326\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.672518 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/79645824-b55e-43e0-acc2-d0b64e9c7326-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mpzzf\" (UID: \"79645824-b55e-43e0-acc2-d0b64e9c7326\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.672585 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/79645824-b55e-43e0-acc2-d0b64e9c7326-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mpzzf\" (UID: \"79645824-b55e-43e0-acc2-d0b64e9c7326\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.672647 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/cb63b582-88ea-454f-96bc-c676e35dd7f7-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-ljjvh\" (UID: \"cb63b582-88ea-454f-96bc-c676e35dd7f7\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.672681 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb63b582-88ea-454f-96bc-c676e35dd7f7-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-ljjvh\" (UID: \"cb63b582-88ea-454f-96bc-c676e35dd7f7\") " 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.673003 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb63b582-88ea-454f-96bc-c676e35dd7f7-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-ljjvh\" (UID: \"cb63b582-88ea-454f-96bc-c676e35dd7f7\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.673365 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb63b582-88ea-454f-96bc-c676e35dd7f7-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-ljjvh\" (UID: \"cb63b582-88ea-454f-96bc-c676e35dd7f7\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.673682 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/cb63b582-88ea-454f-96bc-c676e35dd7f7-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-ljjvh\" (UID: \"cb63b582-88ea-454f-96bc-c676e35dd7f7\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.675077 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/cb63b582-88ea-454f-96bc-c676e35dd7f7-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-ljjvh\" (UID: \"cb63b582-88ea-454f-96bc-c676e35dd7f7\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.676658 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: 
\"kubernetes.io/secret/cb63b582-88ea-454f-96bc-c676e35dd7f7-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-ljjvh\" (UID: \"cb63b582-88ea-454f-96bc-c676e35dd7f7\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.702719 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmhzg\" (UniqueName: \"kubernetes.io/projected/cb63b582-88ea-454f-96bc-c676e35dd7f7-kube-api-access-jmhzg\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-ljjvh\" (UID: \"cb63b582-88ea-454f-96bc-c676e35dd7f7\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.749743 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.773983 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mnwr\" (UniqueName: \"kubernetes.io/projected/79645824-b55e-43e0-acc2-d0b64e9c7326-kube-api-access-4mnwr\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mpzzf\" (UID: \"79645824-b55e-43e0-acc2-d0b64e9c7326\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.774032 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79645824-b55e-43e0-acc2-d0b64e9c7326-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mpzzf\" (UID: \"79645824-b55e-43e0-acc2-d0b64e9c7326\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.774083 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: 
\"kubernetes.io/secret/79645824-b55e-43e0-acc2-d0b64e9c7326-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mpzzf\" (UID: \"79645824-b55e-43e0-acc2-d0b64e9c7326\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.774108 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/79645824-b55e-43e0-acc2-d0b64e9c7326-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mpzzf\" (UID: \"79645824-b55e-43e0-acc2-d0b64e9c7326\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.774130 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79645824-b55e-43e0-acc2-d0b64e9c7326-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mpzzf\" (UID: \"79645824-b55e-43e0-acc2-d0b64e9c7326\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.774742 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/79645824-b55e-43e0-acc2-d0b64e9c7326-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mpzzf\" (UID: \"79645824-b55e-43e0-acc2-d0b64e9c7326\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.774795 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79645824-b55e-43e0-acc2-d0b64e9c7326-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mpzzf\" (UID: \"79645824-b55e-43e0-acc2-d0b64e9c7326\") " 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.774815 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/79645824-b55e-43e0-acc2-d0b64e9c7326-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mpzzf\" (UID: \"79645824-b55e-43e0-acc2-d0b64e9c7326\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.774845 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/79645824-b55e-43e0-acc2-d0b64e9c7326-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mpzzf\" (UID: \"79645824-b55e-43e0-acc2-d0b64e9c7326\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.775064 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79645824-b55e-43e0-acc2-d0b64e9c7326-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mpzzf\" (UID: \"79645824-b55e-43e0-acc2-d0b64e9c7326\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.776266 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/79645824-b55e-43e0-acc2-d0b64e9c7326-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mpzzf\" (UID: \"79645824-b55e-43e0-acc2-d0b64e9c7326\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.776585 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/79645824-b55e-43e0-acc2-d0b64e9c7326-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mpzzf\" (UID: \"79645824-b55e-43e0-acc2-d0b64e9c7326\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.776676 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79645824-b55e-43e0-acc2-d0b64e9c7326-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mpzzf\" (UID: \"79645824-b55e-43e0-acc2-d0b64e9c7326\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.777212 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/79645824-b55e-43e0-acc2-d0b64e9c7326-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mpzzf\" (UID: \"79645824-b55e-43e0-acc2-d0b64e9c7326\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.778230 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/79645824-b55e-43e0-acc2-d0b64e9c7326-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mpzzf\" (UID: \"79645824-b55e-43e0-acc2-d0b64e9c7326\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.778849 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/79645824-b55e-43e0-acc2-d0b64e9c7326-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mpzzf\" (UID: \"79645824-b55e-43e0-acc2-d0b64e9c7326\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:32 crc 
kubenswrapper[4910]: I0226 22:15:32.780788 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/79645824-b55e-43e0-acc2-d0b64e9c7326-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mpzzf\" (UID: \"79645824-b55e-43e0-acc2-d0b64e9c7326\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.797150 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mnwr\" (UniqueName: \"kubernetes.io/projected/79645824-b55e-43e0-acc2-d0b64e9c7326-kube-api-access-4mnwr\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mpzzf\" (UID: \"79645824-b55e-43e0-acc2-d0b64e9c7326\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:32 crc kubenswrapper[4910]: I0226 22:15:32.856952 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.186079 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/cb63b582-88ea-454f-96bc-c676e35dd7f7-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-ljjvh\" (UID: \"cb63b582-88ea-454f-96bc-c676e35dd7f7\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.190757 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/cb63b582-88ea-454f-96bc-c676e35dd7f7-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-ljjvh\" (UID: \"cb63b582-88ea-454f-96bc-c676e35dd7f7\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.308487 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 
26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.310347 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.312376 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-http" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.312665 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-grpc" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.317190 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.371403 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.373521 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.383063 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-grpc" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.383326 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-http" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.397662 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e40b05d-8071-4f6b-b2ab-160931200e8a\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.397714 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/3e40b05d-8071-4f6b-b2ab-160931200e8a-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e40b05d-8071-4f6b-b2ab-160931200e8a\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.397741 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e40b05d-8071-4f6b-b2ab-160931200e8a-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e40b05d-8071-4f6b-b2ab-160931200e8a\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.397764 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e40b05d-8071-4f6b-b2ab-160931200e8a\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.397788 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/3e40b05d-8071-4f6b-b2ab-160931200e8a-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e40b05d-8071-4f6b-b2ab-160931200e8a\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.397823 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/3e40b05d-8071-4f6b-b2ab-160931200e8a-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e40b05d-8071-4f6b-b2ab-160931200e8a\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 
26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.397921 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxvxj\" (UniqueName: \"kubernetes.io/projected/3e40b05d-8071-4f6b-b2ab-160931200e8a-kube-api-access-zxvxj\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e40b05d-8071-4f6b-b2ab-160931200e8a\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.398009 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e40b05d-8071-4f6b-b2ab-160931200e8a-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e40b05d-8071-4f6b-b2ab-160931200e8a\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.403664 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.430260 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.432548 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.435145 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.437343 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-http" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.437622 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-grpc" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.442417 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.499752 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxvxj\" (UniqueName: \"kubernetes.io/projected/3e40b05d-8071-4f6b-b2ab-160931200e8a-kube-api-access-zxvxj\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e40b05d-8071-4f6b-b2ab-160931200e8a\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.499812 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/a65c3337-4a1d-4ae8-abf6-862e34f280cb-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"a65c3337-4a1d-4ae8-abf6-862e34f280cb\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.499842 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e40b05d-8071-4f6b-b2ab-160931200e8a-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e40b05d-8071-4f6b-b2ab-160931200e8a\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.499881 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/a65c3337-4a1d-4ae8-abf6-862e34f280cb-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"a65c3337-4a1d-4ae8-abf6-862e34f280cb\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.499915 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e40b05d-8071-4f6b-b2ab-160931200e8a\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.499943 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/3e40b05d-8071-4f6b-b2ab-160931200e8a-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e40b05d-8071-4f6b-b2ab-160931200e8a\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.499972 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e40b05d-8071-4f6b-b2ab-160931200e8a-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e40b05d-8071-4f6b-b2ab-160931200e8a\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.499995 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/b6be9fb7-b7f0-4dc9-9470-b9675918d1d1-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6be9fb7-b7f0-4dc9-9470-b9675918d1d1\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.500024 4910 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/a65c3337-4a1d-4ae8-abf6-862e34f280cb-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"a65c3337-4a1d-4ae8-abf6-862e34f280cb\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.500050 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"a65c3337-4a1d-4ae8-abf6-862e34f280cb\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.500073 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e40b05d-8071-4f6b-b2ab-160931200e8a\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.500099 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/3e40b05d-8071-4f6b-b2ab-160931200e8a-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e40b05d-8071-4f6b-b2ab-160931200e8a\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.500124 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frtmj\" (UniqueName: \"kubernetes.io/projected/b6be9fb7-b7f0-4dc9-9470-b9675918d1d1-kube-api-access-frtmj\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6be9fb7-b7f0-4dc9-9470-b9675918d1d1\") " 
pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.500151 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a65c3337-4a1d-4ae8-abf6-862e34f280cb-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"a65c3337-4a1d-4ae8-abf6-862e34f280cb\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.500261 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/3e40b05d-8071-4f6b-b2ab-160931200e8a-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e40b05d-8071-4f6b-b2ab-160931200e8a\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.500292 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpn5p\" (UniqueName: \"kubernetes.io/projected/a65c3337-4a1d-4ae8-abf6-862e34f280cb-kube-api-access-dpn5p\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"a65c3337-4a1d-4ae8-abf6-862e34f280cb\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.500317 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/a65c3337-4a1d-4ae8-abf6-862e34f280cb-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"a65c3337-4a1d-4ae8-abf6-862e34f280cb\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.500342 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6be9fb7-b7f0-4dc9-9470-b9675918d1d1-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6be9fb7-b7f0-4dc9-9470-b9675918d1d1\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.500388 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6be9fb7-b7f0-4dc9-9470-b9675918d1d1\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.500441 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6be9fb7-b7f0-4dc9-9470-b9675918d1d1-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6be9fb7-b7f0-4dc9-9470-b9675918d1d1\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.500512 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/b6be9fb7-b7f0-4dc9-9470-b9675918d1d1-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6be9fb7-b7f0-4dc9-9470-b9675918d1d1\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.500535 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/b6be9fb7-b7f0-4dc9-9470-b9675918d1d1-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6be9fb7-b7f0-4dc9-9470-b9675918d1d1\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 22:15:33 crc 
kubenswrapper[4910]: I0226 22:15:33.501911 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e40b05d-8071-4f6b-b2ab-160931200e8a-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e40b05d-8071-4f6b-b2ab-160931200e8a\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.502012 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e40b05d-8071-4f6b-b2ab-160931200e8a-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e40b05d-8071-4f6b-b2ab-160931200e8a\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.502580 4910 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e40b05d-8071-4f6b-b2ab-160931200e8a\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.502795 4910 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e40b05d-8071-4f6b-b2ab-160931200e8a\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.504068 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/3e40b05d-8071-4f6b-b2ab-160931200e8a-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e40b05d-8071-4f6b-b2ab-160931200e8a\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 22:15:33 
crc kubenswrapper[4910]: I0226 22:15:33.506745 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/3e40b05d-8071-4f6b-b2ab-160931200e8a-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e40b05d-8071-4f6b-b2ab-160931200e8a\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.515989 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/3e40b05d-8071-4f6b-b2ab-160931200e8a-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e40b05d-8071-4f6b-b2ab-160931200e8a\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.517627 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxvxj\" (UniqueName: \"kubernetes.io/projected/3e40b05d-8071-4f6b-b2ab-160931200e8a-kube-api-access-zxvxj\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e40b05d-8071-4f6b-b2ab-160931200e8a\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.526537 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e40b05d-8071-4f6b-b2ab-160931200e8a\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.531203 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e40b05d-8071-4f6b-b2ab-160931200e8a\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 22:15:33 crc kubenswrapper[4910]: 
I0226 22:15:33.620094 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6be9fb7-b7f0-4dc9-9470-b9675918d1d1-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6be9fb7-b7f0-4dc9-9470-b9675918d1d1\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.620206 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/b6be9fb7-b7f0-4dc9-9470-b9675918d1d1-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6be9fb7-b7f0-4dc9-9470-b9675918d1d1\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.620257 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/b6be9fb7-b7f0-4dc9-9470-b9675918d1d1-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6be9fb7-b7f0-4dc9-9470-b9675918d1d1\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.620300 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/a65c3337-4a1d-4ae8-abf6-862e34f280cb-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"a65c3337-4a1d-4ae8-abf6-862e34f280cb\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.620329 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a65c3337-4a1d-4ae8-abf6-862e34f280cb-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"a65c3337-4a1d-4ae8-abf6-862e34f280cb\") " 
pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.620358 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/b6be9fb7-b7f0-4dc9-9470-b9675918d1d1-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6be9fb7-b7f0-4dc9-9470-b9675918d1d1\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.620373 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/a65c3337-4a1d-4ae8-abf6-862e34f280cb-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"a65c3337-4a1d-4ae8-abf6-862e34f280cb\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.620400 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"a65c3337-4a1d-4ae8-abf6-862e34f280cb\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.620426 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frtmj\" (UniqueName: \"kubernetes.io/projected/b6be9fb7-b7f0-4dc9-9470-b9675918d1d1-kube-api-access-frtmj\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6be9fb7-b7f0-4dc9-9470-b9675918d1d1\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.620454 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a65c3337-4a1d-4ae8-abf6-862e34f280cb-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"a65c3337-4a1d-4ae8-abf6-862e34f280cb\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.620493 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpn5p\" (UniqueName: \"kubernetes.io/projected/a65c3337-4a1d-4ae8-abf6-862e34f280cb-kube-api-access-dpn5p\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"a65c3337-4a1d-4ae8-abf6-862e34f280cb\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.620514 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/a65c3337-4a1d-4ae8-abf6-862e34f280cb-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"a65c3337-4a1d-4ae8-abf6-862e34f280cb\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.620531 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6be9fb7-b7f0-4dc9-9470-b9675918d1d1-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6be9fb7-b7f0-4dc9-9470-b9675918d1d1\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.620551 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6be9fb7-b7f0-4dc9-9470-b9675918d1d1\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.620720 4910 operation_generator.go:580] "MountVolume.MountDevice 
succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6be9fb7-b7f0-4dc9-9470-b9675918d1d1\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.626462 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/a65c3337-4a1d-4ae8-abf6-862e34f280cb-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"a65c3337-4a1d-4ae8-abf6-862e34f280cb\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.627112 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a65c3337-4a1d-4ae8-abf6-862e34f280cb-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"a65c3337-4a1d-4ae8-abf6-862e34f280cb\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.629190 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6be9fb7-b7f0-4dc9-9470-b9675918d1d1-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6be9fb7-b7f0-4dc9-9470-b9675918d1d1\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.631413 4910 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"a65c3337-4a1d-4ae8-abf6-862e34f280cb\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 22:15:33 crc kubenswrapper[4910]: 
I0226 22:15:33.632084 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/b6be9fb7-b7f0-4dc9-9470-b9675918d1d1-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6be9fb7-b7f0-4dc9-9470-b9675918d1d1\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.632323 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a65c3337-4a1d-4ae8-abf6-862e34f280cb-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"a65c3337-4a1d-4ae8-abf6-862e34f280cb\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.634929 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6be9fb7-b7f0-4dc9-9470-b9675918d1d1-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6be9fb7-b7f0-4dc9-9470-b9675918d1d1\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.644210 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/a65c3337-4a1d-4ae8-abf6-862e34f280cb-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"a65c3337-4a1d-4ae8-abf6-862e34f280cb\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.644861 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/b6be9fb7-b7f0-4dc9-9470-b9675918d1d1-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6be9fb7-b7f0-4dc9-9470-b9675918d1d1\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 
22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.652264 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.658719 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/a65c3337-4a1d-4ae8-abf6-862e34f280cb-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"a65c3337-4a1d-4ae8-abf6-862e34f280cb\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.659256 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/b6be9fb7-b7f0-4dc9-9470-b9675918d1d1-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6be9fb7-b7f0-4dc9-9470-b9675918d1d1\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.666244 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frtmj\" (UniqueName: \"kubernetes.io/projected/b6be9fb7-b7f0-4dc9-9470-b9675918d1d1-kube-api-access-frtmj\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6be9fb7-b7f0-4dc9-9470-b9675918d1d1\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.679798 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"a65c3337-4a1d-4ae8-abf6-862e34f280cb\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.679934 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpn5p\" 
(UniqueName: \"kubernetes.io/projected/a65c3337-4a1d-4ae8-abf6-862e34f280cb-kube-api-access-dpn5p\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"a65c3337-4a1d-4ae8-abf6-862e34f280cb\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.706386 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6be9fb7-b7f0-4dc9-9470-b9675918d1d1\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.769528 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 22:15:33 crc kubenswrapper[4910]: I0226 22:15:33.997575 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 22:15:34 crc kubenswrapper[4910]: E0226 22:15:34.855639 4910 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 26 22:15:34 crc kubenswrapper[4910]: E0226 22:15:34.856347 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 
30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x94w7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerR
esizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(f98f3d3a-39ee-4b35-8653-ae334df58fca): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 22:15:34 crc kubenswrapper[4910]: E0226 22:15:34.857590 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="f98f3d3a-39ee-4b35-8653-ae334df58fca" Feb 26 22:15:35 crc kubenswrapper[4910]: E0226 22:15:35.492883 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="f98f3d3a-39ee-4b35-8653-ae334df58fca" Feb 26 22:15:43 crc kubenswrapper[4910]: I0226 22:15:43.812821 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 26 22:15:44 crc kubenswrapper[4910]: E0226 22:15:44.224851 4910 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 26 22:15:44 crc kubenswrapper[4910]: E0226 22:15:44.225063 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sb8cv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-g79xb_openstack(90d460eb-2bb4-4c1b-a89d-04d1313985f4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 22:15:44 crc kubenswrapper[4910]: E0226 22:15:44.226358 4910 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-g79xb" podUID="90d460eb-2bb4-4c1b-a89d-04d1313985f4" Feb 26 22:15:44 crc kubenswrapper[4910]: E0226 22:15:44.249660 4910 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 26 22:15:44 crc kubenswrapper[4910]: E0226 22:15:44.250115 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4s6gz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-tkzht_openstack(648c2ee7-6abc-404d-b181-c9f6047ef56e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 22:15:44 crc kubenswrapper[4910]: E0226 22:15:44.251244 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-tkzht" podUID="648c2ee7-6abc-404d-b181-c9f6047ef56e" Feb 26 22:15:44 crc kubenswrapper[4910]: E0226 22:15:44.255700 4910 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 26 22:15:44 crc kubenswrapper[4910]: E0226 22:15:44.255909 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k9lgb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPoli
cy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-gjwx5_openstack(e1dd3507-57fc-4def-b3ef-41ce5a23f786): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 22:15:44 crc kubenswrapper[4910]: E0226 22:15:44.257867 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-gjwx5" podUID="e1dd3507-57fc-4def-b3ef-41ce5a23f786" Feb 26 22:15:44 crc kubenswrapper[4910]: E0226 22:15:44.280473 4910 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 26 22:15:44 crc kubenswrapper[4910]: E0226 22:15:44.280904 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9dltr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-bkf76_openstack(3feed117-9b62-4ea3-8c99-b588473c5042): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 22:15:44 crc kubenswrapper[4910]: E0226 22:15:44.282115 4910 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-bkf76" podUID="3feed117-9b62-4ea3-8c99-b588473c5042" Feb 26 22:15:44 crc kubenswrapper[4910]: I0226 22:15:44.587504 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"60fb0251-1bd0-4e06-a368-5aceb0afaa87","Type":"ContainerStarted","Data":"c8ce1e6910b073a4c80cf03b819c2bdbd225120e1803b2704081a9250b20d3cb"} Feb 26 22:15:44 crc kubenswrapper[4910]: E0226 22:15:44.589213 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-gjwx5" podUID="e1dd3507-57fc-4def-b3ef-41ce5a23f786" Feb 26 22:15:44 crc kubenswrapper[4910]: E0226 22:15:44.594795 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-g79xb" podUID="90d460eb-2bb4-4c1b-a89d-04d1313985f4" Feb 26 22:15:44 crc kubenswrapper[4910]: I0226 22:15:44.802788 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 22:15:44 crc kubenswrapper[4910]: I0226 22:15:44.891306 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.015495 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf"] Feb 26 22:15:45 crc kubenswrapper[4910]: W0226 22:15:45.199656 4910 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79645824_b55e_43e0_acc2_d0b64e9c7326.slice/crio-d2e1b777edc18a2629393dfd2b024bf136713a3c9d6e2e1065d9fdd13ed6bcc5 WatchSource:0}: Error finding container d2e1b777edc18a2629393dfd2b024bf136713a3c9d6e2e1065d9fdd13ed6bcc5: Status 404 returned error can't find the container with id d2e1b777edc18a2629393dfd2b024bf136713a3c9d6e2e1065d9fdd13ed6bcc5 Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.215927 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tkzht" Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.237076 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-bkf76" Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.289439 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s6gz\" (UniqueName: \"kubernetes.io/projected/648c2ee7-6abc-404d-b181-c9f6047ef56e-kube-api-access-4s6gz\") pod \"648c2ee7-6abc-404d-b181-c9f6047ef56e\" (UID: \"648c2ee7-6abc-404d-b181-c9f6047ef56e\") " Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.289673 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/648c2ee7-6abc-404d-b181-c9f6047ef56e-config\") pod \"648c2ee7-6abc-404d-b181-c9f6047ef56e\" (UID: \"648c2ee7-6abc-404d-b181-c9f6047ef56e\") " Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.290818 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/648c2ee7-6abc-404d-b181-c9f6047ef56e-config" (OuterVolumeSpecName: "config") pod "648c2ee7-6abc-404d-b181-c9f6047ef56e" (UID: "648c2ee7-6abc-404d-b181-c9f6047ef56e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.391910 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dltr\" (UniqueName: \"kubernetes.io/projected/3feed117-9b62-4ea3-8c99-b588473c5042-kube-api-access-9dltr\") pod \"3feed117-9b62-4ea3-8c99-b588473c5042\" (UID: \"3feed117-9b62-4ea3-8c99-b588473c5042\") " Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.391998 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3feed117-9b62-4ea3-8c99-b588473c5042-config\") pod \"3feed117-9b62-4ea3-8c99-b588473c5042\" (UID: \"3feed117-9b62-4ea3-8c99-b588473c5042\") " Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.392193 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3feed117-9b62-4ea3-8c99-b588473c5042-dns-svc\") pod \"3feed117-9b62-4ea3-8c99-b588473c5042\" (UID: \"3feed117-9b62-4ea3-8c99-b588473c5042\") " Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.392561 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/648c2ee7-6abc-404d-b181-c9f6047ef56e-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.392977 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3feed117-9b62-4ea3-8c99-b588473c5042-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3feed117-9b62-4ea3-8c99-b588473c5042" (UID: "3feed117-9b62-4ea3-8c99-b588473c5042"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.395101 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3feed117-9b62-4ea3-8c99-b588473c5042-config" (OuterVolumeSpecName: "config") pod "3feed117-9b62-4ea3-8c99-b588473c5042" (UID: "3feed117-9b62-4ea3-8c99-b588473c5042"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.416344 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3feed117-9b62-4ea3-8c99-b588473c5042-kube-api-access-9dltr" (OuterVolumeSpecName: "kube-api-access-9dltr") pod "3feed117-9b62-4ea3-8c99-b588473c5042" (UID: "3feed117-9b62-4ea3-8c99-b588473c5042"). InnerVolumeSpecName "kube-api-access-9dltr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.416426 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.416437 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/648c2ee7-6abc-404d-b181-c9f6047ef56e-kube-api-access-4s6gz" (OuterVolumeSpecName: "kube-api-access-4s6gz") pod "648c2ee7-6abc-404d-b181-c9f6047ef56e" (UID: "648c2ee7-6abc-404d-b181-c9f6047ef56e"). InnerVolumeSpecName "kube-api-access-4s6gz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.423473 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.436710 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh"] Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.497116 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-5lzsg"] Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.499198 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s6gz\" (UniqueName: \"kubernetes.io/projected/648c2ee7-6abc-404d-b181-c9f6047ef56e-kube-api-access-4s6gz\") on node \"crc\" DevicePath \"\"" Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.499219 4910 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3feed117-9b62-4ea3-8c99-b588473c5042-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.499231 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dltr\" (UniqueName: \"kubernetes.io/projected/3feed117-9b62-4ea3-8c99-b588473c5042-kube-api-access-9dltr\") on node \"crc\" DevicePath \"\"" Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.499239 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3feed117-9b62-4ea3-8c99-b588473c5042-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.509485 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.529542 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc"] 
Feb 26 22:15:45 crc kubenswrapper[4910]: W0226 22:15:45.549583 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6be9fb7_b7f0_4dc9_9470_b9675918d1d1.slice/crio-5d2c00fd45c1d0a97c245e3eac92ba63298f28a8454840d1926f5dab96d4fff9 WatchSource:0}: Error finding container 5d2c00fd45c1d0a97c245e3eac92ba63298f28a8454840d1926f5dab96d4fff9: Status 404 returned error can't find the container with id 5d2c00fd45c1d0a97c245e3eac92ba63298f28a8454840d1926f5dab96d4fff9 Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.556413 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.577074 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 26 22:15:45 crc kubenswrapper[4910]: W0226 22:15:45.587543 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod993d51de_20a2_4cee_856c_f0cbb1b0307d.slice/crio-40d8a5e913deeb6b18d1c0892d27ff2eef72f80ef3bfe161fb606da8a389343e WatchSource:0}: Error finding container 40d8a5e913deeb6b18d1c0892d27ff2eef72f80ef3bfe161fb606da8a389343e: Status 404 returned error can't find the container with id 40d8a5e913deeb6b18d1c0892d27ff2eef72f80ef3bfe161fb606da8a389343e Feb 26 22:15:45 crc kubenswrapper[4910]: W0226 22:15:45.588675 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e40b05d_8071_4f6b_b2ab_160931200e8a.slice/crio-e800af0d9fbfcbc56d8f6707ab0b551ba732952ca97c00eb87afc4c71f214957 WatchSource:0}: Error finding container e800af0d9fbfcbc56d8f6707ab0b551ba732952ca97c00eb87afc4c71f214957: Status 404 returned error can't find the container with id e800af0d9fbfcbc56d8f6707ab0b551ba732952ca97c00eb87afc4c71f214957 Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 
22:15:45.590139 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-zps74"] Feb 26 22:15:45 crc kubenswrapper[4910]: W0226 22:15:45.590292 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda65c3337_4a1d_4ae8_abf6_862e34f280cb.slice/crio-841bf0f19f2bf78aedd62ee024317908a39c11d3c2fe67bfc5d8b869ef23a1ac WatchSource:0}: Error finding container 841bf0f19f2bf78aedd62ee024317908a39c11d3c2fe67bfc5d8b869ef23a1ac: Status 404 returned error can't find the container with id 841bf0f19f2bf78aedd62ee024317908a39c11d3c2fe67bfc5d8b869ef23a1ac Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.599074 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.611545 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"25f6d388-f925-4c92-9298-a454dd536aa6","Type":"ContainerStarted","Data":"41b131529ffa064f1de0159ae63586f073e285650f2281f8c99e4ab1b421f943"} Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.615892 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.620258 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"b6be9fb7-b7f0-4dc9-9470-b9675918d1d1","Type":"ContainerStarted","Data":"5d2c00fd45c1d0a97c245e3eac92ba63298f28a8454840d1926f5dab96d4fff9"} Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.626377 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-tkzht" event={"ID":"648c2ee7-6abc-404d-b181-c9f6047ef56e","Type":"ContainerDied","Data":"797dd7276a85540ba1df5ee760138064484c0005a2fce36f50bdc25ca4c2e883"} Feb 26 22:15:45 crc kubenswrapper[4910]: 
I0226 22:15:45.626447 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tkzht" Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.631383 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ddsmc"] Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.632098 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5ef2ec31-3ae1-42ea-aaa5-5fec166df179","Type":"ContainerStarted","Data":"fe54533c4c9725881f318acab1d14215d07cc52afd3806d1e042d8fa5fd6b4d1"} Feb 26 22:15:45 crc kubenswrapper[4910]: E0226 22:15:45.643723 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n69h5dhcdh5dbh686hbdh97h5f8h675h95h5cbh649hb8h545hdbh67ch65ch56hb8h8h6dh5c4h587h57bh9bh556hffh78h57h584hc5hbcq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lpx76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},I
nitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ddsmc_openstack(334061f5-f54a-41b2-8c49-66695cb3639a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.643886 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f3994e44-ac9f-4f93-97cf-9ad02cdc61e6","Type":"ContainerStarted","Data":"d9c89c8476cd9abc686b6e2eb6f171a0cdf092096734489d9e2f0b0a05a19741"} Feb 26 22:15:45 crc kubenswrapper[4910]: E0226 22:15:45.644870 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/ovn-controller-ddsmc" podUID="334061f5-f54a-41b2-8c49-66695cb3639a" Feb 26 22:15:45 crc 
kubenswrapper[4910]: I0226 22:15:45.645143 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-6tz7l"] Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.647677 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2f98425b-65de-48d2-be21-2c443218eacd","Type":"ContainerStarted","Data":"19555b113776243ace5d71e3f3e5aff3a23663e74fba2032689dbc7a10d92838"} Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.657260 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"17efae9f-593e-4c9f-8803-9090fff6e616","Type":"ContainerStarted","Data":"c57d03917716b8762ae50544ece7107f0968465776d573e16ca95ba70be07ec4"} Feb 26 22:15:45 crc kubenswrapper[4910]: E0226 22:15:45.659001 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:ovsdb-server-init,Image:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n69h5dhcdh5dbh686hbdh97h5f8h675h95h5cbh649hb8h545hdbh67ch65ch56hb8h8h6dh5c4h587h57bh9bh556hffh78h57h584hc5hbcq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,
MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gqdn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-6tz7l_openstack(3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.669722 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" event={"ID":"cb63b582-88ea-454f-96bc-c676e35dd7f7","Type":"ContainerStarted","Data":"4dd6b290b3c4185f38dc6dad414b917ab5a2d5939e163be2077f01cefe2fb066"} Feb 26 22:15:45 crc kubenswrapper[4910]: E0226 22:15:45.669589 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/ovn-controller-ovs-6tz7l" podUID="3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed" Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.674331 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"9e2f6ce8-eda1-4196-954a-3367ccc66e33","Type":"ContainerStarted","Data":"98ffc4c1b735f2c72b633f4da993ba9dc4d03b8355af08c360c905e9a0e689b3"} Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.676142 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc" event={"ID":"50fe0fae-cfef-4a5c-9b5c-0c09065c72ed","Type":"ContainerStarted","Data":"a90205b653d9d3bbeb018030b610245f1c2e699b0e609d009437db37ed2501c1"} Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.677510 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"cb62085e-02e8-4670-8ff4-dc1a7b242eb8","Type":"ContainerStarted","Data":"fb6623db18ad0155a1fe5fbbfb2f7a8860f0826f0b88b4719726423215afe4fa"} Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.678786 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-bkf76" event={"ID":"3feed117-9b62-4ea3-8c99-b588473c5042","Type":"ContainerDied","Data":"3fe1ce8e35fa13cd88a78e15be8ec9ddc07e8a0f21f6bbb34ed850de9be514ac"} Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.678857 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-bkf76" Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.681583 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-5lzsg" event={"ID":"961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7","Type":"ContainerStarted","Data":"bd55391e161f3c64c2db403b31c7ad2cd24275feed82fe566f28126bbef304c7"} Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.682830 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" event={"ID":"79645824-b55e-43e0-acc2-d0b64e9c7326","Type":"ContainerStarted","Data":"d2e1b777edc18a2629393dfd2b024bf136713a3c9d6e2e1065d9fdd13ed6bcc5"} Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.749547 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-bkf76"] Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.774961 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-bkf76"] Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.798309 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tkzht"] Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.810484 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tkzht"] Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.921804 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3feed117-9b62-4ea3-8c99-b588473c5042" path="/var/lib/kubelet/pods/3feed117-9b62-4ea3-8c99-b588473c5042/volumes" Feb 26 22:15:45 crc kubenswrapper[4910]: I0226 22:15:45.922173 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="648c2ee7-6abc-404d-b181-c9f6047ef56e" path="/var/lib/kubelet/pods/648c2ee7-6abc-404d-b181-c9f6047ef56e/volumes" Feb 26 22:15:46 crc kubenswrapper[4910]: I0226 22:15:46.694044 4910 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ovn-controller-ddsmc" event={"ID":"334061f5-f54a-41b2-8c49-66695cb3639a","Type":"ContainerStarted","Data":"6e0530095c7ee47592623fbb8d5be5c1f8bf04a348d61c87e8700e81b7aba4d6"} Feb 26 22:15:46 crc kubenswrapper[4910]: E0226 22:15:46.697841 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-ddsmc" podUID="334061f5-f54a-41b2-8c49-66695cb3639a" Feb 26 22:15:46 crc kubenswrapper[4910]: I0226 22:15:46.698187 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zps74" event={"ID":"993d51de-20a2-4cee-856c-f0cbb1b0307d","Type":"ContainerStarted","Data":"40d8a5e913deeb6b18d1c0892d27ff2eef72f80ef3bfe161fb606da8a389343e"} Feb 26 22:15:46 crc kubenswrapper[4910]: I0226 22:15:46.699805 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"3e40b05d-8071-4f6b-b2ab-160931200e8a","Type":"ContainerStarted","Data":"e800af0d9fbfcbc56d8f6707ab0b551ba732952ca97c00eb87afc4c71f214957"} Feb 26 22:15:46 crc kubenswrapper[4910]: I0226 22:15:46.702134 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"48cec592-3a36-46fc-813d-bf8fa5212e89","Type":"ContainerStarted","Data":"d0d0a196bfe2898994596352c3c4f18f6c775b6517621b2152a31f6037ee7d70"} Feb 26 22:15:46 crc kubenswrapper[4910]: I0226 22:15:46.704888 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"25f6d388-f925-4c92-9298-a454dd536aa6","Type":"ContainerStarted","Data":"a14b162e1597cd4570f6e7dced42186a0ec7790c9e5dd5bf633a20138e043c4f"} Feb 26 22:15:46 crc kubenswrapper[4910]: I0226 22:15:46.707174 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"a65c3337-4a1d-4ae8-abf6-862e34f280cb","Type":"ContainerStarted","Data":"841bf0f19f2bf78aedd62ee024317908a39c11d3c2fe67bfc5d8b869ef23a1ac"} Feb 26 22:15:46 crc kubenswrapper[4910]: I0226 22:15:46.709043 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6tz7l" event={"ID":"3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed","Type":"ContainerStarted","Data":"da6c2feef650f205e3fc8e856c40a9f642426a73e6b6ee1b75140644746c5e6a"} Feb 26 22:15:46 crc kubenswrapper[4910]: E0226 22:15:46.711086 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified\\\"\"" pod="openstack/ovn-controller-ovs-6tz7l" podUID="3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed" Feb 26 22:15:47 crc kubenswrapper[4910]: E0226 22:15:47.721094 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-ddsmc" podUID="334061f5-f54a-41b2-8c49-66695cb3639a" Feb 26 22:15:47 crc kubenswrapper[4910]: E0226 22:15:47.721139 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified\\\"\"" pod="openstack/ovn-controller-ovs-6tz7l" podUID="3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed" Feb 26 22:15:48 crc kubenswrapper[4910]: I0226 22:15:48.728015 4910 generic.go:334] "Generic (PLEG): container finished" podID="f3994e44-ac9f-4f93-97cf-9ad02cdc61e6" containerID="d9c89c8476cd9abc686b6e2eb6f171a0cdf092096734489d9e2f0b0a05a19741" exitCode=0 Feb 26 22:15:48 crc kubenswrapper[4910]: 
I0226 22:15:48.728055 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f3994e44-ac9f-4f93-97cf-9ad02cdc61e6","Type":"ContainerDied","Data":"d9c89c8476cd9abc686b6e2eb6f171a0cdf092096734489d9e2f0b0a05a19741"} Feb 26 22:15:49 crc kubenswrapper[4910]: I0226 22:15:49.739440 4910 generic.go:334] "Generic (PLEG): container finished" podID="25f6d388-f925-4c92-9298-a454dd536aa6" containerID="a14b162e1597cd4570f6e7dced42186a0ec7790c9e5dd5bf633a20138e043c4f" exitCode=0 Feb 26 22:15:49 crc kubenswrapper[4910]: I0226 22:15:49.739513 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"25f6d388-f925-4c92-9298-a454dd536aa6","Type":"ContainerDied","Data":"a14b162e1597cd4570f6e7dced42186a0ec7790c9e5dd5bf633a20138e043c4f"} Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.391565 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-hr48l"] Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.393126 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-hr48l" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.397422 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.406047 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-hr48l"] Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.414853 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d24pg\" (UniqueName: \"kubernetes.io/projected/1fb71366-0580-4adf-91c1-d0177014808b-kube-api-access-d24pg\") pod \"ovn-controller-metrics-hr48l\" (UID: \"1fb71366-0580-4adf-91c1-d0177014808b\") " pod="openstack/ovn-controller-metrics-hr48l" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.415036 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fb71366-0580-4adf-91c1-d0177014808b-combined-ca-bundle\") pod \"ovn-controller-metrics-hr48l\" (UID: \"1fb71366-0580-4adf-91c1-d0177014808b\") " pod="openstack/ovn-controller-metrics-hr48l" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.415090 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fb71366-0580-4adf-91c1-d0177014808b-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hr48l\" (UID: \"1fb71366-0580-4adf-91c1-d0177014808b\") " pod="openstack/ovn-controller-metrics-hr48l" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.415118 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1fb71366-0580-4adf-91c1-d0177014808b-ovn-rundir\") pod \"ovn-controller-metrics-hr48l\" (UID: 
\"1fb71366-0580-4adf-91c1-d0177014808b\") " pod="openstack/ovn-controller-metrics-hr48l" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.415808 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fb71366-0580-4adf-91c1-d0177014808b-config\") pod \"ovn-controller-metrics-hr48l\" (UID: \"1fb71366-0580-4adf-91c1-d0177014808b\") " pod="openstack/ovn-controller-metrics-hr48l" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.416061 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1fb71366-0580-4adf-91c1-d0177014808b-ovs-rundir\") pod \"ovn-controller-metrics-hr48l\" (UID: \"1fb71366-0580-4adf-91c1-d0177014808b\") " pod="openstack/ovn-controller-metrics-hr48l" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.519780 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1fb71366-0580-4adf-91c1-d0177014808b-ovs-rundir\") pod \"ovn-controller-metrics-hr48l\" (UID: \"1fb71366-0580-4adf-91c1-d0177014808b\") " pod="openstack/ovn-controller-metrics-hr48l" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.520191 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d24pg\" (UniqueName: \"kubernetes.io/projected/1fb71366-0580-4adf-91c1-d0177014808b-kube-api-access-d24pg\") pod \"ovn-controller-metrics-hr48l\" (UID: \"1fb71366-0580-4adf-91c1-d0177014808b\") " pod="openstack/ovn-controller-metrics-hr48l" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.520226 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fb71366-0580-4adf-91c1-d0177014808b-combined-ca-bundle\") pod \"ovn-controller-metrics-hr48l\" (UID: 
\"1fb71366-0580-4adf-91c1-d0177014808b\") " pod="openstack/ovn-controller-metrics-hr48l" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.520251 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fb71366-0580-4adf-91c1-d0177014808b-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hr48l\" (UID: \"1fb71366-0580-4adf-91c1-d0177014808b\") " pod="openstack/ovn-controller-metrics-hr48l" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.520280 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1fb71366-0580-4adf-91c1-d0177014808b-ovn-rundir\") pod \"ovn-controller-metrics-hr48l\" (UID: \"1fb71366-0580-4adf-91c1-d0177014808b\") " pod="openstack/ovn-controller-metrics-hr48l" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.520299 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1fb71366-0580-4adf-91c1-d0177014808b-ovs-rundir\") pod \"ovn-controller-metrics-hr48l\" (UID: \"1fb71366-0580-4adf-91c1-d0177014808b\") " pod="openstack/ovn-controller-metrics-hr48l" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.520454 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fb71366-0580-4adf-91c1-d0177014808b-config\") pod \"ovn-controller-metrics-hr48l\" (UID: \"1fb71366-0580-4adf-91c1-d0177014808b\") " pod="openstack/ovn-controller-metrics-hr48l" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.521531 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fb71366-0580-4adf-91c1-d0177014808b-config\") pod \"ovn-controller-metrics-hr48l\" (UID: \"1fb71366-0580-4adf-91c1-d0177014808b\") " pod="openstack/ovn-controller-metrics-hr48l" Feb 26 
22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.521556 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1fb71366-0580-4adf-91c1-d0177014808b-ovn-rundir\") pod \"ovn-controller-metrics-hr48l\" (UID: \"1fb71366-0580-4adf-91c1-d0177014808b\") " pod="openstack/ovn-controller-metrics-hr48l" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.526319 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fb71366-0580-4adf-91c1-d0177014808b-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hr48l\" (UID: \"1fb71366-0580-4adf-91c1-d0177014808b\") " pod="openstack/ovn-controller-metrics-hr48l" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.526913 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fb71366-0580-4adf-91c1-d0177014808b-combined-ca-bundle\") pod \"ovn-controller-metrics-hr48l\" (UID: \"1fb71366-0580-4adf-91c1-d0177014808b\") " pod="openstack/ovn-controller-metrics-hr48l" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.537369 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d24pg\" (UniqueName: \"kubernetes.io/projected/1fb71366-0580-4adf-91c1-d0177014808b-kube-api-access-d24pg\") pod \"ovn-controller-metrics-hr48l\" (UID: \"1fb71366-0580-4adf-91c1-d0177014808b\") " pod="openstack/ovn-controller-metrics-hr48l" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.558030 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-g79xb"] Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.598825 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-mxtrr"] Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.601657 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-mxtrr" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.607065 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.628792 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ede30cd-328a-42f0-94e2-951689df8a77-config\") pod \"dnsmasq-dns-5bf47b49b7-mxtrr\" (UID: \"0ede30cd-328a-42f0-94e2-951689df8a77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mxtrr" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.649990 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ede30cd-328a-42f0-94e2-951689df8a77-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-mxtrr\" (UID: \"0ede30cd-328a-42f0-94e2-951689df8a77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mxtrr" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.651479 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ede30cd-328a-42f0-94e2-951689df8a77-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-mxtrr\" (UID: \"0ede30cd-328a-42f0-94e2-951689df8a77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mxtrr" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.651774 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj6qr\" (UniqueName: \"kubernetes.io/projected/0ede30cd-328a-42f0-94e2-951689df8a77-kube-api-access-vj6qr\") pod \"dnsmasq-dns-5bf47b49b7-mxtrr\" (UID: \"0ede30cd-328a-42f0-94e2-951689df8a77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mxtrr" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.679255 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-5bf47b49b7-mxtrr"] Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.730323 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-hr48l" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.753124 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ede30cd-328a-42f0-94e2-951689df8a77-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-mxtrr\" (UID: \"0ede30cd-328a-42f0-94e2-951689df8a77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mxtrr" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.753191 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ede30cd-328a-42f0-94e2-951689df8a77-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-mxtrr\" (UID: \"0ede30cd-328a-42f0-94e2-951689df8a77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mxtrr" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.753240 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj6qr\" (UniqueName: \"kubernetes.io/projected/0ede30cd-328a-42f0-94e2-951689df8a77-kube-api-access-vj6qr\") pod \"dnsmasq-dns-5bf47b49b7-mxtrr\" (UID: \"0ede30cd-328a-42f0-94e2-951689df8a77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mxtrr" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.753313 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ede30cd-328a-42f0-94e2-951689df8a77-config\") pod \"dnsmasq-dns-5bf47b49b7-mxtrr\" (UID: \"0ede30cd-328a-42f0-94e2-951689df8a77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mxtrr" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.754114 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0ede30cd-328a-42f0-94e2-951689df8a77-config\") pod \"dnsmasq-dns-5bf47b49b7-mxtrr\" (UID: \"0ede30cd-328a-42f0-94e2-951689df8a77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mxtrr" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.754605 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ede30cd-328a-42f0-94e2-951689df8a77-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-mxtrr\" (UID: \"0ede30cd-328a-42f0-94e2-951689df8a77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mxtrr" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.755068 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ede30cd-328a-42f0-94e2-951689df8a77-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-mxtrr\" (UID: \"0ede30cd-328a-42f0-94e2-951689df8a77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mxtrr" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.913051 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj6qr\" (UniqueName: \"kubernetes.io/projected/0ede30cd-328a-42f0-94e2-951689df8a77-kube-api-access-vj6qr\") pod \"dnsmasq-dns-5bf47b49b7-mxtrr\" (UID: \"0ede30cd-328a-42f0-94e2-951689df8a77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mxtrr" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.959044 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-mxtrr" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.968636 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gjwx5"] Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.968880 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-pqvdk"] Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.972812 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-pqvdk"] Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.975449 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-pqvdk" Feb 26 22:15:51 crc kubenswrapper[4910]: I0226 22:15:51.980888 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.061915 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/564095ae-5c59-4307-8c7e-87c20aed1b59-config\") pod \"dnsmasq-dns-8554648995-pqvdk\" (UID: \"564095ae-5c59-4307-8c7e-87c20aed1b59\") " pod="openstack/dnsmasq-dns-8554648995-pqvdk" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.061976 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/564095ae-5c59-4307-8c7e-87c20aed1b59-dns-svc\") pod \"dnsmasq-dns-8554648995-pqvdk\" (UID: \"564095ae-5c59-4307-8c7e-87c20aed1b59\") " pod="openstack/dnsmasq-dns-8554648995-pqvdk" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.061997 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/564095ae-5c59-4307-8c7e-87c20aed1b59-ovsdbserver-sb\") pod 
\"dnsmasq-dns-8554648995-pqvdk\" (UID: \"564095ae-5c59-4307-8c7e-87c20aed1b59\") " pod="openstack/dnsmasq-dns-8554648995-pqvdk" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.062031 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/564095ae-5c59-4307-8c7e-87c20aed1b59-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-pqvdk\" (UID: \"564095ae-5c59-4307-8c7e-87c20aed1b59\") " pod="openstack/dnsmasq-dns-8554648995-pqvdk" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.062051 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj49p\" (UniqueName: \"kubernetes.io/projected/564095ae-5c59-4307-8c7e-87c20aed1b59-kube-api-access-lj49p\") pod \"dnsmasq-dns-8554648995-pqvdk\" (UID: \"564095ae-5c59-4307-8c7e-87c20aed1b59\") " pod="openstack/dnsmasq-dns-8554648995-pqvdk" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.164118 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/564095ae-5c59-4307-8c7e-87c20aed1b59-dns-svc\") pod \"dnsmasq-dns-8554648995-pqvdk\" (UID: \"564095ae-5c59-4307-8c7e-87c20aed1b59\") " pod="openstack/dnsmasq-dns-8554648995-pqvdk" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.164193 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/564095ae-5c59-4307-8c7e-87c20aed1b59-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-pqvdk\" (UID: \"564095ae-5c59-4307-8c7e-87c20aed1b59\") " pod="openstack/dnsmasq-dns-8554648995-pqvdk" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.164236 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/564095ae-5c59-4307-8c7e-87c20aed1b59-ovsdbserver-nb\") pod 
\"dnsmasq-dns-8554648995-pqvdk\" (UID: \"564095ae-5c59-4307-8c7e-87c20aed1b59\") " pod="openstack/dnsmasq-dns-8554648995-pqvdk" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.164253 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj49p\" (UniqueName: \"kubernetes.io/projected/564095ae-5c59-4307-8c7e-87c20aed1b59-kube-api-access-lj49p\") pod \"dnsmasq-dns-8554648995-pqvdk\" (UID: \"564095ae-5c59-4307-8c7e-87c20aed1b59\") " pod="openstack/dnsmasq-dns-8554648995-pqvdk" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.164371 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/564095ae-5c59-4307-8c7e-87c20aed1b59-config\") pod \"dnsmasq-dns-8554648995-pqvdk\" (UID: \"564095ae-5c59-4307-8c7e-87c20aed1b59\") " pod="openstack/dnsmasq-dns-8554648995-pqvdk" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.165214 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/564095ae-5c59-4307-8c7e-87c20aed1b59-config\") pod \"dnsmasq-dns-8554648995-pqvdk\" (UID: \"564095ae-5c59-4307-8c7e-87c20aed1b59\") " pod="openstack/dnsmasq-dns-8554648995-pqvdk" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.165647 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/564095ae-5c59-4307-8c7e-87c20aed1b59-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-pqvdk\" (UID: \"564095ae-5c59-4307-8c7e-87c20aed1b59\") " pod="openstack/dnsmasq-dns-8554648995-pqvdk" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.165727 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/564095ae-5c59-4307-8c7e-87c20aed1b59-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-pqvdk\" (UID: \"564095ae-5c59-4307-8c7e-87c20aed1b59\") " 
pod="openstack/dnsmasq-dns-8554648995-pqvdk" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.166439 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/564095ae-5c59-4307-8c7e-87c20aed1b59-dns-svc\") pod \"dnsmasq-dns-8554648995-pqvdk\" (UID: \"564095ae-5c59-4307-8c7e-87c20aed1b59\") " pod="openstack/dnsmasq-dns-8554648995-pqvdk" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.189982 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj49p\" (UniqueName: \"kubernetes.io/projected/564095ae-5c59-4307-8c7e-87c20aed1b59-kube-api-access-lj49p\") pod \"dnsmasq-dns-8554648995-pqvdk\" (UID: \"564095ae-5c59-4307-8c7e-87c20aed1b59\") " pod="openstack/dnsmasq-dns-8554648995-pqvdk" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.313214 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-pqvdk" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.448973 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gjwx5" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.457828 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-g79xb" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.572573 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1dd3507-57fc-4def-b3ef-41ce5a23f786-dns-svc\") pod \"e1dd3507-57fc-4def-b3ef-41ce5a23f786\" (UID: \"e1dd3507-57fc-4def-b3ef-41ce5a23f786\") " Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.572906 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90d460eb-2bb4-4c1b-a89d-04d1313985f4-dns-svc\") pod \"90d460eb-2bb4-4c1b-a89d-04d1313985f4\" (UID: \"90d460eb-2bb4-4c1b-a89d-04d1313985f4\") " Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.572956 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90d460eb-2bb4-4c1b-a89d-04d1313985f4-config\") pod \"90d460eb-2bb4-4c1b-a89d-04d1313985f4\" (UID: \"90d460eb-2bb4-4c1b-a89d-04d1313985f4\") " Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.573049 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9lgb\" (UniqueName: \"kubernetes.io/projected/e1dd3507-57fc-4def-b3ef-41ce5a23f786-kube-api-access-k9lgb\") pod \"e1dd3507-57fc-4def-b3ef-41ce5a23f786\" (UID: \"e1dd3507-57fc-4def-b3ef-41ce5a23f786\") " Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.573048 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1dd3507-57fc-4def-b3ef-41ce5a23f786-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e1dd3507-57fc-4def-b3ef-41ce5a23f786" (UID: "e1dd3507-57fc-4def-b3ef-41ce5a23f786"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.573115 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1dd3507-57fc-4def-b3ef-41ce5a23f786-config\") pod \"e1dd3507-57fc-4def-b3ef-41ce5a23f786\" (UID: \"e1dd3507-57fc-4def-b3ef-41ce5a23f786\") " Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.573153 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb8cv\" (UniqueName: \"kubernetes.io/projected/90d460eb-2bb4-4c1b-a89d-04d1313985f4-kube-api-access-sb8cv\") pod \"90d460eb-2bb4-4c1b-a89d-04d1313985f4\" (UID: \"90d460eb-2bb4-4c1b-a89d-04d1313985f4\") " Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.573504 4910 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1dd3507-57fc-4def-b3ef-41ce5a23f786-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.573776 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90d460eb-2bb4-4c1b-a89d-04d1313985f4-config" (OuterVolumeSpecName: "config") pod "90d460eb-2bb4-4c1b-a89d-04d1313985f4" (UID: "90d460eb-2bb4-4c1b-a89d-04d1313985f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.573781 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1dd3507-57fc-4def-b3ef-41ce5a23f786-config" (OuterVolumeSpecName: "config") pod "e1dd3507-57fc-4def-b3ef-41ce5a23f786" (UID: "e1dd3507-57fc-4def-b3ef-41ce5a23f786"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.574116 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90d460eb-2bb4-4c1b-a89d-04d1313985f4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "90d460eb-2bb4-4c1b-a89d-04d1313985f4" (UID: "90d460eb-2bb4-4c1b-a89d-04d1313985f4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.576595 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1dd3507-57fc-4def-b3ef-41ce5a23f786-kube-api-access-k9lgb" (OuterVolumeSpecName: "kube-api-access-k9lgb") pod "e1dd3507-57fc-4def-b3ef-41ce5a23f786" (UID: "e1dd3507-57fc-4def-b3ef-41ce5a23f786"). InnerVolumeSpecName "kube-api-access-k9lgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.577108 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90d460eb-2bb4-4c1b-a89d-04d1313985f4-kube-api-access-sb8cv" (OuterVolumeSpecName: "kube-api-access-sb8cv") pod "90d460eb-2bb4-4c1b-a89d-04d1313985f4" (UID: "90d460eb-2bb4-4c1b-a89d-04d1313985f4"). InnerVolumeSpecName "kube-api-access-sb8cv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.674985 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9lgb\" (UniqueName: \"kubernetes.io/projected/e1dd3507-57fc-4def-b3ef-41ce5a23f786-kube-api-access-k9lgb\") on node \"crc\" DevicePath \"\"" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.675009 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1dd3507-57fc-4def-b3ef-41ce5a23f786-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.675018 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb8cv\" (UniqueName: \"kubernetes.io/projected/90d460eb-2bb4-4c1b-a89d-04d1313985f4-kube-api-access-sb8cv\") on node \"crc\" DevicePath \"\"" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.675026 4910 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90d460eb-2bb4-4c1b-a89d-04d1313985f4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.675036 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90d460eb-2bb4-4c1b-a89d-04d1313985f4-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.826769 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-g79xb" event={"ID":"90d460eb-2bb4-4c1b-a89d-04d1313985f4","Type":"ContainerDied","Data":"4089d4b3aa04fff80383851f04ee33574f15a53b27e7adf089b309b273756e55"} Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.826867 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-g79xb" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.831777 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gjwx5" event={"ID":"e1dd3507-57fc-4def-b3ef-41ce5a23f786","Type":"ContainerDied","Data":"24d88a76c0d22a6e593c562cab1c14ca2d1ebb155b3601c0d135653b00b90bf9"} Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.831863 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gjwx5" Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.836709 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f98f3d3a-39ee-4b35-8653-ae334df58fca","Type":"ContainerStarted","Data":"b96711ba619b912d8f11b4d929957237c2a28332da6273f5f952a97af08e2e3f"} Feb 26 22:15:52 crc kubenswrapper[4910]: I0226 22:15:52.994539 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gjwx5"] Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 22:15:53.036961 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gjwx5"] Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 22:15:53.095221 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-g79xb"] Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 22:15:53.107485 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-g79xb"] Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 22:15:53.121557 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-hr48l"] Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 22:15:53.245066 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-pqvdk"] Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 22:15:53.278956 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-5bf47b49b7-mxtrr"] Feb 26 22:15:53 crc kubenswrapper[4910]: W0226 22:15:53.414847 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod564095ae_5c59_4307_8c7e_87c20aed1b59.slice/crio-f59bc4a3572254378d1dfdf42e09999b82c5120c286e1e7604ba290e5b142c09 WatchSource:0}: Error finding container f59bc4a3572254378d1dfdf42e09999b82c5120c286e1e7604ba290e5b142c09: Status 404 returned error can't find the container with id f59bc4a3572254378d1dfdf42e09999b82c5120c286e1e7604ba290e5b142c09 Feb 26 22:15:53 crc kubenswrapper[4910]: W0226 22:15:53.494098 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ede30cd_328a_42f0_94e2_951689df8a77.slice/crio-ff8879c0c5273e16512ffd57e5e70068a196cbb40f5b080fb85d1da50ea8ee9f WatchSource:0}: Error finding container ff8879c0c5273e16512ffd57e5e70068a196cbb40f5b080fb85d1da50ea8ee9f: Status 404 returned error can't find the container with id ff8879c0c5273e16512ffd57e5e70068a196cbb40f5b080fb85d1da50ea8ee9f Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 22:15:53.854203 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9e2f6ce8-eda1-4196-954a-3367ccc66e33","Type":"ContainerStarted","Data":"a74c0174bfc15dc49f55cee5c288bd8729852696c98749cf32f4c006fa81267b"} Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 22:15:53.854353 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 22:15:53.858023 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" event={"ID":"79645824-b55e-43e0-acc2-d0b64e9c7326","Type":"ContainerStarted","Data":"d79c43274e50117a79169c5d92905b37d3210a445da763cfda13a3f62e1ff36e"} Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 22:15:53.858264 4910 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 22:15:53.868730 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5ef2ec31-3ae1-42ea-aaa5-5fec166df179","Type":"ContainerStarted","Data":"5ae44ac0590ba5529626774e4f267fd4f32cd7209bce33bc0b5c02ed8b76eec4"} Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 22:15:53.872305 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 22:15:53.876758 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-pqvdk" event={"ID":"564095ae-5c59-4307-8c7e-87c20aed1b59","Type":"ContainerStarted","Data":"f59bc4a3572254378d1dfdf42e09999b82c5120c286e1e7604ba290e5b142c09"} Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 22:15:53.878445 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=27.454788418 podStartE2EDuration="32.878373487s" podCreationTimestamp="2026-02-26 22:15:21 +0000 UTC" firstStartedPulling="2026-02-26 22:15:45.510307313 +0000 UTC m=+1230.589797854" lastFinishedPulling="2026-02-26 22:15:50.933892382 +0000 UTC m=+1236.013382923" observedRunningTime="2026-02-26 22:15:53.87307329 +0000 UTC m=+1238.952563831" watchObservedRunningTime="2026-02-26 22:15:53.878373487 +0000 UTC m=+1238.957864018" Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 22:15:53.886724 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f3994e44-ac9f-4f93-97cf-9ad02cdc61e6","Type":"ContainerStarted","Data":"05ecd1cf6fbf110287ac96ee0c24f0485def7f6d21ac6be540a0254a07f2090d"} Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 22:15:53.891695 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 22:15:53.901257 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=22.786410931 podStartE2EDuration="30.901239174s" podCreationTimestamp="2026-02-26 22:15:23 +0000 UTC" firstStartedPulling="2026-02-26 22:15:44.811315806 +0000 UTC m=+1229.890806347" lastFinishedPulling="2026-02-26 22:15:52.926144049 +0000 UTC m=+1238.005634590" observedRunningTime="2026-02-26 22:15:53.894183878 +0000 UTC m=+1238.973674429" watchObservedRunningTime="2026-02-26 22:15:53.901239174 +0000 UTC m=+1238.980729715" Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 22:15:53.914467 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90d460eb-2bb4-4c1b-a89d-04d1313985f4" path="/var/lib/kubelet/pods/90d460eb-2bb4-4c1b-a89d-04d1313985f4/volumes" Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 22:15:53.914891 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1dd3507-57fc-4def-b3ef-41ce5a23f786" path="/var/lib/kubelet/pods/e1dd3507-57fc-4def-b3ef-41ce5a23f786/volumes" Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 22:15:53.915324 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc" Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 22:15:53.915362 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc" event={"ID":"50fe0fae-cfef-4a5c-9b5c-0c09065c72ed","Type":"ContainerStarted","Data":"8f13dc54b5580782a70b47a0e895ade4f9045c768ec1223b94d817c2c840d999"} Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 22:15:53.915380 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"25f6d388-f925-4c92-9298-a454dd536aa6","Type":"ContainerStarted","Data":"ecb4b5abd3e87ab79b91ec3828897923043c9d064760e5eda3b2a46cf89d7408"} Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 22:15:53.923013 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mpzzf" podStartSLOduration=16.014747425 podStartE2EDuration="21.922992169s" podCreationTimestamp="2026-02-26 22:15:32 +0000 UTC" firstStartedPulling="2026-02-26 22:15:45.209110827 +0000 UTC m=+1230.288601358" lastFinishedPulling="2026-02-26 22:15:51.117355561 +0000 UTC m=+1236.196846102" observedRunningTime="2026-02-26 22:15:53.909083392 +0000 UTC m=+1238.988573953" watchObservedRunningTime="2026-02-26 22:15:53.922992169 +0000 UTC m=+1239.002482710" Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 22:15:53.923399 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-hr48l" event={"ID":"1fb71366-0580-4adf-91c1-d0177014808b","Type":"ContainerStarted","Data":"606358cffb3cc70a7f59661e6ef7b011d95463de77ed7bbe8070f179bdfd781e"} Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 22:15:53.937697 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"cb62085e-02e8-4670-8ff4-dc1a7b242eb8","Type":"ContainerStarted","Data":"1d9fdcd5e36fd3518a770c6defa1fa6f3f9f13fd75e9571be54701f2e17354d3"} Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 22:15:53.949961 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-5lzsg" event={"ID":"961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7","Type":"ContainerStarted","Data":"d50a40b3b1cfbf663b9bb04693c2ec6ad31ef683218112341cdbe1b6f74e0813"} Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 22:15:53.950027 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-5lzsg" Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 
22:15:53.954305 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc" podStartSLOduration=15.058216562 podStartE2EDuration="21.954283171s" podCreationTimestamp="2026-02-26 22:15:32 +0000 UTC" firstStartedPulling="2026-02-26 22:15:45.543153292 +0000 UTC m=+1230.622643833" lastFinishedPulling="2026-02-26 22:15:52.439219901 +0000 UTC m=+1237.518710442" observedRunningTime="2026-02-26 22:15:53.942601945 +0000 UTC m=+1239.022092496" watchObservedRunningTime="2026-02-26 22:15:53.954283171 +0000 UTC m=+1239.033773712" Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 22:15:53.958466 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"17efae9f-593e-4c9f-8803-9090fff6e616","Type":"ContainerStarted","Data":"95ab6c6f386641e22d358c10e90fcd1625af94480544bae9afba0991ec3ab28e"} Feb 26 22:15:53 crc kubenswrapper[4910]: I0226 22:15:53.960009 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-mxtrr" event={"ID":"0ede30cd-328a-42f0-94e2-951689df8a77","Type":"ContainerStarted","Data":"ff8879c0c5273e16512ffd57e5e70068a196cbb40f5b080fb85d1da50ea8ee9f"} Feb 26 22:15:54 crc kubenswrapper[4910]: I0226 22:15:53.981308 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=11.923971116 podStartE2EDuration="35.981289153s" podCreationTimestamp="2026-02-26 22:15:18 +0000 UTC" firstStartedPulling="2026-02-26 22:15:20.307555916 +0000 UTC m=+1205.387046457" lastFinishedPulling="2026-02-26 22:15:44.364873953 +0000 UTC m=+1229.444364494" observedRunningTime="2026-02-26 22:15:53.976994183 +0000 UTC m=+1239.056484714" watchObservedRunningTime="2026-02-26 22:15:53.981289153 +0000 UTC m=+1239.060779694" Feb 26 22:15:54 crc kubenswrapper[4910]: I0226 22:15:54.070292 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/cloudkitty-lokistack-querier-58c84b5844-5lzsg" podStartSLOduration=15.148771599 podStartE2EDuration="22.070253491s" podCreationTimestamp="2026-02-26 22:15:32 +0000 UTC" firstStartedPulling="2026-02-26 22:15:45.513513031 +0000 UTC m=+1230.593003572" lastFinishedPulling="2026-02-26 22:15:52.434994923 +0000 UTC m=+1237.514485464" observedRunningTime="2026-02-26 22:15:54.053540555 +0000 UTC m=+1239.133031116" watchObservedRunningTime="2026-02-26 22:15:54.070253491 +0000 UTC m=+1239.149744052" Feb 26 22:15:54 crc kubenswrapper[4910]: I0226 22:15:54.090784 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=35.090763701 podStartE2EDuration="35.090763701s" podCreationTimestamp="2026-02-26 22:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:15:54.073411998 +0000 UTC m=+1239.152902549" watchObservedRunningTime="2026-02-26 22:15:54.090763701 +0000 UTC m=+1239.170254242" Feb 26 22:15:54 crc kubenswrapper[4910]: I0226 22:15:54.983232 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" event={"ID":"cb63b582-88ea-454f-96bc-c676e35dd7f7","Type":"ContainerStarted","Data":"cbccea5396181c23e072fdb2884d3061b113b5e62abadb77d3062975be697ebc"} Feb 26 22:15:54 crc kubenswrapper[4910]: I0226 22:15:54.986834 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"3e40b05d-8071-4f6b-b2ab-160931200e8a","Type":"ContainerStarted","Data":"e1243392d929608610aebc5950c0e541a224905862f6e68fb0fb5aae2ec3659d"} Feb 26 22:15:54 crc kubenswrapper[4910]: I0226 22:15:54.987708 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 22:15:54 crc kubenswrapper[4910]: I0226 22:15:54.991104 4910 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"b6be9fb7-b7f0-4dc9-9470-b9675918d1d1","Type":"ContainerStarted","Data":"df4e8fee203f1cf6afde82f52b1d4762607c96a5f6c0335200bc197a7840f356"} Feb 26 22:15:54 crc kubenswrapper[4910]: I0226 22:15:54.992411 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 22:15:54 crc kubenswrapper[4910]: I0226 22:15:54.998953 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"a65c3337-4a1d-4ae8-abf6-862e34f280cb","Type":"ContainerStarted","Data":"596c0d44bbc7d7605d752b4dda51e00491eb34db5f8db3f33b0e0590a0154701"} Feb 26 22:15:54 crc kubenswrapper[4910]: I0226 22:15:54.999042 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 22:15:55 crc kubenswrapper[4910]: I0226 22:15:55.027316 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" podStartSLOduration=16.141376546 podStartE2EDuration="23.027286861s" podCreationTimestamp="2026-02-26 22:15:32 +0000 UTC" firstStartedPulling="2026-02-26 22:15:45.543533933 +0000 UTC m=+1230.623024474" lastFinishedPulling="2026-02-26 22:15:52.429444248 +0000 UTC m=+1237.508934789" observedRunningTime="2026-02-26 22:15:55.017732285 +0000 UTC m=+1240.097222836" watchObservedRunningTime="2026-02-26 22:15:55.027286861 +0000 UTC m=+1240.106777402" Feb 26 22:15:55 crc kubenswrapper[4910]: I0226 22:15:55.050273 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-compactor-0" podStartSLOduration=16.179361232 podStartE2EDuration="23.05025012s" podCreationTimestamp="2026-02-26 22:15:32 +0000 UTC" firstStartedPulling="2026-02-26 22:15:45.567962582 +0000 UTC m=+1230.647453123" lastFinishedPulling="2026-02-26 
22:15:52.43885147 +0000 UTC m=+1237.518342011" observedRunningTime="2026-02-26 22:15:55.038853433 +0000 UTC m=+1240.118343974" watchObservedRunningTime="2026-02-26 22:15:55.05025012 +0000 UTC m=+1240.129740671" Feb 26 22:15:55 crc kubenswrapper[4910]: I0226 22:15:55.078134 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-ingester-0" podStartSLOduration=17.456476405 podStartE2EDuration="23.078111116s" podCreationTimestamp="2026-02-26 22:15:32 +0000 UTC" firstStartedPulling="2026-02-26 22:15:45.591117016 +0000 UTC m=+1230.670607557" lastFinishedPulling="2026-02-26 22:15:51.212751687 +0000 UTC m=+1236.292242268" observedRunningTime="2026-02-26 22:15:55.067186233 +0000 UTC m=+1240.146676794" watchObservedRunningTime="2026-02-26 22:15:55.078111116 +0000 UTC m=+1240.157601657" Feb 26 22:15:55 crc kubenswrapper[4910]: I0226 22:15:55.089510 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-index-gateway-0" podStartSLOduration=16.273959852 podStartE2EDuration="23.089490063s" podCreationTimestamp="2026-02-26 22:15:32 +0000 UTC" firstStartedPulling="2026-02-26 22:15:45.623308008 +0000 UTC m=+1230.702798549" lastFinishedPulling="2026-02-26 22:15:52.438838219 +0000 UTC m=+1237.518328760" observedRunningTime="2026-02-26 22:15:55.085053 +0000 UTC m=+1240.164543541" watchObservedRunningTime="2026-02-26 22:15:55.089490063 +0000 UTC m=+1240.168980604" Feb 26 22:15:56 crc kubenswrapper[4910]: I0226 22:15:56.013542 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"60fb0251-1bd0-4e06-a368-5aceb0afaa87","Type":"ContainerStarted","Data":"b68f7fd66a69d2dc2cf55bfae90177d3da25a9a937040e56e2d6047313f7b8fe"} Feb 26 22:15:56 crc kubenswrapper[4910]: I0226 22:15:56.016062 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zps74" 
event={"ID":"993d51de-20a2-4cee-856c-f0cbb1b0307d","Type":"ContainerStarted","Data":"53584f8e99337e969b395d777f5c67cea5120065ad23faab6d080570ea090043"} Feb 26 22:15:56 crc kubenswrapper[4910]: I0226 22:15:56.016499 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zps74" Feb 26 22:15:56 crc kubenswrapper[4910]: I0226 22:15:56.019826 4910 generic.go:334] "Generic (PLEG): container finished" podID="564095ae-5c59-4307-8c7e-87c20aed1b59" containerID="55834ab8d02684cb55366055220a07141f292edbd43dd6801e345ac6fde1b022" exitCode=0 Feb 26 22:15:56 crc kubenswrapper[4910]: I0226 22:15:56.019927 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-pqvdk" event={"ID":"564095ae-5c59-4307-8c7e-87c20aed1b59","Type":"ContainerDied","Data":"55834ab8d02684cb55366055220a07141f292edbd43dd6801e345ac6fde1b022"} Feb 26 22:15:56 crc kubenswrapper[4910]: I0226 22:15:56.020935 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:56 crc kubenswrapper[4910]: I0226 22:15:56.031421 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-ljjvh" Feb 26 22:15:56 crc kubenswrapper[4910]: I0226 22:15:56.092062 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zps74" podStartSLOduration=17.079470759 podStartE2EDuration="24.092048193s" podCreationTimestamp="2026-02-26 22:15:32 +0000 UTC" firstStartedPulling="2026-02-26 22:15:45.590833028 +0000 UTC m=+1230.670323569" lastFinishedPulling="2026-02-26 22:15:52.603410462 +0000 UTC m=+1237.682901003" observedRunningTime="2026-02-26 22:15:56.085997224 +0000 UTC m=+1241.165487755" watchObservedRunningTime="2026-02-26 22:15:56.092048193 +0000 UTC m=+1241.171538734" Feb 26 22:15:57 crc kubenswrapper[4910]: 
I0226 22:15:57.030848 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2f98425b-65de-48d2-be21-2c443218eacd","Type":"ContainerStarted","Data":"8b868dc2396218530dff7210a284cc3ac6f666370d5b0e57cb6f84461df2591c"} Feb 26 22:15:57 crc kubenswrapper[4910]: I0226 22:15:57.037614 4910 generic.go:334] "Generic (PLEG): container finished" podID="0ede30cd-328a-42f0-94e2-951689df8a77" containerID="0d77eef18cb5ccd89e85167c8dcd08c77c033ea8f070cea52250c3cb92d2172e" exitCode=0 Feb 26 22:15:57 crc kubenswrapper[4910]: I0226 22:15:57.039155 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-mxtrr" event={"ID":"0ede30cd-328a-42f0-94e2-951689df8a77","Type":"ContainerDied","Data":"0d77eef18cb5ccd89e85167c8dcd08c77c033ea8f070cea52250c3cb92d2172e"} Feb 26 22:15:58 crc kubenswrapper[4910]: I0226 22:15:58.051199 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-hr48l" event={"ID":"1fb71366-0580-4adf-91c1-d0177014808b","Type":"ContainerStarted","Data":"966a8f51a004731dc53a59eb100eedf2defb4181626136d8420426de467e4913"} Feb 26 22:15:58 crc kubenswrapper[4910]: I0226 22:15:58.053951 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"cb62085e-02e8-4670-8ff4-dc1a7b242eb8","Type":"ContainerStarted","Data":"c1986c38daae3b6d87dc3a0c6f4c9f5633fe2cfe59ac0850e6ead6b72f21999f"} Feb 26 22:15:58 crc kubenswrapper[4910]: I0226 22:15:58.056481 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-mxtrr" event={"ID":"0ede30cd-328a-42f0-94e2-951689df8a77","Type":"ContainerStarted","Data":"f3d9b2c32ca92b0a07f1c38bd18a02c6e7c3639349d82c361728b9d2e74ecf59"} Feb 26 22:15:58 crc kubenswrapper[4910]: I0226 22:15:58.056615 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-mxtrr" Feb 26 22:15:58 crc kubenswrapper[4910]: I0226 
22:15:58.059098 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"17efae9f-593e-4c9f-8803-9090fff6e616","Type":"ContainerStarted","Data":"1bcc7a0ddb8b3557a79b99cdeb36d9c0415da726773f8785d1cf9aafad6fbbd2"} Feb 26 22:15:58 crc kubenswrapper[4910]: I0226 22:15:58.062782 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-pqvdk" event={"ID":"564095ae-5c59-4307-8c7e-87c20aed1b59","Type":"ContainerStarted","Data":"ff1530ecd2f227532ae5f2b6ed55cfcbdc67cc82d5c4263e3d1fe24136548a10"} Feb 26 22:15:58 crc kubenswrapper[4910]: I0226 22:15:58.077100 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-hr48l" podStartSLOduration=3.562632222 podStartE2EDuration="7.07707455s" podCreationTimestamp="2026-02-26 22:15:51 +0000 UTC" firstStartedPulling="2026-02-26 22:15:53.209325436 +0000 UTC m=+1238.288815977" lastFinishedPulling="2026-02-26 22:15:56.723767754 +0000 UTC m=+1241.803258305" observedRunningTime="2026-02-26 22:15:58.0727584 +0000 UTC m=+1243.152248981" watchObservedRunningTime="2026-02-26 22:15:58.07707455 +0000 UTC m=+1243.156565131" Feb 26 22:15:58 crc kubenswrapper[4910]: I0226 22:15:58.105565 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-mxtrr" podStartSLOduration=6.550688943 podStartE2EDuration="7.105544274s" podCreationTimestamp="2026-02-26 22:15:51 +0000 UTC" firstStartedPulling="2026-02-26 22:15:53.497021067 +0000 UTC m=+1238.576511618" lastFinishedPulling="2026-02-26 22:15:54.051876418 +0000 UTC m=+1239.131366949" observedRunningTime="2026-02-26 22:15:58.102015795 +0000 UTC m=+1243.181506346" watchObservedRunningTime="2026-02-26 22:15:58.105544274 +0000 UTC m=+1243.185034825" Feb 26 22:15:58 crc kubenswrapper[4910]: I0226 22:15:58.129907 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" 
podStartSLOduration=20.968533165 podStartE2EDuration="32.129868721s" podCreationTimestamp="2026-02-26 22:15:26 +0000 UTC" firstStartedPulling="2026-02-26 22:15:45.568934269 +0000 UTC m=+1230.648424810" lastFinishedPulling="2026-02-26 22:15:56.730269825 +0000 UTC m=+1241.809760366" observedRunningTime="2026-02-26 22:15:58.127115804 +0000 UTC m=+1243.206606355" watchObservedRunningTime="2026-02-26 22:15:58.129868721 +0000 UTC m=+1243.209359272" Feb 26 22:15:58 crc kubenswrapper[4910]: I0226 22:15:58.152685 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-pqvdk" podStartSLOduration=6.688179051 podStartE2EDuration="7.152668185s" podCreationTimestamp="2026-02-26 22:15:51 +0000 UTC" firstStartedPulling="2026-02-26 22:15:53.417777681 +0000 UTC m=+1238.497268222" lastFinishedPulling="2026-02-26 22:15:53.882266815 +0000 UTC m=+1238.961757356" observedRunningTime="2026-02-26 22:15:58.151189444 +0000 UTC m=+1243.230680005" watchObservedRunningTime="2026-02-26 22:15:58.152668185 +0000 UTC m=+1243.232158736" Feb 26 22:15:58 crc kubenswrapper[4910]: I0226 22:15:58.175971 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=16.280228372 podStartE2EDuration="28.175951534s" podCreationTimestamp="2026-02-26 22:15:30 +0000 UTC" firstStartedPulling="2026-02-26 22:15:44.916005762 +0000 UTC m=+1229.995496313" lastFinishedPulling="2026-02-26 22:15:56.811728934 +0000 UTC m=+1241.891219475" observedRunningTime="2026-02-26 22:15:58.175872602 +0000 UTC m=+1243.255363163" watchObservedRunningTime="2026-02-26 22:15:58.175951534 +0000 UTC m=+1243.255442085" Feb 26 22:15:58 crc kubenswrapper[4910]: I0226 22:15:58.399943 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:58 crc kubenswrapper[4910]: I0226 22:15:58.399991 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:58 crc kubenswrapper[4910]: I0226 22:15:58.447706 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:58 crc kubenswrapper[4910]: I0226 22:15:58.843738 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:58 crc kubenswrapper[4910]: I0226 22:15:58.921819 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:59 crc kubenswrapper[4910]: I0226 22:15:59.074854 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-pqvdk" Feb 26 22:15:59 crc kubenswrapper[4910]: I0226 22:15:59.074942 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:59 crc kubenswrapper[4910]: I0226 22:15:59.152098 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 26 22:15:59 crc kubenswrapper[4910]: I0226 22:15:59.162824 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 26 22:15:59 crc kubenswrapper[4910]: I0226 22:15:59.843469 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 26 22:15:59 crc kubenswrapper[4910]: I0226 22:15:59.843819 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 26 22:15:59 crc kubenswrapper[4910]: I0226 22:15:59.844335 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 26 22:15:59 crc kubenswrapper[4910]: I0226 22:15:59.845725 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 26 22:15:59 crc kubenswrapper[4910]: I0226 22:15:59.849271 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 26 22:15:59 crc kubenswrapper[4910]: I0226 22:15:59.849424 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 26 22:15:59 crc kubenswrapper[4910]: I0226 22:15:59.849842 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-h5h9h" Feb 26 22:15:59 crc kubenswrapper[4910]: I0226 22:15:59.849972 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 26 22:15:59 crc kubenswrapper[4910]: I0226 22:15:59.882517 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 26 22:15:59 crc kubenswrapper[4910]: I0226 22:15:59.926865 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ac50df3e-b0e1-432e-9749-95c00d5a6281-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ac50df3e-b0e1-432e-9749-95c00d5a6281\") " pod="openstack/ovn-northd-0" Feb 26 22:15:59 crc kubenswrapper[4910]: I0226 22:15:59.926928 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac50df3e-b0e1-432e-9749-95c00d5a6281-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ac50df3e-b0e1-432e-9749-95c00d5a6281\") " pod="openstack/ovn-northd-0" Feb 26 22:15:59 crc kubenswrapper[4910]: I0226 22:15:59.926990 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac50df3e-b0e1-432e-9749-95c00d5a6281-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ac50df3e-b0e1-432e-9749-95c00d5a6281\") " 
pod="openstack/ovn-northd-0" Feb 26 22:15:59 crc kubenswrapper[4910]: I0226 22:15:59.927076 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac50df3e-b0e1-432e-9749-95c00d5a6281-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ac50df3e-b0e1-432e-9749-95c00d5a6281\") " pod="openstack/ovn-northd-0" Feb 26 22:15:59 crc kubenswrapper[4910]: I0226 22:15:59.927103 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac50df3e-b0e1-432e-9749-95c00d5a6281-config\") pod \"ovn-northd-0\" (UID: \"ac50df3e-b0e1-432e-9749-95c00d5a6281\") " pod="openstack/ovn-northd-0" Feb 26 22:15:59 crc kubenswrapper[4910]: I0226 22:15:59.927134 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac50df3e-b0e1-432e-9749-95c00d5a6281-scripts\") pod \"ovn-northd-0\" (UID: \"ac50df3e-b0e1-432e-9749-95c00d5a6281\") " pod="openstack/ovn-northd-0" Feb 26 22:15:59 crc kubenswrapper[4910]: I0226 22:15:59.927283 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljl2c\" (UniqueName: \"kubernetes.io/projected/ac50df3e-b0e1-432e-9749-95c00d5a6281-kube-api-access-ljl2c\") pod \"ovn-northd-0\" (UID: \"ac50df3e-b0e1-432e-9749-95c00d5a6281\") " pod="openstack/ovn-northd-0" Feb 26 22:15:59 crc kubenswrapper[4910]: I0226 22:15:59.960754 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 26 22:16:00 crc kubenswrapper[4910]: I0226 22:16:00.028982 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac50df3e-b0e1-432e-9749-95c00d5a6281-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"ac50df3e-b0e1-432e-9749-95c00d5a6281\") " pod="openstack/ovn-northd-0" Feb 26 22:16:00 crc kubenswrapper[4910]: I0226 22:16:00.029042 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac50df3e-b0e1-432e-9749-95c00d5a6281-config\") pod \"ovn-northd-0\" (UID: \"ac50df3e-b0e1-432e-9749-95c00d5a6281\") " pod="openstack/ovn-northd-0" Feb 26 22:16:00 crc kubenswrapper[4910]: I0226 22:16:00.029078 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac50df3e-b0e1-432e-9749-95c00d5a6281-scripts\") pod \"ovn-northd-0\" (UID: \"ac50df3e-b0e1-432e-9749-95c00d5a6281\") " pod="openstack/ovn-northd-0" Feb 26 22:16:00 crc kubenswrapper[4910]: I0226 22:16:00.029248 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljl2c\" (UniqueName: \"kubernetes.io/projected/ac50df3e-b0e1-432e-9749-95c00d5a6281-kube-api-access-ljl2c\") pod \"ovn-northd-0\" (UID: \"ac50df3e-b0e1-432e-9749-95c00d5a6281\") " pod="openstack/ovn-northd-0" Feb 26 22:16:00 crc kubenswrapper[4910]: I0226 22:16:00.029297 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ac50df3e-b0e1-432e-9749-95c00d5a6281-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ac50df3e-b0e1-432e-9749-95c00d5a6281\") " pod="openstack/ovn-northd-0" Feb 26 22:16:00 crc kubenswrapper[4910]: I0226 22:16:00.029329 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac50df3e-b0e1-432e-9749-95c00d5a6281-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ac50df3e-b0e1-432e-9749-95c00d5a6281\") " pod="openstack/ovn-northd-0" Feb 26 22:16:00 crc kubenswrapper[4910]: I0226 22:16:00.029370 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac50df3e-b0e1-432e-9749-95c00d5a6281-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ac50df3e-b0e1-432e-9749-95c00d5a6281\") " pod="openstack/ovn-northd-0" Feb 26 22:16:00 crc kubenswrapper[4910]: I0226 22:16:00.030448 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ac50df3e-b0e1-432e-9749-95c00d5a6281-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ac50df3e-b0e1-432e-9749-95c00d5a6281\") " pod="openstack/ovn-northd-0" Feb 26 22:16:00 crc kubenswrapper[4910]: I0226 22:16:00.030609 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac50df3e-b0e1-432e-9749-95c00d5a6281-config\") pod \"ovn-northd-0\" (UID: \"ac50df3e-b0e1-432e-9749-95c00d5a6281\") " pod="openstack/ovn-northd-0" Feb 26 22:16:00 crc kubenswrapper[4910]: I0226 22:16:00.030687 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac50df3e-b0e1-432e-9749-95c00d5a6281-scripts\") pod \"ovn-northd-0\" (UID: \"ac50df3e-b0e1-432e-9749-95c00d5a6281\") " pod="openstack/ovn-northd-0" Feb 26 22:16:00 crc kubenswrapper[4910]: I0226 22:16:00.034378 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac50df3e-b0e1-432e-9749-95c00d5a6281-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ac50df3e-b0e1-432e-9749-95c00d5a6281\") " pod="openstack/ovn-northd-0" Feb 26 22:16:00 crc kubenswrapper[4910]: I0226 22:16:00.040314 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac50df3e-b0e1-432e-9749-95c00d5a6281-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ac50df3e-b0e1-432e-9749-95c00d5a6281\") " pod="openstack/ovn-northd-0" Feb 26 22:16:00 crc kubenswrapper[4910]: I0226 
22:16:00.047765 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac50df3e-b0e1-432e-9749-95c00d5a6281-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ac50df3e-b0e1-432e-9749-95c00d5a6281\") " pod="openstack/ovn-northd-0" Feb 26 22:16:00 crc kubenswrapper[4910]: I0226 22:16:00.048050 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljl2c\" (UniqueName: \"kubernetes.io/projected/ac50df3e-b0e1-432e-9749-95c00d5a6281-kube-api-access-ljl2c\") pod \"ovn-northd-0\" (UID: \"ac50df3e-b0e1-432e-9749-95c00d5a6281\") " pod="openstack/ovn-northd-0" Feb 26 22:16:00 crc kubenswrapper[4910]: I0226 22:16:00.083609 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6tz7l" event={"ID":"3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed","Type":"ContainerStarted","Data":"2708cbdeed5235b809b4cd949e280f4d191c15f9af2ac5a665ad5948bfbdfeb0"} Feb 26 22:16:00 crc kubenswrapper[4910]: I0226 22:16:00.145043 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535736-jb2nz"] Feb 26 22:16:00 crc kubenswrapper[4910]: I0226 22:16:00.146505 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535736-jb2nz" Feb 26 22:16:00 crc kubenswrapper[4910]: I0226 22:16:00.151651 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 22:16:00 crc kubenswrapper[4910]: I0226 22:16:00.151668 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 22:16:00 crc kubenswrapper[4910]: I0226 22:16:00.151945 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 22:16:00 crc kubenswrapper[4910]: I0226 22:16:00.163708 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535736-jb2nz"] Feb 26 22:16:00 crc kubenswrapper[4910]: I0226 22:16:00.180676 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 26 22:16:00 crc kubenswrapper[4910]: I0226 22:16:00.196457 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 26 22:16:00 crc kubenswrapper[4910]: I0226 22:16:00.240892 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkrmt\" (UniqueName: \"kubernetes.io/projected/99d069cf-0b32-4927-ba6c-367ce8cfa0c5-kube-api-access-fkrmt\") pod \"auto-csr-approver-29535736-jb2nz\" (UID: \"99d069cf-0b32-4927-ba6c-367ce8cfa0c5\") " pod="openshift-infra/auto-csr-approver-29535736-jb2nz" Feb 26 22:16:00 crc kubenswrapper[4910]: I0226 22:16:00.343063 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkrmt\" (UniqueName: \"kubernetes.io/projected/99d069cf-0b32-4927-ba6c-367ce8cfa0c5-kube-api-access-fkrmt\") pod \"auto-csr-approver-29535736-jb2nz\" (UID: \"99d069cf-0b32-4927-ba6c-367ce8cfa0c5\") " pod="openshift-infra/auto-csr-approver-29535736-jb2nz" Feb 26 22:16:00 crc 
kubenswrapper[4910]: I0226 22:16:00.367537 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkrmt\" (UniqueName: \"kubernetes.io/projected/99d069cf-0b32-4927-ba6c-367ce8cfa0c5-kube-api-access-fkrmt\") pod \"auto-csr-approver-29535736-jb2nz\" (UID: \"99d069cf-0b32-4927-ba6c-367ce8cfa0c5\") " pod="openshift-infra/auto-csr-approver-29535736-jb2nz" Feb 26 22:16:00 crc kubenswrapper[4910]: I0226 22:16:00.474297 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535736-jb2nz" Feb 26 22:16:00 crc kubenswrapper[4910]: I0226 22:16:00.663682 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 26 22:16:00 crc kubenswrapper[4910]: W0226 22:16:00.669204 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac50df3e_b0e1_432e_9749_95c00d5a6281.slice/crio-3a2f6f9220df12aaa737aba6b0a28aac9814ea8dabdb22c61dfc68e960799796 WatchSource:0}: Error finding container 3a2f6f9220df12aaa737aba6b0a28aac9814ea8dabdb22c61dfc68e960799796: Status 404 returned error can't find the container with id 3a2f6f9220df12aaa737aba6b0a28aac9814ea8dabdb22c61dfc68e960799796 Feb 26 22:16:00 crc kubenswrapper[4910]: I0226 22:16:00.921800 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535736-jb2nz"] Feb 26 22:16:00 crc kubenswrapper[4910]: W0226 22:16:00.926979 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99d069cf_0b32_4927_ba6c_367ce8cfa0c5.slice/crio-9e8bb315ceade270e7bd830e72c0f644692aae350e508e31551b32097734b5f3 WatchSource:0}: Error finding container 9e8bb315ceade270e7bd830e72c0f644692aae350e508e31551b32097734b5f3: Status 404 returned error can't find the container with id 9e8bb315ceade270e7bd830e72c0f644692aae350e508e31551b32097734b5f3 Feb 26 
22:16:01 crc kubenswrapper[4910]: I0226 22:16:01.098745 4910 generic.go:334] "Generic (PLEG): container finished" podID="3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed" containerID="2708cbdeed5235b809b4cd949e280f4d191c15f9af2ac5a665ad5948bfbdfeb0" exitCode=0 Feb 26 22:16:01 crc kubenswrapper[4910]: I0226 22:16:01.098825 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6tz7l" event={"ID":"3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed","Type":"ContainerDied","Data":"2708cbdeed5235b809b4cd949e280f4d191c15f9af2ac5a665ad5948bfbdfeb0"} Feb 26 22:16:01 crc kubenswrapper[4910]: I0226 22:16:01.102590 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535736-jb2nz" event={"ID":"99d069cf-0b32-4927-ba6c-367ce8cfa0c5","Type":"ContainerStarted","Data":"9e8bb315ceade270e7bd830e72c0f644692aae350e508e31551b32097734b5f3"} Feb 26 22:16:01 crc kubenswrapper[4910]: I0226 22:16:01.103720 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ac50df3e-b0e1-432e-9749-95c00d5a6281","Type":"ContainerStarted","Data":"3a2f6f9220df12aaa737aba6b0a28aac9814ea8dabdb22c61dfc68e960799796"} Feb 26 22:16:01 crc kubenswrapper[4910]: I0226 22:16:01.257758 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 26 22:16:01 crc kubenswrapper[4910]: I0226 22:16:01.258893 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 26 22:16:01 crc kubenswrapper[4910]: I0226 22:16:01.359484 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 26 22:16:01 crc kubenswrapper[4910]: I0226 22:16:01.512968 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 26 22:16:01 crc kubenswrapper[4910]: I0226 22:16:01.780855 4910 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-bf28-account-create-update-8q7x7"] Feb 26 22:16:01 crc kubenswrapper[4910]: I0226 22:16:01.782645 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bf28-account-create-update-8q7x7" Feb 26 22:16:01 crc kubenswrapper[4910]: I0226 22:16:01.785121 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 26 22:16:01 crc kubenswrapper[4910]: I0226 22:16:01.789279 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-xztls"] Feb 26 22:16:01 crc kubenswrapper[4910]: I0226 22:16:01.790918 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-xztls" Feb 26 22:16:01 crc kubenswrapper[4910]: I0226 22:16:01.799344 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bf28-account-create-update-8q7x7"] Feb 26 22:16:01 crc kubenswrapper[4910]: I0226 22:16:01.824477 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-xztls"] Feb 26 22:16:01 crc kubenswrapper[4910]: I0226 22:16:01.907811 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3e4d9b8-abdb-4f3c-8e33-334917a86288-operator-scripts\") pod \"glance-db-create-xztls\" (UID: \"d3e4d9b8-abdb-4f3c-8e33-334917a86288\") " pod="openstack/glance-db-create-xztls" Feb 26 22:16:01 crc kubenswrapper[4910]: I0226 22:16:01.907871 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32ca2c64-13ce-46ee-be2e-a54d05e5a626-operator-scripts\") pod \"glance-bf28-account-create-update-8q7x7\" (UID: \"32ca2c64-13ce-46ee-be2e-a54d05e5a626\") " pod="openstack/glance-bf28-account-create-update-8q7x7" Feb 26 22:16:01 crc kubenswrapper[4910]: I0226 22:16:01.907899 4910 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mb9m\" (UniqueName: \"kubernetes.io/projected/d3e4d9b8-abdb-4f3c-8e33-334917a86288-kube-api-access-9mb9m\") pod \"glance-db-create-xztls\" (UID: \"d3e4d9b8-abdb-4f3c-8e33-334917a86288\") " pod="openstack/glance-db-create-xztls" Feb 26 22:16:01 crc kubenswrapper[4910]: I0226 22:16:01.907934 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b892z\" (UniqueName: \"kubernetes.io/projected/32ca2c64-13ce-46ee-be2e-a54d05e5a626-kube-api-access-b892z\") pod \"glance-bf28-account-create-update-8q7x7\" (UID: \"32ca2c64-13ce-46ee-be2e-a54d05e5a626\") " pod="openstack/glance-bf28-account-create-update-8q7x7" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.009596 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3e4d9b8-abdb-4f3c-8e33-334917a86288-operator-scripts\") pod \"glance-db-create-xztls\" (UID: \"d3e4d9b8-abdb-4f3c-8e33-334917a86288\") " pod="openstack/glance-db-create-xztls" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.009657 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32ca2c64-13ce-46ee-be2e-a54d05e5a626-operator-scripts\") pod \"glance-bf28-account-create-update-8q7x7\" (UID: \"32ca2c64-13ce-46ee-be2e-a54d05e5a626\") " pod="openstack/glance-bf28-account-create-update-8q7x7" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.009697 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mb9m\" (UniqueName: \"kubernetes.io/projected/d3e4d9b8-abdb-4f3c-8e33-334917a86288-kube-api-access-9mb9m\") pod \"glance-db-create-xztls\" (UID: \"d3e4d9b8-abdb-4f3c-8e33-334917a86288\") " pod="openstack/glance-db-create-xztls" Feb 26 22:16:02 crc 
kubenswrapper[4910]: I0226 22:16:02.009795 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b892z\" (UniqueName: \"kubernetes.io/projected/32ca2c64-13ce-46ee-be2e-a54d05e5a626-kube-api-access-b892z\") pod \"glance-bf28-account-create-update-8q7x7\" (UID: \"32ca2c64-13ce-46ee-be2e-a54d05e5a626\") " pod="openstack/glance-bf28-account-create-update-8q7x7" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.011809 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3e4d9b8-abdb-4f3c-8e33-334917a86288-operator-scripts\") pod \"glance-db-create-xztls\" (UID: \"d3e4d9b8-abdb-4f3c-8e33-334917a86288\") " pod="openstack/glance-db-create-xztls" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.012428 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32ca2c64-13ce-46ee-be2e-a54d05e5a626-operator-scripts\") pod \"glance-bf28-account-create-update-8q7x7\" (UID: \"32ca2c64-13ce-46ee-be2e-a54d05e5a626\") " pod="openstack/glance-bf28-account-create-update-8q7x7" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.028816 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b892z\" (UniqueName: \"kubernetes.io/projected/32ca2c64-13ce-46ee-be2e-a54d05e5a626-kube-api-access-b892z\") pod \"glance-bf28-account-create-update-8q7x7\" (UID: \"32ca2c64-13ce-46ee-be2e-a54d05e5a626\") " pod="openstack/glance-bf28-account-create-update-8q7x7" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.033987 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mb9m\" (UniqueName: \"kubernetes.io/projected/d3e4d9b8-abdb-4f3c-8e33-334917a86288-kube-api-access-9mb9m\") pod \"glance-db-create-xztls\" (UID: \"d3e4d9b8-abdb-4f3c-8e33-334917a86288\") " pod="openstack/glance-db-create-xztls" Feb 26 
22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.116583 4910 generic.go:334] "Generic (PLEG): container finished" podID="60fb0251-1bd0-4e06-a368-5aceb0afaa87" containerID="b68f7fd66a69d2dc2cf55bfae90177d3da25a9a937040e56e2d6047313f7b8fe" exitCode=0 Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.116653 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"60fb0251-1bd0-4e06-a368-5aceb0afaa87","Type":"ContainerDied","Data":"b68f7fd66a69d2dc2cf55bfae90177d3da25a9a937040e56e2d6047313f7b8fe"} Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.122938 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ac50df3e-b0e1-432e-9749-95c00d5a6281","Type":"ContainerStarted","Data":"db717c3ab1c3d3f8b15217144667e44dd38f42f44a3b435dbc5c3625ab48fa7d"} Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.125801 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6tz7l" event={"ID":"3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed","Type":"ContainerStarted","Data":"d3e4fad330eb48bfbda72cec3477c66b99d2859040d080958ea38a72e03c6f47"} Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.125840 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6tz7l" event={"ID":"3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed","Type":"ContainerStarted","Data":"e8c73e3eddee480905192ce5b49a8e301f06e4c08cdc691e6c041a9b561ae903"} Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.126574 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-6tz7l" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.126615 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-6tz7l" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.158749 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-controller-ovs-6tz7l" podStartSLOduration=21.40812517 podStartE2EDuration="35.158732264s" podCreationTimestamp="2026-02-26 22:15:27 +0000 UTC" firstStartedPulling="2026-02-26 22:15:45.658886461 +0000 UTC m=+1230.738376992" lastFinishedPulling="2026-02-26 22:15:59.409493505 +0000 UTC m=+1244.488984086" observedRunningTime="2026-02-26 22:16:02.149770965 +0000 UTC m=+1247.229261506" watchObservedRunningTime="2026-02-26 22:16:02.158732264 +0000 UTC m=+1247.238222805" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.215397 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bf28-account-create-update-8q7x7" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.222823 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-xztls" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.224805 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.315316 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-pqvdk" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.368849 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-mxtrr"] Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.369568 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-mxtrr" podUID="0ede30cd-328a-42f0-94e2-951689df8a77" containerName="dnsmasq-dns" containerID="cri-o://f3d9b2c32ca92b0a07f1c38bd18a02c6e7c3639349d82c361728b9d2e74ecf59" gracePeriod=10 Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.373381 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf47b49b7-mxtrr" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.447758 4910 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-9792f"] Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.449050 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9792f" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.457340 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-9792f"] Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.518875 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06a47034-85d8-4a6d-983b-cf69ea88a122-operator-scripts\") pod \"keystone-db-create-9792f\" (UID: \"06a47034-85d8-4a6d-983b-cf69ea88a122\") " pod="openstack/keystone-db-create-9792f" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.519217 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8vm5\" (UniqueName: \"kubernetes.io/projected/06a47034-85d8-4a6d-983b-cf69ea88a122-kube-api-access-p8vm5\") pod \"keystone-db-create-9792f\" (UID: \"06a47034-85d8-4a6d-983b-cf69ea88a122\") " pod="openstack/keystone-db-create-9792f" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.567428 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-78c5-account-create-update-cxn9r"] Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.571406 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-78c5-account-create-update-cxn9r" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.580783 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.585342 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-78c5-account-create-update-cxn9r"] Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.620060 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06a47034-85d8-4a6d-983b-cf69ea88a122-operator-scripts\") pod \"keystone-db-create-9792f\" (UID: \"06a47034-85d8-4a6d-983b-cf69ea88a122\") " pod="openstack/keystone-db-create-9792f" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.620325 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8vm5\" (UniqueName: \"kubernetes.io/projected/06a47034-85d8-4a6d-983b-cf69ea88a122-kube-api-access-p8vm5\") pod \"keystone-db-create-9792f\" (UID: \"06a47034-85d8-4a6d-983b-cf69ea88a122\") " pod="openstack/keystone-db-create-9792f" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.621241 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06a47034-85d8-4a6d-983b-cf69ea88a122-operator-scripts\") pod \"keystone-db-create-9792f\" (UID: \"06a47034-85d8-4a6d-983b-cf69ea88a122\") " pod="openstack/keystone-db-create-9792f" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.750145 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8clr\" (UniqueName: \"kubernetes.io/projected/e0775bf3-efcb-489c-acfb-5bd1ee95391a-kube-api-access-x8clr\") pod \"keystone-78c5-account-create-update-cxn9r\" (UID: \"e0775bf3-efcb-489c-acfb-5bd1ee95391a\") " 
pod="openstack/keystone-78c5-account-create-update-cxn9r" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.750227 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0775bf3-efcb-489c-acfb-5bd1ee95391a-operator-scripts\") pod \"keystone-78c5-account-create-update-cxn9r\" (UID: \"e0775bf3-efcb-489c-acfb-5bd1ee95391a\") " pod="openstack/keystone-78c5-account-create-update-cxn9r" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.785439 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8vm5\" (UniqueName: \"kubernetes.io/projected/06a47034-85d8-4a6d-983b-cf69ea88a122-kube-api-access-p8vm5\") pod \"keystone-db-create-9792f\" (UID: \"06a47034-85d8-4a6d-983b-cf69ea88a122\") " pod="openstack/keystone-db-create-9792f" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.804539 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-qt2rz"] Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.805806 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qt2rz" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.820636 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-qt2rz"] Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.831242 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6328-account-create-update-r69jh"] Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.832515 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6328-account-create-update-r69jh" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.844474 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.847956 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6328-account-create-update-r69jh"] Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.851573 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8clr\" (UniqueName: \"kubernetes.io/projected/e0775bf3-efcb-489c-acfb-5bd1ee95391a-kube-api-access-x8clr\") pod \"keystone-78c5-account-create-update-cxn9r\" (UID: \"e0775bf3-efcb-489c-acfb-5bd1ee95391a\") " pod="openstack/keystone-78c5-account-create-update-cxn9r" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.851606 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0775bf3-efcb-489c-acfb-5bd1ee95391a-operator-scripts\") pod \"keystone-78c5-account-create-update-cxn9r\" (UID: \"e0775bf3-efcb-489c-acfb-5bd1ee95391a\") " pod="openstack/keystone-78c5-account-create-update-cxn9r" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.852227 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0775bf3-efcb-489c-acfb-5bd1ee95391a-operator-scripts\") pod \"keystone-78c5-account-create-update-cxn9r\" (UID: \"e0775bf3-efcb-489c-acfb-5bd1ee95391a\") " pod="openstack/keystone-78c5-account-create-update-cxn9r" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.873068 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-9792f" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.876365 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8clr\" (UniqueName: \"kubernetes.io/projected/e0775bf3-efcb-489c-acfb-5bd1ee95391a-kube-api-access-x8clr\") pod \"keystone-78c5-account-create-update-cxn9r\" (UID: \"e0775bf3-efcb-489c-acfb-5bd1ee95391a\") " pod="openstack/keystone-78c5-account-create-update-cxn9r" Feb 26 22:16:02 crc kubenswrapper[4910]: I0226 22:16:02.921786 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-78c5-account-create-update-cxn9r" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:02.953614 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjwlq\" (UniqueName: \"kubernetes.io/projected/2a08a89c-9d68-445e-bd75-be72757906a6-kube-api-access-pjwlq\") pod \"placement-6328-account-create-update-r69jh\" (UID: \"2a08a89c-9d68-445e-bd75-be72757906a6\") " pod="openstack/placement-6328-account-create-update-r69jh" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:02.953878 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmwrf\" (UniqueName: \"kubernetes.io/projected/0ec61180-421b-4df0-8cd0-5cc207b4a179-kube-api-access-zmwrf\") pod \"placement-db-create-qt2rz\" (UID: \"0ec61180-421b-4df0-8cd0-5cc207b4a179\") " pod="openstack/placement-db-create-qt2rz" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:02.953948 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a08a89c-9d68-445e-bd75-be72757906a6-operator-scripts\") pod \"placement-6328-account-create-update-r69jh\" (UID: \"2a08a89c-9d68-445e-bd75-be72757906a6\") " pod="openstack/placement-6328-account-create-update-r69jh" Feb 26 
22:16:06 crc kubenswrapper[4910]: I0226 22:16:02.954041 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ec61180-421b-4df0-8cd0-5cc207b4a179-operator-scripts\") pod \"placement-db-create-qt2rz\" (UID: \"0ec61180-421b-4df0-8cd0-5cc207b4a179\") " pod="openstack/placement-db-create-qt2rz" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.061458 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmwrf\" (UniqueName: \"kubernetes.io/projected/0ec61180-421b-4df0-8cd0-5cc207b4a179-kube-api-access-zmwrf\") pod \"placement-db-create-qt2rz\" (UID: \"0ec61180-421b-4df0-8cd0-5cc207b4a179\") " pod="openstack/placement-db-create-qt2rz" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.061725 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a08a89c-9d68-445e-bd75-be72757906a6-operator-scripts\") pod \"placement-6328-account-create-update-r69jh\" (UID: \"2a08a89c-9d68-445e-bd75-be72757906a6\") " pod="openstack/placement-6328-account-create-update-r69jh" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.061777 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ec61180-421b-4df0-8cd0-5cc207b4a179-operator-scripts\") pod \"placement-db-create-qt2rz\" (UID: \"0ec61180-421b-4df0-8cd0-5cc207b4a179\") " pod="openstack/placement-db-create-qt2rz" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.061810 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjwlq\" (UniqueName: \"kubernetes.io/projected/2a08a89c-9d68-445e-bd75-be72757906a6-kube-api-access-pjwlq\") pod \"placement-6328-account-create-update-r69jh\" (UID: \"2a08a89c-9d68-445e-bd75-be72757906a6\") " 
pod="openstack/placement-6328-account-create-update-r69jh" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.062007 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-xztls"] Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.062724 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a08a89c-9d68-445e-bd75-be72757906a6-operator-scripts\") pod \"placement-6328-account-create-update-r69jh\" (UID: \"2a08a89c-9d68-445e-bd75-be72757906a6\") " pod="openstack/placement-6328-account-create-update-r69jh" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.063067 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ec61180-421b-4df0-8cd0-5cc207b4a179-operator-scripts\") pod \"placement-db-create-qt2rz\" (UID: \"0ec61180-421b-4df0-8cd0-5cc207b4a179\") " pod="openstack/placement-db-create-qt2rz" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.078166 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bf28-account-create-update-8q7x7"] Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.084707 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjwlq\" (UniqueName: \"kubernetes.io/projected/2a08a89c-9d68-445e-bd75-be72757906a6-kube-api-access-pjwlq\") pod \"placement-6328-account-create-update-r69jh\" (UID: \"2a08a89c-9d68-445e-bd75-be72757906a6\") " pod="openstack/placement-6328-account-create-update-r69jh" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.086649 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmwrf\" (UniqueName: \"kubernetes.io/projected/0ec61180-421b-4df0-8cd0-5cc207b4a179-kube-api-access-zmwrf\") pod \"placement-db-create-qt2rz\" (UID: \"0ec61180-421b-4df0-8cd0-5cc207b4a179\") " pod="openstack/placement-db-create-qt2rz" Feb 
26 22:16:06 crc kubenswrapper[4910]: W0226 22:16:03.118398 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32ca2c64_13ce_46ee_be2e_a54d05e5a626.slice/crio-9f010bf683a28cec373549c1138842591a375469bb3f5d51e5e4b931619d0056 WatchSource:0}: Error finding container 9f010bf683a28cec373549c1138842591a375469bb3f5d51e5e4b931619d0056: Status 404 returned error can't find the container with id 9f010bf683a28cec373549c1138842591a375469bb3f5d51e5e4b931619d0056 Feb 26 22:16:06 crc kubenswrapper[4910]: W0226 22:16:03.118879 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3e4d9b8_abdb_4f3c_8e33_334917a86288.slice/crio-57bc770047323cfacb6795a5feaaa7605b4e8b3501b618e96fd6ebe85dad83b9 WatchSource:0}: Error finding container 57bc770047323cfacb6795a5feaaa7605b4e8b3501b618e96fd6ebe85dad83b9: Status 404 returned error can't find the container with id 57bc770047323cfacb6795a5feaaa7605b4e8b3501b618e96fd6ebe85dad83b9 Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.125082 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-mxtrr" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.137363 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ac50df3e-b0e1-432e-9749-95c00d5a6281","Type":"ContainerStarted","Data":"b7c8b08a2343b84f528c228293affc3db51abd6f0bfe5b175468051050edf29e"} Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.137453 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.139098 4910 generic.go:334] "Generic (PLEG): container finished" podID="0ede30cd-328a-42f0-94e2-951689df8a77" containerID="f3d9b2c32ca92b0a07f1c38bd18a02c6e7c3639349d82c361728b9d2e74ecf59" exitCode=0 Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.139255 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-mxtrr" event={"ID":"0ede30cd-328a-42f0-94e2-951689df8a77","Type":"ContainerDied","Data":"f3d9b2c32ca92b0a07f1c38bd18a02c6e7c3639349d82c361728b9d2e74ecf59"} Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.139283 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-mxtrr" event={"ID":"0ede30cd-328a-42f0-94e2-951689df8a77","Type":"ContainerDied","Data":"ff8879c0c5273e16512ffd57e5e70068a196cbb40f5b080fb85d1da50ea8ee9f"} Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.139325 4910 scope.go:117] "RemoveContainer" containerID="f3d9b2c32ca92b0a07f1c38bd18a02c6e7c3639349d82c361728b9d2e74ecf59" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.139453 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-mxtrr" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.143645 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xztls" event={"ID":"d3e4d9b8-abdb-4f3c-8e33-334917a86288","Type":"ContainerStarted","Data":"57bc770047323cfacb6795a5feaaa7605b4e8b3501b618e96fd6ebe85dad83b9"} Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.152202 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qt2rz" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.159923 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bf28-account-create-update-8q7x7" event={"ID":"32ca2c64-13ce-46ee-be2e-a54d05e5a626","Type":"ContainerStarted","Data":"9f010bf683a28cec373549c1138842591a375469bb3f5d51e5e4b931619d0056"} Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.163905 4910 generic.go:334] "Generic (PLEG): container finished" podID="99d069cf-0b32-4927-ba6c-367ce8cfa0c5" containerID="87000a89b959093e720699295fffbff1d763fe99a0f9c28ab10c20d241ef10b9" exitCode=0 Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.163965 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535736-jb2nz" event={"ID":"99d069cf-0b32-4927-ba6c-367ce8cfa0c5","Type":"ContainerDied","Data":"87000a89b959093e720699295fffbff1d763fe99a0f9c28ab10c20d241ef10b9"} Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.166016 4910 generic.go:334] "Generic (PLEG): container finished" podID="2f98425b-65de-48d2-be21-2c443218eacd" containerID="8b868dc2396218530dff7210a284cc3ac6f666370d5b0e57cb6f84461df2591c" exitCode=0 Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.166212 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"2f98425b-65de-48d2-be21-2c443218eacd","Type":"ContainerDied","Data":"8b868dc2396218530dff7210a284cc3ac6f666370d5b0e57cb6f84461df2591c"} Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.173361 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6328-account-create-update-r69jh" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.174087 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.033977201 podStartE2EDuration="4.1740704s" podCreationTimestamp="2026-02-26 22:15:59 +0000 UTC" firstStartedPulling="2026-02-26 22:16:00.671875789 +0000 UTC m=+1245.751366330" lastFinishedPulling="2026-02-26 22:16:01.811968988 +0000 UTC m=+1246.891459529" observedRunningTime="2026-02-26 22:16:03.172221788 +0000 UTC m=+1248.251712329" watchObservedRunningTime="2026-02-26 22:16:03.1740704 +0000 UTC m=+1248.253560941" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.199477 4910 scope.go:117] "RemoveContainer" containerID="0d77eef18cb5ccd89e85167c8dcd08c77c033ea8f070cea52250c3cb92d2172e" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.264450 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ede30cd-328a-42f0-94e2-951689df8a77-config\") pod \"0ede30cd-328a-42f0-94e2-951689df8a77\" (UID: \"0ede30cd-328a-42f0-94e2-951689df8a77\") " Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.264501 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ede30cd-328a-42f0-94e2-951689df8a77-dns-svc\") pod \"0ede30cd-328a-42f0-94e2-951689df8a77\" (UID: \"0ede30cd-328a-42f0-94e2-951689df8a77\") " Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.264543 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj6qr\" (UniqueName: 
\"kubernetes.io/projected/0ede30cd-328a-42f0-94e2-951689df8a77-kube-api-access-vj6qr\") pod \"0ede30cd-328a-42f0-94e2-951689df8a77\" (UID: \"0ede30cd-328a-42f0-94e2-951689df8a77\") " Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.264652 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ede30cd-328a-42f0-94e2-951689df8a77-ovsdbserver-nb\") pod \"0ede30cd-328a-42f0-94e2-951689df8a77\" (UID: \"0ede30cd-328a-42f0-94e2-951689df8a77\") " Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.268769 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ede30cd-328a-42f0-94e2-951689df8a77-kube-api-access-vj6qr" (OuterVolumeSpecName: "kube-api-access-vj6qr") pod "0ede30cd-328a-42f0-94e2-951689df8a77" (UID: "0ede30cd-328a-42f0-94e2-951689df8a77"). InnerVolumeSpecName "kube-api-access-vj6qr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.279639 4910 scope.go:117] "RemoveContainer" containerID="f3d9b2c32ca92b0a07f1c38bd18a02c6e7c3639349d82c361728b9d2e74ecf59" Feb 26 22:16:06 crc kubenswrapper[4910]: E0226 22:16:03.285326 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3d9b2c32ca92b0a07f1c38bd18a02c6e7c3639349d82c361728b9d2e74ecf59\": container with ID starting with f3d9b2c32ca92b0a07f1c38bd18a02c6e7c3639349d82c361728b9d2e74ecf59 not found: ID does not exist" containerID="f3d9b2c32ca92b0a07f1c38bd18a02c6e7c3639349d82c361728b9d2e74ecf59" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.285363 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3d9b2c32ca92b0a07f1c38bd18a02c6e7c3639349d82c361728b9d2e74ecf59"} err="failed to get container status \"f3d9b2c32ca92b0a07f1c38bd18a02c6e7c3639349d82c361728b9d2e74ecf59\": rpc error: code = 
NotFound desc = could not find container \"f3d9b2c32ca92b0a07f1c38bd18a02c6e7c3639349d82c361728b9d2e74ecf59\": container with ID starting with f3d9b2c32ca92b0a07f1c38bd18a02c6e7c3639349d82c361728b9d2e74ecf59 not found: ID does not exist" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.285388 4910 scope.go:117] "RemoveContainer" containerID="0d77eef18cb5ccd89e85167c8dcd08c77c033ea8f070cea52250c3cb92d2172e" Feb 26 22:16:06 crc kubenswrapper[4910]: E0226 22:16:03.286020 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d77eef18cb5ccd89e85167c8dcd08c77c033ea8f070cea52250c3cb92d2172e\": container with ID starting with 0d77eef18cb5ccd89e85167c8dcd08c77c033ea8f070cea52250c3cb92d2172e not found: ID does not exist" containerID="0d77eef18cb5ccd89e85167c8dcd08c77c033ea8f070cea52250c3cb92d2172e" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.286078 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d77eef18cb5ccd89e85167c8dcd08c77c033ea8f070cea52250c3cb92d2172e"} err="failed to get container status \"0d77eef18cb5ccd89e85167c8dcd08c77c033ea8f070cea52250c3cb92d2172e\": rpc error: code = NotFound desc = could not find container \"0d77eef18cb5ccd89e85167c8dcd08c77c033ea8f070cea52250c3cb92d2172e\": container with ID starting with 0d77eef18cb5ccd89e85167c8dcd08c77c033ea8f070cea52250c3cb92d2172e not found: ID does not exist" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.327982 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ede30cd-328a-42f0-94e2-951689df8a77-config" (OuterVolumeSpecName: "config") pod "0ede30cd-328a-42f0-94e2-951689df8a77" (UID: "0ede30cd-328a-42f0-94e2-951689df8a77"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.342521 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ede30cd-328a-42f0-94e2-951689df8a77-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0ede30cd-328a-42f0-94e2-951689df8a77" (UID: "0ede30cd-328a-42f0-94e2-951689df8a77"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.360111 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ede30cd-328a-42f0-94e2-951689df8a77-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0ede30cd-328a-42f0-94e2-951689df8a77" (UID: "0ede30cd-328a-42f0-94e2-951689df8a77"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.368656 4910 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ede30cd-328a-42f0-94e2-951689df8a77-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.368694 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ede30cd-328a-42f0-94e2-951689df8a77-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.368707 4910 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ede30cd-328a-42f0-94e2-951689df8a77-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.368721 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj6qr\" (UniqueName: \"kubernetes.io/projected/0ede30cd-328a-42f0-94e2-951689df8a77-kube-api-access-vj6qr\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:06 crc 
kubenswrapper[4910]: I0226 22:16:03.472864 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-mxtrr"] Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.480150 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-mxtrr"] Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.587454 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.724867 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-mwkqc"] Feb 26 22:16:06 crc kubenswrapper[4910]: E0226 22:16:03.725648 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ede30cd-328a-42f0-94e2-951689df8a77" containerName="dnsmasq-dns" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.725660 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ede30cd-328a-42f0-94e2-951689df8a77" containerName="dnsmasq-dns" Feb 26 22:16:06 crc kubenswrapper[4910]: E0226 22:16:03.726223 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ede30cd-328a-42f0-94e2-951689df8a77" containerName="init" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.726236 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ede30cd-328a-42f0-94e2-951689df8a77" containerName="init" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.726439 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ede30cd-328a-42f0-94e2-951689df8a77" containerName="dnsmasq-dns" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.734245 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-mwkqc" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.746131 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-mwkqc"] Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.881387 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c41a76f7-e78e-400a-b92b-aa690d360c6e-config\") pod \"dnsmasq-dns-b8fbc5445-mwkqc\" (UID: \"c41a76f7-e78e-400a-b92b-aa690d360c6e\") " pod="openstack/dnsmasq-dns-b8fbc5445-mwkqc" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.881448 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjv4p\" (UniqueName: \"kubernetes.io/projected/c41a76f7-e78e-400a-b92b-aa690d360c6e-kube-api-access-zjv4p\") pod \"dnsmasq-dns-b8fbc5445-mwkqc\" (UID: \"c41a76f7-e78e-400a-b92b-aa690d360c6e\") " pod="openstack/dnsmasq-dns-b8fbc5445-mwkqc" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.881622 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c41a76f7-e78e-400a-b92b-aa690d360c6e-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-mwkqc\" (UID: \"c41a76f7-e78e-400a-b92b-aa690d360c6e\") " pod="openstack/dnsmasq-dns-b8fbc5445-mwkqc" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.881674 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c41a76f7-e78e-400a-b92b-aa690d360c6e-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-mwkqc\" (UID: \"c41a76f7-e78e-400a-b92b-aa690d360c6e\") " pod="openstack/dnsmasq-dns-b8fbc5445-mwkqc" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.881725 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c41a76f7-e78e-400a-b92b-aa690d360c6e-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-mwkqc\" (UID: \"c41a76f7-e78e-400a-b92b-aa690d360c6e\") " pod="openstack/dnsmasq-dns-b8fbc5445-mwkqc" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.918731 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ede30cd-328a-42f0-94e2-951689df8a77" path="/var/lib/kubelet/pods/0ede30cd-328a-42f0-94e2-951689df8a77/volumes" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.983121 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c41a76f7-e78e-400a-b92b-aa690d360c6e-config\") pod \"dnsmasq-dns-b8fbc5445-mwkqc\" (UID: \"c41a76f7-e78e-400a-b92b-aa690d360c6e\") " pod="openstack/dnsmasq-dns-b8fbc5445-mwkqc" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.983181 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjv4p\" (UniqueName: \"kubernetes.io/projected/c41a76f7-e78e-400a-b92b-aa690d360c6e-kube-api-access-zjv4p\") pod \"dnsmasq-dns-b8fbc5445-mwkqc\" (UID: \"c41a76f7-e78e-400a-b92b-aa690d360c6e\") " pod="openstack/dnsmasq-dns-b8fbc5445-mwkqc" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.983305 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c41a76f7-e78e-400a-b92b-aa690d360c6e-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-mwkqc\" (UID: \"c41a76f7-e78e-400a-b92b-aa690d360c6e\") " pod="openstack/dnsmasq-dns-b8fbc5445-mwkqc" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.983336 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c41a76f7-e78e-400a-b92b-aa690d360c6e-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-mwkqc\" (UID: \"c41a76f7-e78e-400a-b92b-aa690d360c6e\") " 
pod="openstack/dnsmasq-dns-b8fbc5445-mwkqc" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.983356 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c41a76f7-e78e-400a-b92b-aa690d360c6e-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-mwkqc\" (UID: \"c41a76f7-e78e-400a-b92b-aa690d360c6e\") " pod="openstack/dnsmasq-dns-b8fbc5445-mwkqc" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.984120 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c41a76f7-e78e-400a-b92b-aa690d360c6e-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-mwkqc\" (UID: \"c41a76f7-e78e-400a-b92b-aa690d360c6e\") " pod="openstack/dnsmasq-dns-b8fbc5445-mwkqc" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.984627 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c41a76f7-e78e-400a-b92b-aa690d360c6e-config\") pod \"dnsmasq-dns-b8fbc5445-mwkqc\" (UID: \"c41a76f7-e78e-400a-b92b-aa690d360c6e\") " pod="openstack/dnsmasq-dns-b8fbc5445-mwkqc" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.985093 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c41a76f7-e78e-400a-b92b-aa690d360c6e-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-mwkqc\" (UID: \"c41a76f7-e78e-400a-b92b-aa690d360c6e\") " pod="openstack/dnsmasq-dns-b8fbc5445-mwkqc" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:03.985817 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c41a76f7-e78e-400a-b92b-aa690d360c6e-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-mwkqc\" (UID: \"c41a76f7-e78e-400a-b92b-aa690d360c6e\") " pod="openstack/dnsmasq-dns-b8fbc5445-mwkqc" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:04.022410 4910 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjv4p\" (UniqueName: \"kubernetes.io/projected/c41a76f7-e78e-400a-b92b-aa690d360c6e-kube-api-access-zjv4p\") pod \"dnsmasq-dns-b8fbc5445-mwkqc\" (UID: \"c41a76f7-e78e-400a-b92b-aa690d360c6e\") " pod="openstack/dnsmasq-dns-b8fbc5445-mwkqc" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:04.079184 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-mwkqc" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:04.227113 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bf28-account-create-update-8q7x7" event={"ID":"32ca2c64-13ce-46ee-be2e-a54d05e5a626","Type":"ContainerStarted","Data":"f0ca4eed2eed2a85f5e4f1bac0a3304daaa4aa354181440a9bb63b2489348e85"} Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:04.835436 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:04.843143 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:04.847866 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:04.848066 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-vpxlx" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:04.848208 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:04.848313 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:04.861626 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:04.902828 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/30b027eb-e942-4121-aebc-776d616b902e-etc-swift\") pod \"swift-storage-0\" (UID: \"30b027eb-e942-4121-aebc-776d616b902e\") " pod="openstack/swift-storage-0" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:04.902897 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7jfd\" (UniqueName: \"kubernetes.io/projected/30b027eb-e942-4121-aebc-776d616b902e-kube-api-access-j7jfd\") pod \"swift-storage-0\" (UID: \"30b027eb-e942-4121-aebc-776d616b902e\") " pod="openstack/swift-storage-0" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:04.902986 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/30b027eb-e942-4121-aebc-776d616b902e-lock\") pod \"swift-storage-0\" (UID: \"30b027eb-e942-4121-aebc-776d616b902e\") " 
pod="openstack/swift-storage-0" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:04.903032 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30b027eb-e942-4121-aebc-776d616b902e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"30b027eb-e942-4121-aebc-776d616b902e\") " pod="openstack/swift-storage-0" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:04.903107 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/30b027eb-e942-4121-aebc-776d616b902e-cache\") pod \"swift-storage-0\" (UID: \"30b027eb-e942-4121-aebc-776d616b902e\") " pod="openstack/swift-storage-0" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:04.903183 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-722f5850-8684-41c0-92d0-3ee14d681ab9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-722f5850-8684-41c0-92d0-3ee14d681ab9\") pod \"swift-storage-0\" (UID: \"30b027eb-e942-4121-aebc-776d616b902e\") " pod="openstack/swift-storage-0" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.005385 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7jfd\" (UniqueName: \"kubernetes.io/projected/30b027eb-e942-4121-aebc-776d616b902e-kube-api-access-j7jfd\") pod \"swift-storage-0\" (UID: \"30b027eb-e942-4121-aebc-776d616b902e\") " pod="openstack/swift-storage-0" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.005429 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/30b027eb-e942-4121-aebc-776d616b902e-etc-swift\") pod \"swift-storage-0\" (UID: \"30b027eb-e942-4121-aebc-776d616b902e\") " pod="openstack/swift-storage-0" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.005455 
4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/30b027eb-e942-4121-aebc-776d616b902e-lock\") pod \"swift-storage-0\" (UID: \"30b027eb-e942-4121-aebc-776d616b902e\") " pod="openstack/swift-storage-0" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.005472 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30b027eb-e942-4121-aebc-776d616b902e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"30b027eb-e942-4121-aebc-776d616b902e\") " pod="openstack/swift-storage-0" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.005511 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/30b027eb-e942-4121-aebc-776d616b902e-cache\") pod \"swift-storage-0\" (UID: \"30b027eb-e942-4121-aebc-776d616b902e\") " pod="openstack/swift-storage-0" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.005566 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-722f5850-8684-41c0-92d0-3ee14d681ab9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-722f5850-8684-41c0-92d0-3ee14d681ab9\") pod \"swift-storage-0\" (UID: \"30b027eb-e942-4121-aebc-776d616b902e\") " pod="openstack/swift-storage-0" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.006624 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/30b027eb-e942-4121-aebc-776d616b902e-lock\") pod \"swift-storage-0\" (UID: \"30b027eb-e942-4121-aebc-776d616b902e\") " pod="openstack/swift-storage-0" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.006856 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/30b027eb-e942-4121-aebc-776d616b902e-cache\") pod \"swift-storage-0\" (UID: 
\"30b027eb-e942-4121-aebc-776d616b902e\") " pod="openstack/swift-storage-0" Feb 26 22:16:06 crc kubenswrapper[4910]: E0226 22:16:05.006896 4910 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 26 22:16:06 crc kubenswrapper[4910]: E0226 22:16:05.006908 4910 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 26 22:16:06 crc kubenswrapper[4910]: E0226 22:16:05.006940 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/30b027eb-e942-4121-aebc-776d616b902e-etc-swift podName:30b027eb-e942-4121-aebc-776d616b902e nodeName:}" failed. No retries permitted until 2026-02-26 22:16:05.50692713 +0000 UTC m=+1250.586417671 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/30b027eb-e942-4121-aebc-776d616b902e-etc-swift") pod "swift-storage-0" (UID: "30b027eb-e942-4121-aebc-776d616b902e") : configmap "swift-ring-files" not found Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.013361 4910 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.013404 4910 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-722f5850-8684-41c0-92d0-3ee14d681ab9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-722f5850-8684-41c0-92d0-3ee14d681ab9\") pod \"swift-storage-0\" (UID: \"30b027eb-e942-4121-aebc-776d616b902e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8cc666c39bd45fb3076e93621a33b987b451b8a731fabd14578730f13c595178/globalmount\"" pod="openstack/swift-storage-0" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.021094 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30b027eb-e942-4121-aebc-776d616b902e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"30b027eb-e942-4121-aebc-776d616b902e\") " pod="openstack/swift-storage-0" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.042975 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7jfd\" (UniqueName: \"kubernetes.io/projected/30b027eb-e942-4121-aebc-776d616b902e-kube-api-access-j7jfd\") pod \"swift-storage-0\" (UID: \"30b027eb-e942-4121-aebc-776d616b902e\") " pod="openstack/swift-storage-0" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.061721 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-722f5850-8684-41c0-92d0-3ee14d681ab9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-722f5850-8684-41c0-92d0-3ee14d681ab9\") pod \"swift-storage-0\" (UID: \"30b027eb-e942-4121-aebc-776d616b902e\") " pod="openstack/swift-storage-0" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.091653 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-cgwcw"] Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.093030 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-cgwcw" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.095089 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.095700 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.095734 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.112638 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-cgwcw"] Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.209023 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-swiftconf\") pod \"swift-ring-rebalance-cgwcw\" (UID: \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\") " pod="openstack/swift-ring-rebalance-cgwcw" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.209069 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-dispersionconf\") pod \"swift-ring-rebalance-cgwcw\" (UID: \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\") " pod="openstack/swift-ring-rebalance-cgwcw" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.209253 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-ring-data-devices\") pod \"swift-ring-rebalance-cgwcw\" (UID: \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\") " pod="openstack/swift-ring-rebalance-cgwcw" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.209310 
4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-etc-swift\") pod \"swift-ring-rebalance-cgwcw\" (UID: \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\") " pod="openstack/swift-ring-rebalance-cgwcw" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.209445 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-combined-ca-bundle\") pod \"swift-ring-rebalance-cgwcw\" (UID: \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\") " pod="openstack/swift-ring-rebalance-cgwcw" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.209557 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpdg7\" (UniqueName: \"kubernetes.io/projected/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-kube-api-access-fpdg7\") pod \"swift-ring-rebalance-cgwcw\" (UID: \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\") " pod="openstack/swift-ring-rebalance-cgwcw" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.209577 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-scripts\") pod \"swift-ring-rebalance-cgwcw\" (UID: \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\") " pod="openstack/swift-ring-rebalance-cgwcw" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.311715 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-swiftconf\") pod \"swift-ring-rebalance-cgwcw\" (UID: \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\") " pod="openstack/swift-ring-rebalance-cgwcw" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.311763 4910 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-dispersionconf\") pod \"swift-ring-rebalance-cgwcw\" (UID: \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\") " pod="openstack/swift-ring-rebalance-cgwcw" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.311819 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-ring-data-devices\") pod \"swift-ring-rebalance-cgwcw\" (UID: \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\") " pod="openstack/swift-ring-rebalance-cgwcw" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.311846 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-etc-swift\") pod \"swift-ring-rebalance-cgwcw\" (UID: \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\") " pod="openstack/swift-ring-rebalance-cgwcw" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.311895 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-combined-ca-bundle\") pod \"swift-ring-rebalance-cgwcw\" (UID: \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\") " pod="openstack/swift-ring-rebalance-cgwcw" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.311935 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpdg7\" (UniqueName: \"kubernetes.io/projected/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-kube-api-access-fpdg7\") pod \"swift-ring-rebalance-cgwcw\" (UID: \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\") " pod="openstack/swift-ring-rebalance-cgwcw" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.311955 4910 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-scripts\") pod \"swift-ring-rebalance-cgwcw\" (UID: \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\") " pod="openstack/swift-ring-rebalance-cgwcw" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.312979 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-etc-swift\") pod \"swift-ring-rebalance-cgwcw\" (UID: \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\") " pod="openstack/swift-ring-rebalance-cgwcw" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.313142 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-scripts\") pod \"swift-ring-rebalance-cgwcw\" (UID: \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\") " pod="openstack/swift-ring-rebalance-cgwcw" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.313453 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-ring-data-devices\") pod \"swift-ring-rebalance-cgwcw\" (UID: \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\") " pod="openstack/swift-ring-rebalance-cgwcw" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.315791 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-swiftconf\") pod \"swift-ring-rebalance-cgwcw\" (UID: \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\") " pod="openstack/swift-ring-rebalance-cgwcw" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.315888 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-combined-ca-bundle\") pod 
\"swift-ring-rebalance-cgwcw\" (UID: \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\") " pod="openstack/swift-ring-rebalance-cgwcw" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.317111 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-dispersionconf\") pod \"swift-ring-rebalance-cgwcw\" (UID: \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\") " pod="openstack/swift-ring-rebalance-cgwcw" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.335204 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpdg7\" (UniqueName: \"kubernetes.io/projected/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-kube-api-access-fpdg7\") pod \"swift-ring-rebalance-cgwcw\" (UID: \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\") " pod="openstack/swift-ring-rebalance-cgwcw" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.468750 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-cgwcw" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:05.516371 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/30b027eb-e942-4121-aebc-776d616b902e-etc-swift\") pod \"swift-storage-0\" (UID: \"30b027eb-e942-4121-aebc-776d616b902e\") " pod="openstack/swift-storage-0" Feb 26 22:16:06 crc kubenswrapper[4910]: E0226 22:16:05.516557 4910 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 26 22:16:06 crc kubenswrapper[4910]: E0226 22:16:05.516573 4910 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 26 22:16:06 crc kubenswrapper[4910]: E0226 22:16:05.516634 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/30b027eb-e942-4121-aebc-776d616b902e-etc-swift podName:30b027eb-e942-4121-aebc-776d616b902e nodeName:}" failed. No retries permitted until 2026-02-26 22:16:06.516616164 +0000 UTC m=+1251.596106705 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/30b027eb-e942-4121-aebc-776d616b902e-etc-swift") pod "swift-storage-0" (UID: "30b027eb-e942-4121-aebc-776d616b902e") : configmap "swift-ring-files" not found Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:06.247449 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xztls" event={"ID":"d3e4d9b8-abdb-4f3c-8e33-334917a86288","Type":"ContainerStarted","Data":"4cfc3fec06de1a50a8b91b6b283814165f4cd05033370fbc806b60f1e586810e"} Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:06.277566 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-bf28-account-create-update-8q7x7" podStartSLOduration=5.277539533 podStartE2EDuration="5.277539533s" podCreationTimestamp="2026-02-26 22:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:16:06.266570598 +0000 UTC m=+1251.346061209" watchObservedRunningTime="2026-02-26 22:16:06.277539533 +0000 UTC m=+1251.357030114" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:06.295286 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-xztls" podStartSLOduration=5.295265517 podStartE2EDuration="5.295265517s" podCreationTimestamp="2026-02-26 22:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:16:06.287285405 +0000 UTC m=+1251.366775956" watchObservedRunningTime="2026-02-26 22:16:06.295265517 +0000 UTC m=+1251.374756088" Feb 26 22:16:06 crc kubenswrapper[4910]: I0226 22:16:06.545456 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/30b027eb-e942-4121-aebc-776d616b902e-etc-swift\") pod \"swift-storage-0\" (UID: 
\"30b027eb-e942-4121-aebc-776d616b902e\") " pod="openstack/swift-storage-0" Feb 26 22:16:06 crc kubenswrapper[4910]: E0226 22:16:06.545705 4910 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 26 22:16:06 crc kubenswrapper[4910]: E0226 22:16:06.545717 4910 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 26 22:16:06 crc kubenswrapper[4910]: E0226 22:16:06.545754 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/30b027eb-e942-4121-aebc-776d616b902e-etc-swift podName:30b027eb-e942-4121-aebc-776d616b902e nodeName:}" failed. No retries permitted until 2026-02-26 22:16:08.545741362 +0000 UTC m=+1253.625231893 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/30b027eb-e942-4121-aebc-776d616b902e-etc-swift") pod "swift-storage-0" (UID: "30b027eb-e942-4121-aebc-776d616b902e") : configmap "swift-ring-files" not found Feb 26 22:16:07 crc kubenswrapper[4910]: I0226 22:16:07.257592 4910 generic.go:334] "Generic (PLEG): container finished" podID="32ca2c64-13ce-46ee-be2e-a54d05e5a626" containerID="f0ca4eed2eed2a85f5e4f1bac0a3304daaa4aa354181440a9bb63b2489348e85" exitCode=0 Feb 26 22:16:07 crc kubenswrapper[4910]: I0226 22:16:07.257697 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bf28-account-create-update-8q7x7" event={"ID":"32ca2c64-13ce-46ee-be2e-a54d05e5a626","Type":"ContainerDied","Data":"f0ca4eed2eed2a85f5e4f1bac0a3304daaa4aa354181440a9bb63b2489348e85"} Feb 26 22:16:07 crc kubenswrapper[4910]: I0226 22:16:07.266665 4910 generic.go:334] "Generic (PLEG): container finished" podID="d3e4d9b8-abdb-4f3c-8e33-334917a86288" containerID="4cfc3fec06de1a50a8b91b6b283814165f4cd05033370fbc806b60f1e586810e" exitCode=0 Feb 26 22:16:07 crc kubenswrapper[4910]: I0226 22:16:07.266700 4910 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xztls" event={"ID":"d3e4d9b8-abdb-4f3c-8e33-334917a86288","Type":"ContainerDied","Data":"4cfc3fec06de1a50a8b91b6b283814165f4cd05033370fbc806b60f1e586810e"} Feb 26 22:16:07 crc kubenswrapper[4910]: I0226 22:16:07.496423 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-cgwcw"] Feb 26 22:16:07 crc kubenswrapper[4910]: I0226 22:16:07.507244 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6328-account-create-update-r69jh"] Feb 26 22:16:07 crc kubenswrapper[4910]: I0226 22:16:07.515317 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-qt2rz"] Feb 26 22:16:07 crc kubenswrapper[4910]: I0226 22:16:07.527340 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-mwkqc"] Feb 26 22:16:07 crc kubenswrapper[4910]: I0226 22:16:07.542430 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-78c5-account-create-update-cxn9r"] Feb 26 22:16:07 crc kubenswrapper[4910]: I0226 22:16:07.551213 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-9792f"] Feb 26 22:16:07 crc kubenswrapper[4910]: W0226 22:16:07.688949 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a08a89c_9d68_445e_bd75_be72757906a6.slice/crio-c8b277521500e898c58e1fa57550dd5c8e802a46a5c1549191ecd80997f12f68 WatchSource:0}: Error finding container c8b277521500e898c58e1fa57550dd5c8e802a46a5c1549191ecd80997f12f68: Status 404 returned error can't find the container with id c8b277521500e898c58e1fa57550dd5c8e802a46a5c1549191ecd80997f12f68 Feb 26 22:16:07 crc kubenswrapper[4910]: I0226 22:16:07.793777 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535736-jb2nz" Feb 26 22:16:07 crc kubenswrapper[4910]: I0226 22:16:07.895681 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkrmt\" (UniqueName: \"kubernetes.io/projected/99d069cf-0b32-4927-ba6c-367ce8cfa0c5-kube-api-access-fkrmt\") pod \"99d069cf-0b32-4927-ba6c-367ce8cfa0c5\" (UID: \"99d069cf-0b32-4927-ba6c-367ce8cfa0c5\") " Feb 26 22:16:07 crc kubenswrapper[4910]: I0226 22:16:07.905639 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99d069cf-0b32-4927-ba6c-367ce8cfa0c5-kube-api-access-fkrmt" (OuterVolumeSpecName: "kube-api-access-fkrmt") pod "99d069cf-0b32-4927-ba6c-367ce8cfa0c5" (UID: "99d069cf-0b32-4927-ba6c-367ce8cfa0c5"). InnerVolumeSpecName "kube-api-access-fkrmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.003089 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkrmt\" (UniqueName: \"kubernetes.io/projected/99d069cf-0b32-4927-ba6c-367ce8cfa0c5-kube-api-access-fkrmt\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.279414 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6328-account-create-update-r69jh" event={"ID":"2a08a89c-9d68-445e-bd75-be72757906a6","Type":"ContainerStarted","Data":"8c551a8eaa71eada9aceefbdce018e9a3105b37d91943ef1c1fbdb3fba511f8a"} Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.279455 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6328-account-create-update-r69jh" event={"ID":"2a08a89c-9d68-445e-bd75-be72757906a6","Type":"ContainerStarted","Data":"c8b277521500e898c58e1fa57550dd5c8e802a46a5c1549191ecd80997f12f68"} Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.285874 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-78c5-account-create-update-cxn9r" event={"ID":"e0775bf3-efcb-489c-acfb-5bd1ee95391a","Type":"ContainerStarted","Data":"eca0c21d5af38389244cca16895996204e1b81bf95c177256a87f37fe0e03d5b"} Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.285923 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-78c5-account-create-update-cxn9r" event={"ID":"e0775bf3-efcb-489c-acfb-5bd1ee95391a","Type":"ContainerStarted","Data":"1a8f5b4329a800548ef88968b99dfa552adc75f9fc4eb40b177d6d4b3f995973"} Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.291380 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535736-jb2nz" Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.291394 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535736-jb2nz" event={"ID":"99d069cf-0b32-4927-ba6c-367ce8cfa0c5","Type":"ContainerDied","Data":"9e8bb315ceade270e7bd830e72c0f644692aae350e508e31551b32097734b5f3"} Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.291439 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e8bb315ceade270e7bd830e72c0f644692aae350e508e31551b32097734b5f3" Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.296815 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cgwcw" event={"ID":"1d4bb0af-a11e-4f9f-a420-fc07f0220b10","Type":"ContainerStarted","Data":"777486c1abeb992e2c7bbda06afe72ad512c0e30dbbfede86d14f172e0ef2af4"} Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.299907 4910 generic.go:334] "Generic (PLEG): container finished" podID="c41a76f7-e78e-400a-b92b-aa690d360c6e" containerID="63e2e63d39a768c2cbe7e77b301ced6d207a5bae3f4b822642320a39497bbe94" exitCode=0 Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.300058 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-b8fbc5445-mwkqc" event={"ID":"c41a76f7-e78e-400a-b92b-aa690d360c6e","Type":"ContainerDied","Data":"63e2e63d39a768c2cbe7e77b301ced6d207a5bae3f4b822642320a39497bbe94"} Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.300116 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-mwkqc" event={"ID":"c41a76f7-e78e-400a-b92b-aa690d360c6e","Type":"ContainerStarted","Data":"d5bff6e206c463760a5738ce4e612d51a6d2a8807d9b75b010870e61a0305ea7"} Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.302840 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"60fb0251-1bd0-4e06-a368-5aceb0afaa87","Type":"ContainerStarted","Data":"4291de98d4dd12dd981b87936f3224a11d1bacf1c95858c904d6313ffde367da"} Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.303034 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6328-account-create-update-r69jh" podStartSLOduration=6.303023298 podStartE2EDuration="6.303023298s" podCreationTimestamp="2026-02-26 22:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:16:08.293894734 +0000 UTC m=+1253.373385285" watchObservedRunningTime="2026-02-26 22:16:08.303023298 +0000 UTC m=+1253.382513839" Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.312312 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qt2rz" event={"ID":"0ec61180-421b-4df0-8cd0-5cc207b4a179","Type":"ContainerStarted","Data":"5eec933bf249a6e0f0fc459720399f2243e9c87f01449546b32af204e08cef34"} Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.312382 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qt2rz" 
event={"ID":"0ec61180-421b-4df0-8cd0-5cc207b4a179","Type":"ContainerStarted","Data":"95f5b5cdb50205cb7e1d884d5a22909da5c288f18da9e0d5da59fa24f9fce77b"} Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.313409 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-78c5-account-create-update-cxn9r" podStartSLOduration=6.313394047 podStartE2EDuration="6.313394047s" podCreationTimestamp="2026-02-26 22:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:16:08.310564208 +0000 UTC m=+1253.390054749" watchObservedRunningTime="2026-02-26 22:16:08.313394047 +0000 UTC m=+1253.392884588" Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.314358 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9792f" event={"ID":"06a47034-85d8-4a6d-983b-cf69ea88a122","Type":"ContainerStarted","Data":"8179d6c7541fc46696b54766751182e9dc2c4679515d9e7c306c64de7a6471ec"} Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.314432 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9792f" event={"ID":"06a47034-85d8-4a6d-983b-cf69ea88a122","Type":"ContainerStarted","Data":"1655e9a16f917be24d48cadd40eb72594cce2bb13c108220470c84ac78c06899"} Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.364375 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-qt2rz" podStartSLOduration=6.364350156 podStartE2EDuration="6.364350156s" podCreationTimestamp="2026-02-26 22:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:16:08.346886689 +0000 UTC m=+1253.426377240" watchObservedRunningTime="2026-02-26 22:16:08.364350156 +0000 UTC m=+1253.443840687" Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.373178 4910 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-9792f" podStartSLOduration=6.373147621 podStartE2EDuration="6.373147621s" podCreationTimestamp="2026-02-26 22:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:16:08.361736833 +0000 UTC m=+1253.441227384" watchObservedRunningTime="2026-02-26 22:16:08.373147621 +0000 UTC m=+1253.452638162" Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.479661 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-btfqs"] Feb 26 22:16:08 crc kubenswrapper[4910]: E0226 22:16:08.480403 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99d069cf-0b32-4927-ba6c-367ce8cfa0c5" containerName="oc" Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.480431 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="99d069cf-0b32-4927-ba6c-367ce8cfa0c5" containerName="oc" Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.480677 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="99d069cf-0b32-4927-ba6c-367ce8cfa0c5" containerName="oc" Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.481716 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-btfqs" Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.484311 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.503636 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-btfqs"] Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.521378 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r77c\" (UniqueName: \"kubernetes.io/projected/46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8-kube-api-access-9r77c\") pod \"root-account-create-update-btfqs\" (UID: \"46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8\") " pod="openstack/root-account-create-update-btfqs" Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.521623 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8-operator-scripts\") pod \"root-account-create-update-btfqs\" (UID: \"46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8\") " pod="openstack/root-account-create-update-btfqs" Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.623973 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r77c\" (UniqueName: \"kubernetes.io/projected/46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8-kube-api-access-9r77c\") pod \"root-account-create-update-btfqs\" (UID: \"46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8\") " pod="openstack/root-account-create-update-btfqs" Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.624109 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/30b027eb-e942-4121-aebc-776d616b902e-etc-swift\") pod \"swift-storage-0\" (UID: \"30b027eb-e942-4121-aebc-776d616b902e\") 
" pod="openstack/swift-storage-0" Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.624139 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8-operator-scripts\") pod \"root-account-create-update-btfqs\" (UID: \"46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8\") " pod="openstack/root-account-create-update-btfqs" Feb 26 22:16:08 crc kubenswrapper[4910]: E0226 22:16:08.624250 4910 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 26 22:16:08 crc kubenswrapper[4910]: E0226 22:16:08.624326 4910 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 26 22:16:08 crc kubenswrapper[4910]: E0226 22:16:08.624384 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/30b027eb-e942-4121-aebc-776d616b902e-etc-swift podName:30b027eb-e942-4121-aebc-776d616b902e nodeName:}" failed. No retries permitted until 2026-02-26 22:16:12.624364866 +0000 UTC m=+1257.703855407 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/30b027eb-e942-4121-aebc-776d616b902e-etc-swift") pod "swift-storage-0" (UID: "30b027eb-e942-4121-aebc-776d616b902e") : configmap "swift-ring-files" not found Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.625394 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8-operator-scripts\") pod \"root-account-create-update-btfqs\" (UID: \"46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8\") " pod="openstack/root-account-create-update-btfqs" Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.653750 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r77c\" (UniqueName: \"kubernetes.io/projected/46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8-kube-api-access-9r77c\") pod \"root-account-create-update-btfqs\" (UID: \"46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8\") " pod="openstack/root-account-create-update-btfqs" Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.783044 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-btfqs" Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.787887 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-xztls" Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.828659 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3e4d9b8-abdb-4f3c-8e33-334917a86288-operator-scripts\") pod \"d3e4d9b8-abdb-4f3c-8e33-334917a86288\" (UID: \"d3e4d9b8-abdb-4f3c-8e33-334917a86288\") " Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.829072 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mb9m\" (UniqueName: \"kubernetes.io/projected/d3e4d9b8-abdb-4f3c-8e33-334917a86288-kube-api-access-9mb9m\") pod \"d3e4d9b8-abdb-4f3c-8e33-334917a86288\" (UID: \"d3e4d9b8-abdb-4f3c-8e33-334917a86288\") " Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.829585 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3e4d9b8-abdb-4f3c-8e33-334917a86288-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d3e4d9b8-abdb-4f3c-8e33-334917a86288" (UID: "d3e4d9b8-abdb-4f3c-8e33-334917a86288"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.829691 4910 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3e4d9b8-abdb-4f3c-8e33-334917a86288-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.842889 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3e4d9b8-abdb-4f3c-8e33-334917a86288-kube-api-access-9mb9m" (OuterVolumeSpecName: "kube-api-access-9mb9m") pod "d3e4d9b8-abdb-4f3c-8e33-334917a86288" (UID: "d3e4d9b8-abdb-4f3c-8e33-334917a86288"). InnerVolumeSpecName "kube-api-access-9mb9m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.866644 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535730-p75th"] Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.875978 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535730-p75th"] Feb 26 22:16:08 crc kubenswrapper[4910]: I0226 22:16:08.931513 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mb9m\" (UniqueName: \"kubernetes.io/projected/d3e4d9b8-abdb-4f3c-8e33-334917a86288-kube-api-access-9mb9m\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:09 crc kubenswrapper[4910]: I0226 22:16:09.015751 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bf28-account-create-update-8q7x7" Feb 26 22:16:09 crc kubenswrapper[4910]: I0226 22:16:09.138052 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32ca2c64-13ce-46ee-be2e-a54d05e5a626-operator-scripts\") pod \"32ca2c64-13ce-46ee-be2e-a54d05e5a626\" (UID: \"32ca2c64-13ce-46ee-be2e-a54d05e5a626\") " Feb 26 22:16:09 crc kubenswrapper[4910]: I0226 22:16:09.138328 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b892z\" (UniqueName: \"kubernetes.io/projected/32ca2c64-13ce-46ee-be2e-a54d05e5a626-kube-api-access-b892z\") pod \"32ca2c64-13ce-46ee-be2e-a54d05e5a626\" (UID: \"32ca2c64-13ce-46ee-be2e-a54d05e5a626\") " Feb 26 22:16:09 crc kubenswrapper[4910]: I0226 22:16:09.138675 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32ca2c64-13ce-46ee-be2e-a54d05e5a626-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "32ca2c64-13ce-46ee-be2e-a54d05e5a626" (UID: "32ca2c64-13ce-46ee-be2e-a54d05e5a626"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:16:09 crc kubenswrapper[4910]: I0226 22:16:09.139215 4910 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32ca2c64-13ce-46ee-be2e-a54d05e5a626-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:09 crc kubenswrapper[4910]: I0226 22:16:09.149390 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32ca2c64-13ce-46ee-be2e-a54d05e5a626-kube-api-access-b892z" (OuterVolumeSpecName: "kube-api-access-b892z") pod "32ca2c64-13ce-46ee-be2e-a54d05e5a626" (UID: "32ca2c64-13ce-46ee-be2e-a54d05e5a626"). InnerVolumeSpecName "kube-api-access-b892z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:16:09 crc kubenswrapper[4910]: I0226 22:16:09.241528 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b892z\" (UniqueName: \"kubernetes.io/projected/32ca2c64-13ce-46ee-be2e-a54d05e5a626-kube-api-access-b892z\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:09 crc kubenswrapper[4910]: I0226 22:16:09.326733 4910 generic.go:334] "Generic (PLEG): container finished" podID="e0775bf3-efcb-489c-acfb-5bd1ee95391a" containerID="eca0c21d5af38389244cca16895996204e1b81bf95c177256a87f37fe0e03d5b" exitCode=0 Feb 26 22:16:09 crc kubenswrapper[4910]: I0226 22:16:09.327151 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-78c5-account-create-update-cxn9r" event={"ID":"e0775bf3-efcb-489c-acfb-5bd1ee95391a","Type":"ContainerDied","Data":"eca0c21d5af38389244cca16895996204e1b81bf95c177256a87f37fe0e03d5b"} Feb 26 22:16:09 crc kubenswrapper[4910]: I0226 22:16:09.330530 4910 generic.go:334] "Generic (PLEG): container finished" podID="0ec61180-421b-4df0-8cd0-5cc207b4a179" containerID="5eec933bf249a6e0f0fc459720399f2243e9c87f01449546b32af204e08cef34" exitCode=0 Feb 26 22:16:09 crc kubenswrapper[4910]: I0226 
22:16:09.330570 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qt2rz" event={"ID":"0ec61180-421b-4df0-8cd0-5cc207b4a179","Type":"ContainerDied","Data":"5eec933bf249a6e0f0fc459720399f2243e9c87f01449546b32af204e08cef34"} Feb 26 22:16:09 crc kubenswrapper[4910]: I0226 22:16:09.333340 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ddsmc" event={"ID":"334061f5-f54a-41b2-8c49-66695cb3639a","Type":"ContainerStarted","Data":"c82b84e523202d4f4ebefae6d6d49000f10cb6cfa7796a219109deba69223656"} Feb 26 22:16:09 crc kubenswrapper[4910]: I0226 22:16:09.333598 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ddsmc" Feb 26 22:16:09 crc kubenswrapper[4910]: I0226 22:16:09.335739 4910 generic.go:334] "Generic (PLEG): container finished" podID="06a47034-85d8-4a6d-983b-cf69ea88a122" containerID="8179d6c7541fc46696b54766751182e9dc2c4679515d9e7c306c64de7a6471ec" exitCode=0 Feb 26 22:16:09 crc kubenswrapper[4910]: I0226 22:16:09.335799 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9792f" event={"ID":"06a47034-85d8-4a6d-983b-cf69ea88a122","Type":"ContainerDied","Data":"8179d6c7541fc46696b54766751182e9dc2c4679515d9e7c306c64de7a6471ec"} Feb 26 22:16:09 crc kubenswrapper[4910]: I0226 22:16:09.337475 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xztls" event={"ID":"d3e4d9b8-abdb-4f3c-8e33-334917a86288","Type":"ContainerDied","Data":"57bc770047323cfacb6795a5feaaa7605b4e8b3501b618e96fd6ebe85dad83b9"} Feb 26 22:16:09 crc kubenswrapper[4910]: I0226 22:16:09.337488 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-xztls" Feb 26 22:16:09 crc kubenswrapper[4910]: I0226 22:16:09.337500 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57bc770047323cfacb6795a5feaaa7605b4e8b3501b618e96fd6ebe85dad83b9" Feb 26 22:16:09 crc kubenswrapper[4910]: I0226 22:16:09.344765 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bf28-account-create-update-8q7x7" Feb 26 22:16:09 crc kubenswrapper[4910]: I0226 22:16:09.344872 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bf28-account-create-update-8q7x7" event={"ID":"32ca2c64-13ce-46ee-be2e-a54d05e5a626","Type":"ContainerDied","Data":"9f010bf683a28cec373549c1138842591a375469bb3f5d51e5e4b931619d0056"} Feb 26 22:16:09 crc kubenswrapper[4910]: I0226 22:16:09.344902 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f010bf683a28cec373549c1138842591a375469bb3f5d51e5e4b931619d0056" Feb 26 22:16:09 crc kubenswrapper[4910]: I0226 22:16:09.350243 4910 generic.go:334] "Generic (PLEG): container finished" podID="2a08a89c-9d68-445e-bd75-be72757906a6" containerID="8c551a8eaa71eada9aceefbdce018e9a3105b37d91943ef1c1fbdb3fba511f8a" exitCode=0 Feb 26 22:16:09 crc kubenswrapper[4910]: I0226 22:16:09.350290 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6328-account-create-update-r69jh" event={"ID":"2a08a89c-9d68-445e-bd75-be72757906a6","Type":"ContainerDied","Data":"8c551a8eaa71eada9aceefbdce018e9a3105b37d91943ef1c1fbdb3fba511f8a"} Feb 26 22:16:09 crc kubenswrapper[4910]: I0226 22:16:09.364982 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-btfqs"] Feb 26 22:16:09 crc kubenswrapper[4910]: W0226 22:16:09.396364 4910 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46ba9a1b_f8df_4d13_83ae_9fa218e7c2a8.slice/crio-cfc316fcaf0064863cd0e7e8691971f8959f08277cbedb6cd23c7f6d3c5a827d WatchSource:0}: Error finding container cfc316fcaf0064863cd0e7e8691971f8959f08277cbedb6cd23c7f6d3c5a827d: Status 404 returned error can't find the container with id cfc316fcaf0064863cd0e7e8691971f8959f08277cbedb6cd23c7f6d3c5a827d Feb 26 22:16:09 crc kubenswrapper[4910]: I0226 22:16:09.405748 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ddsmc" podStartSLOduration=20.274353906 podStartE2EDuration="42.405731216s" podCreationTimestamp="2026-02-26 22:15:27 +0000 UTC" firstStartedPulling="2026-02-26 22:15:45.643457659 +0000 UTC m=+1230.722948200" lastFinishedPulling="2026-02-26 22:16:07.774834949 +0000 UTC m=+1252.854325510" observedRunningTime="2026-02-26 22:16:09.396293203 +0000 UTC m=+1254.475783744" watchObservedRunningTime="2026-02-26 22:16:09.405731216 +0000 UTC m=+1254.485221757" Feb 26 22:16:09 crc kubenswrapper[4910]: I0226 22:16:09.915408 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72aeffcb-4b89-42c4-8be7-3ce49742f95d" path="/var/lib/kubelet/pods/72aeffcb-4b89-42c4-8be7-3ce49742f95d/volumes" Feb 26 22:16:10 crc kubenswrapper[4910]: I0226 22:16:10.360883 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btfqs" event={"ID":"46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8","Type":"ContainerStarted","Data":"a7be97dc228f66ff13403d63e2d6838104a25b05e8ad17cff0b27145d2968f62"} Feb 26 22:16:10 crc kubenswrapper[4910]: I0226 22:16:10.360941 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btfqs" event={"ID":"46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8","Type":"ContainerStarted","Data":"cfc316fcaf0064863cd0e7e8691971f8959f08277cbedb6cd23c7f6d3c5a827d"} Feb 26 22:16:10 crc kubenswrapper[4910]: I0226 22:16:10.362810 4910 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-mwkqc" event={"ID":"c41a76f7-e78e-400a-b92b-aa690d360c6e","Type":"ContainerStarted","Data":"d4318395ca9aadf5be619088bb9cd59e9aad0db0e9c99d426e02524bda1dd46c"} Feb 26 22:16:10 crc kubenswrapper[4910]: I0226 22:16:10.385562 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-btfqs" podStartSLOduration=2.38550508 podStartE2EDuration="2.38550508s" podCreationTimestamp="2026-02-26 22:16:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:16:10.378507335 +0000 UTC m=+1255.457997886" watchObservedRunningTime="2026-02-26 22:16:10.38550508 +0000 UTC m=+1255.464995621" Feb 26 22:16:10 crc kubenswrapper[4910]: I0226 22:16:10.411990 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-mwkqc" podStartSLOduration=7.411966727 podStartE2EDuration="7.411966727s" podCreationTimestamp="2026-02-26 22:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:16:10.409231451 +0000 UTC m=+1255.488722002" watchObservedRunningTime="2026-02-26 22:16:10.411966727 +0000 UTC m=+1255.491457268" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.255513 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-78c5-account-create-update-cxn9r" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.269029 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qt2rz" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.287995 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-9792f" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.364219 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6328-account-create-update-r69jh" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.376399 4910 generic.go:334] "Generic (PLEG): container finished" podID="46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8" containerID="a7be97dc228f66ff13403d63e2d6838104a25b05e8ad17cff0b27145d2968f62" exitCode=0 Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.376479 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btfqs" event={"ID":"46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8","Type":"ContainerDied","Data":"a7be97dc228f66ff13403d63e2d6838104a25b05e8ad17cff0b27145d2968f62"} Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.382312 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6328-account-create-update-r69jh" event={"ID":"2a08a89c-9d68-445e-bd75-be72757906a6","Type":"ContainerDied","Data":"c8b277521500e898c58e1fa57550dd5c8e802a46a5c1549191ecd80997f12f68"} Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.382364 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8b277521500e898c58e1fa57550dd5c8e802a46a5c1549191ecd80997f12f68" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.382368 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6328-account-create-update-r69jh" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.383251 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8clr\" (UniqueName: \"kubernetes.io/projected/e0775bf3-efcb-489c-acfb-5bd1ee95391a-kube-api-access-x8clr\") pod \"e0775bf3-efcb-489c-acfb-5bd1ee95391a\" (UID: \"e0775bf3-efcb-489c-acfb-5bd1ee95391a\") " Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.383340 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8vm5\" (UniqueName: \"kubernetes.io/projected/06a47034-85d8-4a6d-983b-cf69ea88a122-kube-api-access-p8vm5\") pod \"06a47034-85d8-4a6d-983b-cf69ea88a122\" (UID: \"06a47034-85d8-4a6d-983b-cf69ea88a122\") " Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.383388 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0775bf3-efcb-489c-acfb-5bd1ee95391a-operator-scripts\") pod \"e0775bf3-efcb-489c-acfb-5bd1ee95391a\" (UID: \"e0775bf3-efcb-489c-acfb-5bd1ee95391a\") " Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.383494 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmwrf\" (UniqueName: \"kubernetes.io/projected/0ec61180-421b-4df0-8cd0-5cc207b4a179-kube-api-access-zmwrf\") pod \"0ec61180-421b-4df0-8cd0-5cc207b4a179\" (UID: \"0ec61180-421b-4df0-8cd0-5cc207b4a179\") " Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.383562 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ec61180-421b-4df0-8cd0-5cc207b4a179-operator-scripts\") pod \"0ec61180-421b-4df0-8cd0-5cc207b4a179\" (UID: \"0ec61180-421b-4df0-8cd0-5cc207b4a179\") " Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.383626 4910 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06a47034-85d8-4a6d-983b-cf69ea88a122-operator-scripts\") pod \"06a47034-85d8-4a6d-983b-cf69ea88a122\" (UID: \"06a47034-85d8-4a6d-983b-cf69ea88a122\") " Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.384236 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0775bf3-efcb-489c-acfb-5bd1ee95391a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e0775bf3-efcb-489c-acfb-5bd1ee95391a" (UID: "e0775bf3-efcb-489c-acfb-5bd1ee95391a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.386733 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ec61180-421b-4df0-8cd0-5cc207b4a179-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ec61180-421b-4df0-8cd0-5cc207b4a179" (UID: "0ec61180-421b-4df0-8cd0-5cc207b4a179"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.386819 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06a47034-85d8-4a6d-983b-cf69ea88a122-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "06a47034-85d8-4a6d-983b-cf69ea88a122" (UID: "06a47034-85d8-4a6d-983b-cf69ea88a122"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.393864 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0775bf3-efcb-489c-acfb-5bd1ee95391a-kube-api-access-x8clr" (OuterVolumeSpecName: "kube-api-access-x8clr") pod "e0775bf3-efcb-489c-acfb-5bd1ee95391a" (UID: "e0775bf3-efcb-489c-acfb-5bd1ee95391a"). 
InnerVolumeSpecName "kube-api-access-x8clr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.395071 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06a47034-85d8-4a6d-983b-cf69ea88a122-kube-api-access-p8vm5" (OuterVolumeSpecName: "kube-api-access-p8vm5") pod "06a47034-85d8-4a6d-983b-cf69ea88a122" (UID: "06a47034-85d8-4a6d-983b-cf69ea88a122"). InnerVolumeSpecName "kube-api-access-p8vm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.395438 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-78c5-account-create-update-cxn9r" event={"ID":"e0775bf3-efcb-489c-acfb-5bd1ee95391a","Type":"ContainerDied","Data":"1a8f5b4329a800548ef88968b99dfa552adc75f9fc4eb40b177d6d4b3f995973"} Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.395483 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a8f5b4329a800548ef88968b99dfa552adc75f9fc4eb40b177d6d4b3f995973" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.395548 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-78c5-account-create-update-cxn9r" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.401355 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ec61180-421b-4df0-8cd0-5cc207b4a179-kube-api-access-zmwrf" (OuterVolumeSpecName: "kube-api-access-zmwrf") pod "0ec61180-421b-4df0-8cd0-5cc207b4a179" (UID: "0ec61180-421b-4df0-8cd0-5cc207b4a179"). InnerVolumeSpecName "kube-api-access-zmwrf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.404010 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"60fb0251-1bd0-4e06-a368-5aceb0afaa87","Type":"ContainerStarted","Data":"f88cfbccb29c5da3ad43a3bed7e1ade0e19050b8e699a86498c859862a2e6e3f"} Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.404726 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.407333 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.427708 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qt2rz" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.427782 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qt2rz" event={"ID":"0ec61180-421b-4df0-8cd0-5cc207b4a179","Type":"ContainerDied","Data":"95f5b5cdb50205cb7e1d884d5a22909da5c288f18da9e0d5da59fa24f9fce77b"} Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.427828 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95f5b5cdb50205cb7e1d884d5a22909da5c288f18da9e0d5da59fa24f9fce77b" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.430483 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=23.863474517 podStartE2EDuration="47.430459719s" podCreationTimestamp="2026-02-26 22:15:24 +0000 UTC" firstStartedPulling="2026-02-26 22:15:44.243610043 +0000 UTC m=+1229.323100584" lastFinishedPulling="2026-02-26 22:16:07.810595245 +0000 UTC m=+1252.890085786" observedRunningTime="2026-02-26 22:16:11.424791751 +0000 UTC m=+1256.504282292" 
watchObservedRunningTime="2026-02-26 22:16:11.430459719 +0000 UTC m=+1256.509950270" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.436096 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9792f" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.439793 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9792f" event={"ID":"06a47034-85d8-4a6d-983b-cf69ea88a122","Type":"ContainerDied","Data":"1655e9a16f917be24d48cadd40eb72594cce2bb13c108220470c84ac78c06899"} Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.439839 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1655e9a16f917be24d48cadd40eb72594cce2bb13c108220470c84ac78c06899" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.439873 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-mwkqc" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.486372 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a08a89c-9d68-445e-bd75-be72757906a6-operator-scripts\") pod \"2a08a89c-9d68-445e-bd75-be72757906a6\" (UID: \"2a08a89c-9d68-445e-bd75-be72757906a6\") " Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.486599 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjwlq\" (UniqueName: \"kubernetes.io/projected/2a08a89c-9d68-445e-bd75-be72757906a6-kube-api-access-pjwlq\") pod \"2a08a89c-9d68-445e-bd75-be72757906a6\" (UID: \"2a08a89c-9d68-445e-bd75-be72757906a6\") " Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.487085 4910 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06a47034-85d8-4a6d-983b-cf69ea88a122-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:11 crc 
kubenswrapper[4910]: I0226 22:16:11.487102 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8clr\" (UniqueName: \"kubernetes.io/projected/e0775bf3-efcb-489c-acfb-5bd1ee95391a-kube-api-access-x8clr\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.487114 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8vm5\" (UniqueName: \"kubernetes.io/projected/06a47034-85d8-4a6d-983b-cf69ea88a122-kube-api-access-p8vm5\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.487123 4910 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0775bf3-efcb-489c-acfb-5bd1ee95391a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.487131 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmwrf\" (UniqueName: \"kubernetes.io/projected/0ec61180-421b-4df0-8cd0-5cc207b4a179-kube-api-access-zmwrf\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.487140 4910 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ec61180-421b-4df0-8cd0-5cc207b4a179-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.487669 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a08a89c-9d68-445e-bd75-be72757906a6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2a08a89c-9d68-445e-bd75-be72757906a6" (UID: "2a08a89c-9d68-445e-bd75-be72757906a6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.508349 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a08a89c-9d68-445e-bd75-be72757906a6-kube-api-access-pjwlq" (OuterVolumeSpecName: "kube-api-access-pjwlq") pod "2a08a89c-9d68-445e-bd75-be72757906a6" (UID: "2a08a89c-9d68-445e-bd75-be72757906a6"). InnerVolumeSpecName "kube-api-access-pjwlq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.588763 4910 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a08a89c-9d68-445e-bd75-be72757906a6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.588817 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjwlq\" (UniqueName: \"kubernetes.io/projected/2a08a89c-9d68-445e-bd75-be72757906a6-kube-api-access-pjwlq\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.941780 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-pfjh4"] Feb 26 22:16:11 crc kubenswrapper[4910]: E0226 22:16:11.942271 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ca2c64-13ce-46ee-be2e-a54d05e5a626" containerName="mariadb-account-create-update" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.942294 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ca2c64-13ce-46ee-be2e-a54d05e5a626" containerName="mariadb-account-create-update" Feb 26 22:16:11 crc kubenswrapper[4910]: E0226 22:16:11.942311 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3e4d9b8-abdb-4f3c-8e33-334917a86288" containerName="mariadb-database-create" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.942320 4910 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d3e4d9b8-abdb-4f3c-8e33-334917a86288" containerName="mariadb-database-create" Feb 26 22:16:11 crc kubenswrapper[4910]: E0226 22:16:11.942348 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec61180-421b-4df0-8cd0-5cc207b4a179" containerName="mariadb-database-create" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.942357 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec61180-421b-4df0-8cd0-5cc207b4a179" containerName="mariadb-database-create" Feb 26 22:16:11 crc kubenswrapper[4910]: E0226 22:16:11.942369 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0775bf3-efcb-489c-acfb-5bd1ee95391a" containerName="mariadb-account-create-update" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.942376 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0775bf3-efcb-489c-acfb-5bd1ee95391a" containerName="mariadb-account-create-update" Feb 26 22:16:11 crc kubenswrapper[4910]: E0226 22:16:11.942398 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a08a89c-9d68-445e-bd75-be72757906a6" containerName="mariadb-account-create-update" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.942406 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a08a89c-9d68-445e-bd75-be72757906a6" containerName="mariadb-account-create-update" Feb 26 22:16:11 crc kubenswrapper[4910]: E0226 22:16:11.942420 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06a47034-85d8-4a6d-983b-cf69ea88a122" containerName="mariadb-database-create" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.942428 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="06a47034-85d8-4a6d-983b-cf69ea88a122" containerName="mariadb-database-create" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.942652 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="32ca2c64-13ce-46ee-be2e-a54d05e5a626" containerName="mariadb-account-create-update" Feb 26 22:16:11 crc 
kubenswrapper[4910]: I0226 22:16:11.942669 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="06a47034-85d8-4a6d-983b-cf69ea88a122" containerName="mariadb-database-create" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.942686 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a08a89c-9d68-445e-bd75-be72757906a6" containerName="mariadb-account-create-update" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.942700 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0775bf3-efcb-489c-acfb-5bd1ee95391a" containerName="mariadb-account-create-update" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.942712 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec61180-421b-4df0-8cd0-5cc207b4a179" containerName="mariadb-database-create" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.942722 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3e4d9b8-abdb-4f3c-8e33-334917a86288" containerName="mariadb-database-create" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.943601 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-pfjh4" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.946420 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.950094 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wmnc7" Feb 26 22:16:11 crc kubenswrapper[4910]: I0226 22:16:11.957206 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-pfjh4"] Feb 26 22:16:12 crc kubenswrapper[4910]: I0226 22:16:12.001495 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10fff0b-5682-4806-82e0-0d19db3deae4-config-data\") pod \"glance-db-sync-pfjh4\" (UID: \"a10fff0b-5682-4806-82e0-0d19db3deae4\") " pod="openstack/glance-db-sync-pfjh4" Feb 26 22:16:12 crc kubenswrapper[4910]: I0226 22:16:12.001692 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10fff0b-5682-4806-82e0-0d19db3deae4-combined-ca-bundle\") pod \"glance-db-sync-pfjh4\" (UID: \"a10fff0b-5682-4806-82e0-0d19db3deae4\") " pod="openstack/glance-db-sync-pfjh4" Feb 26 22:16:12 crc kubenswrapper[4910]: I0226 22:16:12.001807 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a10fff0b-5682-4806-82e0-0d19db3deae4-db-sync-config-data\") pod \"glance-db-sync-pfjh4\" (UID: \"a10fff0b-5682-4806-82e0-0d19db3deae4\") " pod="openstack/glance-db-sync-pfjh4" Feb 26 22:16:12 crc kubenswrapper[4910]: I0226 22:16:12.001837 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7rzh\" (UniqueName: 
\"kubernetes.io/projected/a10fff0b-5682-4806-82e0-0d19db3deae4-kube-api-access-w7rzh\") pod \"glance-db-sync-pfjh4\" (UID: \"a10fff0b-5682-4806-82e0-0d19db3deae4\") " pod="openstack/glance-db-sync-pfjh4" Feb 26 22:16:12 crc kubenswrapper[4910]: I0226 22:16:12.103329 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a10fff0b-5682-4806-82e0-0d19db3deae4-db-sync-config-data\") pod \"glance-db-sync-pfjh4\" (UID: \"a10fff0b-5682-4806-82e0-0d19db3deae4\") " pod="openstack/glance-db-sync-pfjh4" Feb 26 22:16:12 crc kubenswrapper[4910]: I0226 22:16:12.103378 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7rzh\" (UniqueName: \"kubernetes.io/projected/a10fff0b-5682-4806-82e0-0d19db3deae4-kube-api-access-w7rzh\") pod \"glance-db-sync-pfjh4\" (UID: \"a10fff0b-5682-4806-82e0-0d19db3deae4\") " pod="openstack/glance-db-sync-pfjh4" Feb 26 22:16:12 crc kubenswrapper[4910]: I0226 22:16:12.103454 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10fff0b-5682-4806-82e0-0d19db3deae4-config-data\") pod \"glance-db-sync-pfjh4\" (UID: \"a10fff0b-5682-4806-82e0-0d19db3deae4\") " pod="openstack/glance-db-sync-pfjh4" Feb 26 22:16:12 crc kubenswrapper[4910]: I0226 22:16:12.103582 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10fff0b-5682-4806-82e0-0d19db3deae4-combined-ca-bundle\") pod \"glance-db-sync-pfjh4\" (UID: \"a10fff0b-5682-4806-82e0-0d19db3deae4\") " pod="openstack/glance-db-sync-pfjh4" Feb 26 22:16:12 crc kubenswrapper[4910]: I0226 22:16:12.107474 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a10fff0b-5682-4806-82e0-0d19db3deae4-db-sync-config-data\") pod \"glance-db-sync-pfjh4\" 
(UID: \"a10fff0b-5682-4806-82e0-0d19db3deae4\") " pod="openstack/glance-db-sync-pfjh4" Feb 26 22:16:12 crc kubenswrapper[4910]: I0226 22:16:12.107850 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10fff0b-5682-4806-82e0-0d19db3deae4-combined-ca-bundle\") pod \"glance-db-sync-pfjh4\" (UID: \"a10fff0b-5682-4806-82e0-0d19db3deae4\") " pod="openstack/glance-db-sync-pfjh4" Feb 26 22:16:12 crc kubenswrapper[4910]: I0226 22:16:12.119317 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10fff0b-5682-4806-82e0-0d19db3deae4-config-data\") pod \"glance-db-sync-pfjh4\" (UID: \"a10fff0b-5682-4806-82e0-0d19db3deae4\") " pod="openstack/glance-db-sync-pfjh4" Feb 26 22:16:12 crc kubenswrapper[4910]: I0226 22:16:12.119811 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7rzh\" (UniqueName: \"kubernetes.io/projected/a10fff0b-5682-4806-82e0-0d19db3deae4-kube-api-access-w7rzh\") pod \"glance-db-sync-pfjh4\" (UID: \"a10fff0b-5682-4806-82e0-0d19db3deae4\") " pod="openstack/glance-db-sync-pfjh4" Feb 26 22:16:12 crc kubenswrapper[4910]: I0226 22:16:12.260861 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-pfjh4" Feb 26 22:16:12 crc kubenswrapper[4910]: I0226 22:16:12.550949 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zps74" Feb 26 22:16:12 crc kubenswrapper[4910]: I0226 22:16:12.637385 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-5lzsg" Feb 26 22:16:12 crc kubenswrapper[4910]: I0226 22:16:12.649347 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/30b027eb-e942-4121-aebc-776d616b902e-etc-swift\") pod \"swift-storage-0\" (UID: \"30b027eb-e942-4121-aebc-776d616b902e\") " pod="openstack/swift-storage-0" Feb 26 22:16:12 crc kubenswrapper[4910]: E0226 22:16:12.649848 4910 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 26 22:16:12 crc kubenswrapper[4910]: E0226 22:16:12.649871 4910 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 26 22:16:12 crc kubenswrapper[4910]: E0226 22:16:12.649926 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/30b027eb-e942-4121-aebc-776d616b902e-etc-swift podName:30b027eb-e942-4121-aebc-776d616b902e nodeName:}" failed. No retries permitted until 2026-02-26 22:16:20.649913248 +0000 UTC m=+1265.729403789 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/30b027eb-e942-4121-aebc-776d616b902e-etc-swift") pod "swift-storage-0" (UID: "30b027eb-e942-4121-aebc-776d616b902e") : configmap "swift-ring-files" not found Feb 26 22:16:12 crc kubenswrapper[4910]: I0226 22:16:12.755285 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc" Feb 26 22:16:13 crc kubenswrapper[4910]: I0226 22:16:13.660387 4910 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="3e40b05d-8071-4f6b-b2ab-160931200e8a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 26 22:16:13 crc kubenswrapper[4910]: I0226 22:16:13.777671 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 22:16:14 crc kubenswrapper[4910]: I0226 22:16:14.004497 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 22:16:14 crc kubenswrapper[4910]: I0226 22:16:14.080325 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-mwkqc" Feb 26 22:16:14 crc kubenswrapper[4910]: I0226 22:16:14.149505 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-pqvdk"] Feb 26 22:16:14 crc kubenswrapper[4910]: I0226 22:16:14.149776 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-pqvdk" podUID="564095ae-5c59-4307-8c7e-87c20aed1b59" containerName="dnsmasq-dns" containerID="cri-o://ff1530ecd2f227532ae5f2b6ed55cfcbdc67cc82d5c4263e3d1fe24136548a10" gracePeriod=10 Feb 26 22:16:14 crc kubenswrapper[4910]: I0226 22:16:14.458781 4910 generic.go:334] "Generic (PLEG): container finished" 
podID="564095ae-5c59-4307-8c7e-87c20aed1b59" containerID="ff1530ecd2f227532ae5f2b6ed55cfcbdc67cc82d5c4263e3d1fe24136548a10" exitCode=0 Feb 26 22:16:14 crc kubenswrapper[4910]: I0226 22:16:14.458822 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-pqvdk" event={"ID":"564095ae-5c59-4307-8c7e-87c20aed1b59","Type":"ContainerDied","Data":"ff1530ecd2f227532ae5f2b6ed55cfcbdc67cc82d5c4263e3d1fe24136548a10"} Feb 26 22:16:16 crc kubenswrapper[4910]: I0226 22:16:16.936792 4910 scope.go:117] "RemoveContainer" containerID="fb116b15d2b2cca3b1a55cde8eb00390b3eb4a2e1784f04d2cafb2a52daab63a" Feb 26 22:16:17 crc kubenswrapper[4910]: I0226 22:16:17.314283 4910 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-pqvdk" podUID="564095ae-5c59-4307-8c7e-87c20aed1b59" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.132:5353: connect: connection refused" Feb 26 22:16:18 crc kubenswrapper[4910]: I0226 22:16:18.502863 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btfqs" event={"ID":"46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8","Type":"ContainerDied","Data":"cfc316fcaf0064863cd0e7e8691971f8959f08277cbedb6cd23c7f6d3c5a827d"} Feb 26 22:16:18 crc kubenswrapper[4910]: I0226 22:16:18.502911 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfc316fcaf0064863cd0e7e8691971f8959f08277cbedb6cd23c7f6d3c5a827d" Feb 26 22:16:18 crc kubenswrapper[4910]: I0226 22:16:18.509971 4910 generic.go:334] "Generic (PLEG): container finished" podID="48cec592-3a36-46fc-813d-bf8fa5212e89" containerID="d0d0a196bfe2898994596352c3c4f18f6c775b6517621b2152a31f6037ee7d70" exitCode=0 Feb 26 22:16:18 crc kubenswrapper[4910]: I0226 22:16:18.510018 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"48cec592-3a36-46fc-813d-bf8fa5212e89","Type":"ContainerDied","Data":"d0d0a196bfe2898994596352c3c4f18f6c775b6517621b2152a31f6037ee7d70"} Feb 26 22:16:18 crc kubenswrapper[4910]: I0226 22:16:18.598015 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-btfqs" Feb 26 22:16:18 crc kubenswrapper[4910]: I0226 22:16:18.682986 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r77c\" (UniqueName: \"kubernetes.io/projected/46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8-kube-api-access-9r77c\") pod \"46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8\" (UID: \"46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8\") " Feb 26 22:16:18 crc kubenswrapper[4910]: I0226 22:16:18.683349 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8-operator-scripts\") pod \"46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8\" (UID: \"46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8\") " Feb 26 22:16:18 crc kubenswrapper[4910]: I0226 22:16:18.683988 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8" (UID: "46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:16:18 crc kubenswrapper[4910]: I0226 22:16:18.692679 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8-kube-api-access-9r77c" (OuterVolumeSpecName: "kube-api-access-9r77c") pod "46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8" (UID: "46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8"). InnerVolumeSpecName "kube-api-access-9r77c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:16:18 crc kubenswrapper[4910]: I0226 22:16:18.725384 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-pqvdk" Feb 26 22:16:18 crc kubenswrapper[4910]: I0226 22:16:18.784778 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj49p\" (UniqueName: \"kubernetes.io/projected/564095ae-5c59-4307-8c7e-87c20aed1b59-kube-api-access-lj49p\") pod \"564095ae-5c59-4307-8c7e-87c20aed1b59\" (UID: \"564095ae-5c59-4307-8c7e-87c20aed1b59\") " Feb 26 22:16:18 crc kubenswrapper[4910]: I0226 22:16:18.784865 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/564095ae-5c59-4307-8c7e-87c20aed1b59-ovsdbserver-nb\") pod \"564095ae-5c59-4307-8c7e-87c20aed1b59\" (UID: \"564095ae-5c59-4307-8c7e-87c20aed1b59\") " Feb 26 22:16:18 crc kubenswrapper[4910]: I0226 22:16:18.784938 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/564095ae-5c59-4307-8c7e-87c20aed1b59-config\") pod \"564095ae-5c59-4307-8c7e-87c20aed1b59\" (UID: \"564095ae-5c59-4307-8c7e-87c20aed1b59\") " Feb 26 22:16:18 crc kubenswrapper[4910]: I0226 22:16:18.784979 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/564095ae-5c59-4307-8c7e-87c20aed1b59-ovsdbserver-sb\") pod \"564095ae-5c59-4307-8c7e-87c20aed1b59\" (UID: \"564095ae-5c59-4307-8c7e-87c20aed1b59\") " Feb 26 22:16:18 crc kubenswrapper[4910]: I0226 22:16:18.785065 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/564095ae-5c59-4307-8c7e-87c20aed1b59-dns-svc\") pod \"564095ae-5c59-4307-8c7e-87c20aed1b59\" (UID: \"564095ae-5c59-4307-8c7e-87c20aed1b59\") " 
Feb 26 22:16:18 crc kubenswrapper[4910]: I0226 22:16:18.785748 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r77c\" (UniqueName: \"kubernetes.io/projected/46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8-kube-api-access-9r77c\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:18 crc kubenswrapper[4910]: I0226 22:16:18.785768 4910 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:18 crc kubenswrapper[4910]: I0226 22:16:18.790409 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/564095ae-5c59-4307-8c7e-87c20aed1b59-kube-api-access-lj49p" (OuterVolumeSpecName: "kube-api-access-lj49p") pod "564095ae-5c59-4307-8c7e-87c20aed1b59" (UID: "564095ae-5c59-4307-8c7e-87c20aed1b59"). InnerVolumeSpecName "kube-api-access-lj49p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:16:18 crc kubenswrapper[4910]: I0226 22:16:18.846729 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/564095ae-5c59-4307-8c7e-87c20aed1b59-config" (OuterVolumeSpecName: "config") pod "564095ae-5c59-4307-8c7e-87c20aed1b59" (UID: "564095ae-5c59-4307-8c7e-87c20aed1b59"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:16:18 crc kubenswrapper[4910]: I0226 22:16:18.851765 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/564095ae-5c59-4307-8c7e-87c20aed1b59-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "564095ae-5c59-4307-8c7e-87c20aed1b59" (UID: "564095ae-5c59-4307-8c7e-87c20aed1b59"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:16:18 crc kubenswrapper[4910]: I0226 22:16:18.864210 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/564095ae-5c59-4307-8c7e-87c20aed1b59-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "564095ae-5c59-4307-8c7e-87c20aed1b59" (UID: "564095ae-5c59-4307-8c7e-87c20aed1b59"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:16:18 crc kubenswrapper[4910]: I0226 22:16:18.872573 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/564095ae-5c59-4307-8c7e-87c20aed1b59-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "564095ae-5c59-4307-8c7e-87c20aed1b59" (UID: "564095ae-5c59-4307-8c7e-87c20aed1b59"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:16:18 crc kubenswrapper[4910]: I0226 22:16:18.887121 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj49p\" (UniqueName: \"kubernetes.io/projected/564095ae-5c59-4307-8c7e-87c20aed1b59-kube-api-access-lj49p\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:18 crc kubenswrapper[4910]: I0226 22:16:18.887153 4910 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/564095ae-5c59-4307-8c7e-87c20aed1b59-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:18 crc kubenswrapper[4910]: I0226 22:16:18.887225 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/564095ae-5c59-4307-8c7e-87c20aed1b59-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:18 crc kubenswrapper[4910]: I0226 22:16:18.887233 4910 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/564095ae-5c59-4307-8c7e-87c20aed1b59-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 
22:16:18 crc kubenswrapper[4910]: I0226 22:16:18.887241 4910 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/564095ae-5c59-4307-8c7e-87c20aed1b59-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:19 crc kubenswrapper[4910]: I0226 22:16:19.064032 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-pfjh4"] Feb 26 22:16:19 crc kubenswrapper[4910]: I0226 22:16:19.072872 4910 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 22:16:19 crc kubenswrapper[4910]: I0226 22:16:19.522899 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cgwcw" event={"ID":"1d4bb0af-a11e-4f9f-a420-fc07f0220b10","Type":"ContainerStarted","Data":"ec9b869beab8ef974b02ee9458823b208a2e3dcdc4347c6589d71014f5755db6"} Feb 26 22:16:19 crc kubenswrapper[4910]: I0226 22:16:19.524509 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-pfjh4" event={"ID":"a10fff0b-5682-4806-82e0-0d19db3deae4","Type":"ContainerStarted","Data":"330a915f26e05b0eb5934736862aabc81209bd4a1b901b93cedf410f8052ce04"} Feb 26 22:16:19 crc kubenswrapper[4910]: I0226 22:16:19.526878 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2f98425b-65de-48d2-be21-2c443218eacd","Type":"ContainerStarted","Data":"dc36ca0064009a273deeee99f6ddba18fa15369d2698389d7c803044ccba12cf"} Feb 26 22:16:19 crc kubenswrapper[4910]: I0226 22:16:19.529665 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-pqvdk" event={"ID":"564095ae-5c59-4307-8c7e-87c20aed1b59","Type":"ContainerDied","Data":"f59bc4a3572254378d1dfdf42e09999b82c5120c286e1e7604ba290e5b142c09"} Feb 26 22:16:19 crc kubenswrapper[4910]: I0226 22:16:19.529721 4910 scope.go:117] "RemoveContainer" containerID="ff1530ecd2f227532ae5f2b6ed55cfcbdc67cc82d5c4263e3d1fe24136548a10" Feb 26 
22:16:19 crc kubenswrapper[4910]: I0226 22:16:19.529830 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-pqvdk" Feb 26 22:16:19 crc kubenswrapper[4910]: I0226 22:16:19.537064 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"48cec592-3a36-46fc-813d-bf8fa5212e89","Type":"ContainerStarted","Data":"ff3618ddefebac24c69fef739a56674f82de2bf2a737cac1028f6c34fd9e0ce7"} Feb 26 22:16:19 crc kubenswrapper[4910]: I0226 22:16:19.537091 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-btfqs" Feb 26 22:16:19 crc kubenswrapper[4910]: I0226 22:16:19.537379 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 26 22:16:19 crc kubenswrapper[4910]: I0226 22:16:19.545848 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-cgwcw" podStartSLOduration=3.830909977 podStartE2EDuration="14.545828562s" podCreationTimestamp="2026-02-26 22:16:05 +0000 UTC" firstStartedPulling="2026-02-26 22:16:07.764897712 +0000 UTC m=+1252.844388253" lastFinishedPulling="2026-02-26 22:16:18.479816287 +0000 UTC m=+1263.559306838" observedRunningTime="2026-02-26 22:16:19.541818711 +0000 UTC m=+1264.621309252" watchObservedRunningTime="2026-02-26 22:16:19.545828562 +0000 UTC m=+1264.625319093" Feb 26 22:16:19 crc kubenswrapper[4910]: I0226 22:16:19.559606 4910 scope.go:117] "RemoveContainer" containerID="55834ab8d02684cb55366055220a07141f292edbd43dd6801e345ac6fde1b022" Feb 26 22:16:19 crc kubenswrapper[4910]: I0226 22:16:19.584453 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=42.776011909 podStartE2EDuration="1m2.584426837s" podCreationTimestamp="2026-02-26 22:15:17 +0000 UTC" firstStartedPulling="2026-02-26 22:15:19.056833073 
+0000 UTC m=+1204.136323614" lastFinishedPulling="2026-02-26 22:15:38.865247991 +0000 UTC m=+1223.944738542" observedRunningTime="2026-02-26 22:16:19.575490609 +0000 UTC m=+1264.654981150" watchObservedRunningTime="2026-02-26 22:16:19.584426837 +0000 UTC m=+1264.663917378" Feb 26 22:16:19 crc kubenswrapper[4910]: I0226 22:16:19.630232 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-pqvdk"] Feb 26 22:16:19 crc kubenswrapper[4910]: I0226 22:16:19.642756 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-pqvdk"] Feb 26 22:16:19 crc kubenswrapper[4910]: I0226 22:16:19.856350 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-btfqs"] Feb 26 22:16:19 crc kubenswrapper[4910]: I0226 22:16:19.862861 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-btfqs"] Feb 26 22:16:19 crc kubenswrapper[4910]: I0226 22:16:19.928019 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8" path="/var/lib/kubelet/pods/46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8/volumes" Feb 26 22:16:19 crc kubenswrapper[4910]: I0226 22:16:19.928540 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="564095ae-5c59-4307-8c7e-87c20aed1b59" path="/var/lib/kubelet/pods/564095ae-5c59-4307-8c7e-87c20aed1b59/volumes" Feb 26 22:16:20 crc kubenswrapper[4910]: I0226 22:16:20.357146 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 26 22:16:20 crc kubenswrapper[4910]: I0226 22:16:20.725242 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/30b027eb-e942-4121-aebc-776d616b902e-etc-swift\") pod \"swift-storage-0\" (UID: \"30b027eb-e942-4121-aebc-776d616b902e\") " pod="openstack/swift-storage-0" Feb 26 22:16:20 crc kubenswrapper[4910]: 
E0226 22:16:20.725482 4910 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 26 22:16:20 crc kubenswrapper[4910]: E0226 22:16:20.725496 4910 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 26 22:16:20 crc kubenswrapper[4910]: E0226 22:16:20.725556 4910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/30b027eb-e942-4121-aebc-776d616b902e-etc-swift podName:30b027eb-e942-4121-aebc-776d616b902e nodeName:}" failed. No retries permitted until 2026-02-26 22:16:36.725541345 +0000 UTC m=+1281.805031876 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/30b027eb-e942-4121-aebc-776d616b902e-etc-swift") pod "swift-storage-0" (UID: "30b027eb-e942-4121-aebc-776d616b902e") : configmap "swift-ring-files" not found Feb 26 22:16:21 crc kubenswrapper[4910]: I0226 22:16:21.561922 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2f98425b-65de-48d2-be21-2c443218eacd","Type":"ContainerStarted","Data":"ba9e28ab288bac4d0a62f2effca6d14bea90aeac5171ed430bdfcac17a5407be"} Feb 26 22:16:23 crc kubenswrapper[4910]: I0226 22:16:23.503336 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-sdx7z"] Feb 26 22:16:23 crc kubenswrapper[4910]: E0226 22:16:23.505719 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="564095ae-5c59-4307-8c7e-87c20aed1b59" containerName="dnsmasq-dns" Feb 26 22:16:23 crc kubenswrapper[4910]: I0226 22:16:23.505742 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="564095ae-5c59-4307-8c7e-87c20aed1b59" containerName="dnsmasq-dns" Feb 26 22:16:23 crc kubenswrapper[4910]: E0226 22:16:23.505771 4910 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="564095ae-5c59-4307-8c7e-87c20aed1b59" containerName="init" Feb 26 22:16:23 crc kubenswrapper[4910]: I0226 22:16:23.505778 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="564095ae-5c59-4307-8c7e-87c20aed1b59" containerName="init" Feb 26 22:16:23 crc kubenswrapper[4910]: E0226 22:16:23.505796 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8" containerName="mariadb-account-create-update" Feb 26 22:16:23 crc kubenswrapper[4910]: I0226 22:16:23.505803 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8" containerName="mariadb-account-create-update" Feb 26 22:16:23 crc kubenswrapper[4910]: I0226 22:16:23.505984 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="46ba9a1b-f8df-4d13-83ae-9fa218e7c2a8" containerName="mariadb-account-create-update" Feb 26 22:16:23 crc kubenswrapper[4910]: I0226 22:16:23.506000 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="564095ae-5c59-4307-8c7e-87c20aed1b59" containerName="dnsmasq-dns" Feb 26 22:16:23 crc kubenswrapper[4910]: I0226 22:16:23.506713 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sdx7z" Feb 26 22:16:23 crc kubenswrapper[4910]: I0226 22:16:23.508925 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 26 22:16:23 crc kubenswrapper[4910]: I0226 22:16:23.516026 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sdx7z"] Feb 26 22:16:23 crc kubenswrapper[4910]: I0226 22:16:23.588301 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7kzt\" (UniqueName: \"kubernetes.io/projected/6113e4f6-f02a-4810-a3f2-74245b547b37-kube-api-access-r7kzt\") pod \"root-account-create-update-sdx7z\" (UID: \"6113e4f6-f02a-4810-a3f2-74245b547b37\") " pod="openstack/root-account-create-update-sdx7z" Feb 26 22:16:23 crc kubenswrapper[4910]: I0226 22:16:23.588371 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6113e4f6-f02a-4810-a3f2-74245b547b37-operator-scripts\") pod \"root-account-create-update-sdx7z\" (UID: \"6113e4f6-f02a-4810-a3f2-74245b547b37\") " pod="openstack/root-account-create-update-sdx7z" Feb 26 22:16:23 crc kubenswrapper[4910]: I0226 22:16:23.662241 4910 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="3e40b05d-8071-4f6b-b2ab-160931200e8a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 26 22:16:23 crc kubenswrapper[4910]: I0226 22:16:23.691336 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7kzt\" (UniqueName: \"kubernetes.io/projected/6113e4f6-f02a-4810-a3f2-74245b547b37-kube-api-access-r7kzt\") pod \"root-account-create-update-sdx7z\" (UID: \"6113e4f6-f02a-4810-a3f2-74245b547b37\") " pod="openstack/root-account-create-update-sdx7z" Feb 
26 22:16:23 crc kubenswrapper[4910]: I0226 22:16:23.691431 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6113e4f6-f02a-4810-a3f2-74245b547b37-operator-scripts\") pod \"root-account-create-update-sdx7z\" (UID: \"6113e4f6-f02a-4810-a3f2-74245b547b37\") " pod="openstack/root-account-create-update-sdx7z" Feb 26 22:16:23 crc kubenswrapper[4910]: I0226 22:16:23.742471 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6113e4f6-f02a-4810-a3f2-74245b547b37-operator-scripts\") pod \"root-account-create-update-sdx7z\" (UID: \"6113e4f6-f02a-4810-a3f2-74245b547b37\") " pod="openstack/root-account-create-update-sdx7z" Feb 26 22:16:23 crc kubenswrapper[4910]: I0226 22:16:23.750775 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7kzt\" (UniqueName: \"kubernetes.io/projected/6113e4f6-f02a-4810-a3f2-74245b547b37-kube-api-access-r7kzt\") pod \"root-account-create-update-sdx7z\" (UID: \"6113e4f6-f02a-4810-a3f2-74245b547b37\") " pod="openstack/root-account-create-update-sdx7z" Feb 26 22:16:23 crc kubenswrapper[4910]: I0226 22:16:23.836325 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sdx7z" Feb 26 22:16:24 crc kubenswrapper[4910]: I0226 22:16:24.593676 4910 generic.go:334] "Generic (PLEG): container finished" podID="f98f3d3a-39ee-4b35-8653-ae334df58fca" containerID="b96711ba619b912d8f11b4d929957237c2a28332da6273f5f952a97af08e2e3f" exitCode=0 Feb 26 22:16:24 crc kubenswrapper[4910]: I0226 22:16:24.593724 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f98f3d3a-39ee-4b35-8653-ae334df58fca","Type":"ContainerDied","Data":"b96711ba619b912d8f11b4d929957237c2a28332da6273f5f952a97af08e2e3f"} Feb 26 22:16:25 crc kubenswrapper[4910]: I0226 22:16:25.611600 4910 generic.go:334] "Generic (PLEG): container finished" podID="1d4bb0af-a11e-4f9f-a420-fc07f0220b10" containerID="ec9b869beab8ef974b02ee9458823b208a2e3dcdc4347c6589d71014f5755db6" exitCode=0 Feb 26 22:16:25 crc kubenswrapper[4910]: I0226 22:16:25.611904 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cgwcw" event={"ID":"1d4bb0af-a11e-4f9f-a420-fc07f0220b10","Type":"ContainerDied","Data":"ec9b869beab8ef974b02ee9458823b208a2e3dcdc4347c6589d71014f5755db6"} Feb 26 22:16:28 crc kubenswrapper[4910]: I0226 22:16:28.483456 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 26 22:16:28 crc kubenswrapper[4910]: I0226 22:16:28.848970 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-create-d7gr8"] Feb 26 22:16:28 crc kubenswrapper[4910]: I0226 22:16:28.850549 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-create-d7gr8" Feb 26 22:16:28 crc kubenswrapper[4910]: I0226 22:16:28.871401 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-d7gr8"] Feb 26 22:16:28 crc kubenswrapper[4910]: I0226 22:16:28.921258 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad794ede-dbfc-4a7f-80b9-9742f1eaed3a-operator-scripts\") pod \"cloudkitty-db-create-d7gr8\" (UID: \"ad794ede-dbfc-4a7f-80b9-9742f1eaed3a\") " pod="openstack/cloudkitty-db-create-d7gr8" Feb 26 22:16:28 crc kubenswrapper[4910]: I0226 22:16:28.921572 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjvvf\" (UniqueName: \"kubernetes.io/projected/ad794ede-dbfc-4a7f-80b9-9742f1eaed3a-kube-api-access-bjvvf\") pod \"cloudkitty-db-create-d7gr8\" (UID: \"ad794ede-dbfc-4a7f-80b9-9742f1eaed3a\") " pod="openstack/cloudkitty-db-create-d7gr8" Feb 26 22:16:28 crc kubenswrapper[4910]: I0226 22:16:28.948368 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-dc6mx"] Feb 26 22:16:28 crc kubenswrapper[4910]: I0226 22:16:28.950001 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-dc6mx" Feb 26 22:16:28 crc kubenswrapper[4910]: I0226 22:16:28.964554 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-dc6mx"] Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.024719 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa07e8fd-4975-4e27-9b98-ec23e75b271d-operator-scripts\") pod \"cinder-db-create-dc6mx\" (UID: \"aa07e8fd-4975-4e27-9b98-ec23e75b271d\") " pod="openstack/cinder-db-create-dc6mx" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.024810 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad794ede-dbfc-4a7f-80b9-9742f1eaed3a-operator-scripts\") pod \"cloudkitty-db-create-d7gr8\" (UID: \"ad794ede-dbfc-4a7f-80b9-9742f1eaed3a\") " pod="openstack/cloudkitty-db-create-d7gr8" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.024976 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkqxw\" (UniqueName: \"kubernetes.io/projected/aa07e8fd-4975-4e27-9b98-ec23e75b271d-kube-api-access-vkqxw\") pod \"cinder-db-create-dc6mx\" (UID: \"aa07e8fd-4975-4e27-9b98-ec23e75b271d\") " pod="openstack/cinder-db-create-dc6mx" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.025085 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjvvf\" (UniqueName: \"kubernetes.io/projected/ad794ede-dbfc-4a7f-80b9-9742f1eaed3a-kube-api-access-bjvvf\") pod \"cloudkitty-db-create-d7gr8\" (UID: \"ad794ede-dbfc-4a7f-80b9-9742f1eaed3a\") " pod="openstack/cloudkitty-db-create-d7gr8" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.025593 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ad794ede-dbfc-4a7f-80b9-9742f1eaed3a-operator-scripts\") pod \"cloudkitty-db-create-d7gr8\" (UID: \"ad794ede-dbfc-4a7f-80b9-9742f1eaed3a\") " pod="openstack/cloudkitty-db-create-d7gr8" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.053757 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjvvf\" (UniqueName: \"kubernetes.io/projected/ad794ede-dbfc-4a7f-80b9-9742f1eaed3a-kube-api-access-bjvvf\") pod \"cloudkitty-db-create-d7gr8\" (UID: \"ad794ede-dbfc-4a7f-80b9-9742f1eaed3a\") " pod="openstack/cloudkitty-db-create-d7gr8" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.067920 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-10a3-account-create-update-slfn2"] Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.073430 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-10a3-account-create-update-slfn2" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.082706 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-10a3-account-create-update-slfn2"] Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.086155 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.127280 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa07e8fd-4975-4e27-9b98-ec23e75b271d-operator-scripts\") pod \"cinder-db-create-dc6mx\" (UID: \"aa07e8fd-4975-4e27-9b98-ec23e75b271d\") " pod="openstack/cinder-db-create-dc6mx" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.127355 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqwrg\" (UniqueName: \"kubernetes.io/projected/52ce7064-53c4-4861-a60a-996a62f24e55-kube-api-access-xqwrg\") pod 
\"cinder-10a3-account-create-update-slfn2\" (UID: \"52ce7064-53c4-4861-a60a-996a62f24e55\") " pod="openstack/cinder-10a3-account-create-update-slfn2" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.127404 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqxw\" (UniqueName: \"kubernetes.io/projected/aa07e8fd-4975-4e27-9b98-ec23e75b271d-kube-api-access-vkqxw\") pod \"cinder-db-create-dc6mx\" (UID: \"aa07e8fd-4975-4e27-9b98-ec23e75b271d\") " pod="openstack/cinder-db-create-dc6mx" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.127502 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52ce7064-53c4-4861-a60a-996a62f24e55-operator-scripts\") pod \"cinder-10a3-account-create-update-slfn2\" (UID: \"52ce7064-53c4-4861-a60a-996a62f24e55\") " pod="openstack/cinder-10a3-account-create-update-slfn2" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.128280 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa07e8fd-4975-4e27-9b98-ec23e75b271d-operator-scripts\") pod \"cinder-db-create-dc6mx\" (UID: \"aa07e8fd-4975-4e27-9b98-ec23e75b271d\") " pod="openstack/cinder-db-create-dc6mx" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.148329 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkqxw\" (UniqueName: \"kubernetes.io/projected/aa07e8fd-4975-4e27-9b98-ec23e75b271d-kube-api-access-vkqxw\") pod \"cinder-db-create-dc6mx\" (UID: \"aa07e8fd-4975-4e27-9b98-ec23e75b271d\") " pod="openstack/cinder-db-create-dc6mx" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.229627 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqwrg\" (UniqueName: 
\"kubernetes.io/projected/52ce7064-53c4-4861-a60a-996a62f24e55-kube-api-access-xqwrg\") pod \"cinder-10a3-account-create-update-slfn2\" (UID: \"52ce7064-53c4-4861-a60a-996a62f24e55\") " pod="openstack/cinder-10a3-account-create-update-slfn2" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.229775 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52ce7064-53c4-4861-a60a-996a62f24e55-operator-scripts\") pod \"cinder-10a3-account-create-update-slfn2\" (UID: \"52ce7064-53c4-4861-a60a-996a62f24e55\") " pod="openstack/cinder-10a3-account-create-update-slfn2" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.230565 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52ce7064-53c4-4861-a60a-996a62f24e55-operator-scripts\") pod \"cinder-10a3-account-create-update-slfn2\" (UID: \"52ce7064-53c4-4861-a60a-996a62f24e55\") " pod="openstack/cinder-10a3-account-create-update-slfn2" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.232939 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-szq7t"] Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.233124 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-d7gr8" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.242170 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-szq7t" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.245844 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.246016 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.246234 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.249050 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-r29m2" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.269667 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dc6mx" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.283519 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-szq7t"] Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.288046 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqwrg\" (UniqueName: \"kubernetes.io/projected/52ce7064-53c4-4861-a60a-996a62f24e55-kube-api-access-xqwrg\") pod \"cinder-10a3-account-create-update-slfn2\" (UID: \"52ce7064-53c4-4861-a60a-996a62f24e55\") " pod="openstack/cinder-10a3-account-create-update-slfn2" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.302898 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-5c27-account-create-update-xtbjg"] Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.307772 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-5c27-account-create-update-xtbjg" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.319229 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-db-secret" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.331538 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf9jm\" (UniqueName: \"kubernetes.io/projected/0230e94e-a757-4c5f-afed-0f4d1e769f7a-kube-api-access-bf9jm\") pod \"keystone-db-sync-szq7t\" (UID: \"0230e94e-a757-4c5f-afed-0f4d1e769f7a\") " pod="openstack/keystone-db-sync-szq7t" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.331622 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0230e94e-a757-4c5f-afed-0f4d1e769f7a-config-data\") pod \"keystone-db-sync-szq7t\" (UID: \"0230e94e-a757-4c5f-afed-0f4d1e769f7a\") " pod="openstack/keystone-db-sync-szq7t" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.331674 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0230e94e-a757-4c5f-afed-0f4d1e769f7a-combined-ca-bundle\") pod \"keystone-db-sync-szq7t\" (UID: \"0230e94e-a757-4c5f-afed-0f4d1e769f7a\") " pod="openstack/keystone-db-sync-szq7t" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.335051 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-5c27-account-create-update-xtbjg"] Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.343069 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-k7hrg"] Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.344519 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-k7hrg" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.358216 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-0f00-account-create-update-6lkbt"] Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.365465 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0f00-account-create-update-6lkbt" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.369065 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.380343 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-k7hrg"] Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.399280 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0f00-account-create-update-6lkbt"] Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.425938 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-10a3-account-create-update-slfn2" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.435099 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1301e33-c3d6-405b-8762-41744119af4d-operator-scripts\") pod \"cloudkitty-5c27-account-create-update-xtbjg\" (UID: \"c1301e33-c3d6-405b-8762-41744119af4d\") " pod="openstack/cloudkitty-5c27-account-create-update-xtbjg" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.435195 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf9jm\" (UniqueName: \"kubernetes.io/projected/0230e94e-a757-4c5f-afed-0f4d1e769f7a-kube-api-access-bf9jm\") pod \"keystone-db-sync-szq7t\" (UID: \"0230e94e-a757-4c5f-afed-0f4d1e769f7a\") " pod="openstack/keystone-db-sync-szq7t" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.435236 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wbvl\" (UniqueName: \"kubernetes.io/projected/cda59548-ac28-4988-88a5-f8770ab9c914-kube-api-access-5wbvl\") pod \"barbican-0f00-account-create-update-6lkbt\" (UID: \"cda59548-ac28-4988-88a5-f8770ab9c914\") " pod="openstack/barbican-0f00-account-create-update-6lkbt" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.435654 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0230e94e-a757-4c5f-afed-0f4d1e769f7a-config-data\") pod \"keystone-db-sync-szq7t\" (UID: \"0230e94e-a757-4c5f-afed-0f4d1e769f7a\") " pod="openstack/keystone-db-sync-szq7t" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.435720 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/87aa985f-3f4a-459d-b6be-6291e21c20c8-operator-scripts\") pod \"barbican-db-create-k7hrg\" (UID: \"87aa985f-3f4a-459d-b6be-6291e21c20c8\") " pod="openstack/barbican-db-create-k7hrg" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.435784 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cda59548-ac28-4988-88a5-f8770ab9c914-operator-scripts\") pod \"barbican-0f00-account-create-update-6lkbt\" (UID: \"cda59548-ac28-4988-88a5-f8770ab9c914\") " pod="openstack/barbican-0f00-account-create-update-6lkbt" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.436041 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mvd7\" (UniqueName: \"kubernetes.io/projected/c1301e33-c3d6-405b-8762-41744119af4d-kube-api-access-2mvd7\") pod \"cloudkitty-5c27-account-create-update-xtbjg\" (UID: \"c1301e33-c3d6-405b-8762-41744119af4d\") " pod="openstack/cloudkitty-5c27-account-create-update-xtbjg" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.436134 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0230e94e-a757-4c5f-afed-0f4d1e769f7a-combined-ca-bundle\") pod \"keystone-db-sync-szq7t\" (UID: \"0230e94e-a757-4c5f-afed-0f4d1e769f7a\") " pod="openstack/keystone-db-sync-szq7t" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.436649 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwqlp\" (UniqueName: \"kubernetes.io/projected/87aa985f-3f4a-459d-b6be-6291e21c20c8-kube-api-access-pwqlp\") pod \"barbican-db-create-k7hrg\" (UID: \"87aa985f-3f4a-459d-b6be-6291e21c20c8\") " pod="openstack/barbican-db-create-k7hrg" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.440893 4910 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0230e94e-a757-4c5f-afed-0f4d1e769f7a-config-data\") pod \"keystone-db-sync-szq7t\" (UID: \"0230e94e-a757-4c5f-afed-0f4d1e769f7a\") " pod="openstack/keystone-db-sync-szq7t" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.448134 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0230e94e-a757-4c5f-afed-0f4d1e769f7a-combined-ca-bundle\") pod \"keystone-db-sync-szq7t\" (UID: \"0230e94e-a757-4c5f-afed-0f4d1e769f7a\") " pod="openstack/keystone-db-sync-szq7t" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.471185 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf9jm\" (UniqueName: \"kubernetes.io/projected/0230e94e-a757-4c5f-afed-0f4d1e769f7a-kube-api-access-bf9jm\") pod \"keystone-db-sync-szq7t\" (UID: \"0230e94e-a757-4c5f-afed-0f4d1e769f7a\") " pod="openstack/keystone-db-sync-szq7t" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.539909 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1301e33-c3d6-405b-8762-41744119af4d-operator-scripts\") pod \"cloudkitty-5c27-account-create-update-xtbjg\" (UID: \"c1301e33-c3d6-405b-8762-41744119af4d\") " pod="openstack/cloudkitty-5c27-account-create-update-xtbjg" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.540893 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wbvl\" (UniqueName: \"kubernetes.io/projected/cda59548-ac28-4988-88a5-f8770ab9c914-kube-api-access-5wbvl\") pod \"barbican-0f00-account-create-update-6lkbt\" (UID: \"cda59548-ac28-4988-88a5-f8770ab9c914\") " pod="openstack/barbican-0f00-account-create-update-6lkbt" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.540994 4910 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87aa985f-3f4a-459d-b6be-6291e21c20c8-operator-scripts\") pod \"barbican-db-create-k7hrg\" (UID: \"87aa985f-3f4a-459d-b6be-6291e21c20c8\") " pod="openstack/barbican-db-create-k7hrg" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.541066 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cda59548-ac28-4988-88a5-f8770ab9c914-operator-scripts\") pod \"barbican-0f00-account-create-update-6lkbt\" (UID: \"cda59548-ac28-4988-88a5-f8770ab9c914\") " pod="openstack/barbican-0f00-account-create-update-6lkbt" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.541149 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mvd7\" (UniqueName: \"kubernetes.io/projected/c1301e33-c3d6-405b-8762-41744119af4d-kube-api-access-2mvd7\") pod \"cloudkitty-5c27-account-create-update-xtbjg\" (UID: \"c1301e33-c3d6-405b-8762-41744119af4d\") " pod="openstack/cloudkitty-5c27-account-create-update-xtbjg" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.541304 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwqlp\" (UniqueName: \"kubernetes.io/projected/87aa985f-3f4a-459d-b6be-6291e21c20c8-kube-api-access-pwqlp\") pod \"barbican-db-create-k7hrg\" (UID: \"87aa985f-3f4a-459d-b6be-6291e21c20c8\") " pod="openstack/barbican-db-create-k7hrg" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.540765 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1301e33-c3d6-405b-8762-41744119af4d-operator-scripts\") pod \"cloudkitty-5c27-account-create-update-xtbjg\" (UID: \"c1301e33-c3d6-405b-8762-41744119af4d\") " pod="openstack/cloudkitty-5c27-account-create-update-xtbjg" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.542605 4910 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87aa985f-3f4a-459d-b6be-6291e21c20c8-operator-scripts\") pod \"barbican-db-create-k7hrg\" (UID: \"87aa985f-3f4a-459d-b6be-6291e21c20c8\") " pod="openstack/barbican-db-create-k7hrg" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.543035 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cda59548-ac28-4988-88a5-f8770ab9c914-operator-scripts\") pod \"barbican-0f00-account-create-update-6lkbt\" (UID: \"cda59548-ac28-4988-88a5-f8770ab9c914\") " pod="openstack/barbican-0f00-account-create-update-6lkbt" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.551884 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-w8qxv"] Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.553996 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-w8qxv" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.563583 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-szq7t" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.567064 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mvd7\" (UniqueName: \"kubernetes.io/projected/c1301e33-c3d6-405b-8762-41744119af4d-kube-api-access-2mvd7\") pod \"cloudkitty-5c27-account-create-update-xtbjg\" (UID: \"c1301e33-c3d6-405b-8762-41744119af4d\") " pod="openstack/cloudkitty-5c27-account-create-update-xtbjg" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.577152 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwqlp\" (UniqueName: \"kubernetes.io/projected/87aa985f-3f4a-459d-b6be-6291e21c20c8-kube-api-access-pwqlp\") pod \"barbican-db-create-k7hrg\" (UID: \"87aa985f-3f4a-459d-b6be-6291e21c20c8\") " pod="openstack/barbican-db-create-k7hrg" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.584309 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wbvl\" (UniqueName: \"kubernetes.io/projected/cda59548-ac28-4988-88a5-f8770ab9c914-kube-api-access-5wbvl\") pod \"barbican-0f00-account-create-update-6lkbt\" (UID: \"cda59548-ac28-4988-88a5-f8770ab9c914\") " pod="openstack/barbican-0f00-account-create-update-6lkbt" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.584367 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-w8qxv"] Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.607048 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-711e-account-create-update-dnbzz"] Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.608386 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-711e-account-create-update-dnbzz" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.610460 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.630445 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-711e-account-create-update-dnbzz"] Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.641246 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-5c27-account-create-update-xtbjg" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.642738 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f83ef0b-ac5c-47dd-a763-66b3c7f31391-operator-scripts\") pod \"neutron-db-create-w8qxv\" (UID: \"3f83ef0b-ac5c-47dd-a763-66b3c7f31391\") " pod="openstack/neutron-db-create-w8qxv" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.642789 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7pmk\" (UniqueName: \"kubernetes.io/projected/3f83ef0b-ac5c-47dd-a763-66b3c7f31391-kube-api-access-p7pmk\") pod \"neutron-db-create-w8qxv\" (UID: \"3f83ef0b-ac5c-47dd-a763-66b3c7f31391\") " pod="openstack/neutron-db-create-w8qxv" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.669707 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-k7hrg" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.701908 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-0f00-account-create-update-6lkbt" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.744861 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f83ef0b-ac5c-47dd-a763-66b3c7f31391-operator-scripts\") pod \"neutron-db-create-w8qxv\" (UID: \"3f83ef0b-ac5c-47dd-a763-66b3c7f31391\") " pod="openstack/neutron-db-create-w8qxv" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.744934 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7pmk\" (UniqueName: \"kubernetes.io/projected/3f83ef0b-ac5c-47dd-a763-66b3c7f31391-kube-api-access-p7pmk\") pod \"neutron-db-create-w8qxv\" (UID: \"3f83ef0b-ac5c-47dd-a763-66b3c7f31391\") " pod="openstack/neutron-db-create-w8qxv" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.745007 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9c2599a-4a7d-4a05-9ffc-bab5996a139e-operator-scripts\") pod \"neutron-711e-account-create-update-dnbzz\" (UID: \"c9c2599a-4a7d-4a05-9ffc-bab5996a139e\") " pod="openstack/neutron-711e-account-create-update-dnbzz" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.745050 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6vsx\" (UniqueName: \"kubernetes.io/projected/c9c2599a-4a7d-4a05-9ffc-bab5996a139e-kube-api-access-n6vsx\") pod \"neutron-711e-account-create-update-dnbzz\" (UID: \"c9c2599a-4a7d-4a05-9ffc-bab5996a139e\") " pod="openstack/neutron-711e-account-create-update-dnbzz" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.745671 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f83ef0b-ac5c-47dd-a763-66b3c7f31391-operator-scripts\") pod 
\"neutron-db-create-w8qxv\" (UID: \"3f83ef0b-ac5c-47dd-a763-66b3c7f31391\") " pod="openstack/neutron-db-create-w8qxv" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.764678 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7pmk\" (UniqueName: \"kubernetes.io/projected/3f83ef0b-ac5c-47dd-a763-66b3c7f31391-kube-api-access-p7pmk\") pod \"neutron-db-create-w8qxv\" (UID: \"3f83ef0b-ac5c-47dd-a763-66b3c7f31391\") " pod="openstack/neutron-db-create-w8qxv" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.846821 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9c2599a-4a7d-4a05-9ffc-bab5996a139e-operator-scripts\") pod \"neutron-711e-account-create-update-dnbzz\" (UID: \"c9c2599a-4a7d-4a05-9ffc-bab5996a139e\") " pod="openstack/neutron-711e-account-create-update-dnbzz" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.846884 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6vsx\" (UniqueName: \"kubernetes.io/projected/c9c2599a-4a7d-4a05-9ffc-bab5996a139e-kube-api-access-n6vsx\") pod \"neutron-711e-account-create-update-dnbzz\" (UID: \"c9c2599a-4a7d-4a05-9ffc-bab5996a139e\") " pod="openstack/neutron-711e-account-create-update-dnbzz" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.848496 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9c2599a-4a7d-4a05-9ffc-bab5996a139e-operator-scripts\") pod \"neutron-711e-account-create-update-dnbzz\" (UID: \"c9c2599a-4a7d-4a05-9ffc-bab5996a139e\") " pod="openstack/neutron-711e-account-create-update-dnbzz" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.864272 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6vsx\" (UniqueName: 
\"kubernetes.io/projected/c9c2599a-4a7d-4a05-9ffc-bab5996a139e-kube-api-access-n6vsx\") pod \"neutron-711e-account-create-update-dnbzz\" (UID: \"c9c2599a-4a7d-4a05-9ffc-bab5996a139e\") " pod="openstack/neutron-711e-account-create-update-dnbzz" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.879463 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-w8qxv" Feb 26 22:16:29 crc kubenswrapper[4910]: I0226 22:16:29.930820 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-711e-account-create-update-dnbzz" Feb 26 22:16:32 crc kubenswrapper[4910]: I0226 22:16:32.686748 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cgwcw" event={"ID":"1d4bb0af-a11e-4f9f-a420-fc07f0220b10","Type":"ContainerDied","Data":"777486c1abeb992e2c7bbda06afe72ad512c0e30dbbfede86d14f172e0ef2af4"} Feb 26 22:16:32 crc kubenswrapper[4910]: I0226 22:16:32.687364 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="777486c1abeb992e2c7bbda06afe72ad512c0e30dbbfede86d14f172e0ef2af4" Feb 26 22:16:32 crc kubenswrapper[4910]: I0226 22:16:32.748412 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-cgwcw" Feb 26 22:16:32 crc kubenswrapper[4910]: I0226 22:16:32.816722 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-scripts\") pod \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\" (UID: \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\") " Feb 26 22:16:32 crc kubenswrapper[4910]: I0226 22:16:32.816782 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-swiftconf\") pod \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\" (UID: \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\") " Feb 26 22:16:32 crc kubenswrapper[4910]: I0226 22:16:32.816812 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-etc-swift\") pod \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\" (UID: \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\") " Feb 26 22:16:32 crc kubenswrapper[4910]: I0226 22:16:32.816838 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-dispersionconf\") pod \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\" (UID: \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\") " Feb 26 22:16:32 crc kubenswrapper[4910]: I0226 22:16:32.816932 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpdg7\" (UniqueName: \"kubernetes.io/projected/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-kube-api-access-fpdg7\") pod \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\" (UID: \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\") " Feb 26 22:16:32 crc kubenswrapper[4910]: I0226 22:16:32.816985 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-ring-data-devices\") pod \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\" (UID: \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\") " Feb 26 22:16:32 crc kubenswrapper[4910]: I0226 22:16:32.817096 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-combined-ca-bundle\") pod \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\" (UID: \"1d4bb0af-a11e-4f9f-a420-fc07f0220b10\") " Feb 26 22:16:32 crc kubenswrapper[4910]: I0226 22:16:32.818511 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "1d4bb0af-a11e-4f9f-a420-fc07f0220b10" (UID: "1d4bb0af-a11e-4f9f-a420-fc07f0220b10"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:16:32 crc kubenswrapper[4910]: I0226 22:16:32.819735 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "1d4bb0af-a11e-4f9f-a420-fc07f0220b10" (UID: "1d4bb0af-a11e-4f9f-a420-fc07f0220b10"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:16:32 crc kubenswrapper[4910]: I0226 22:16:32.823517 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-kube-api-access-fpdg7" (OuterVolumeSpecName: "kube-api-access-fpdg7") pod "1d4bb0af-a11e-4f9f-a420-fc07f0220b10" (UID: "1d4bb0af-a11e-4f9f-a420-fc07f0220b10"). InnerVolumeSpecName "kube-api-access-fpdg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:16:32 crc kubenswrapper[4910]: I0226 22:16:32.827148 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "1d4bb0af-a11e-4f9f-a420-fc07f0220b10" (UID: "1d4bb0af-a11e-4f9f-a420-fc07f0220b10"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:16:32 crc kubenswrapper[4910]: I0226 22:16:32.847269 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "1d4bb0af-a11e-4f9f-a420-fc07f0220b10" (UID: "1d4bb0af-a11e-4f9f-a420-fc07f0220b10"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:16:32 crc kubenswrapper[4910]: I0226 22:16:32.849622 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d4bb0af-a11e-4f9f-a420-fc07f0220b10" (UID: "1d4bb0af-a11e-4f9f-a420-fc07f0220b10"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:16:32 crc kubenswrapper[4910]: I0226 22:16:32.855007 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-scripts" (OuterVolumeSpecName: "scripts") pod "1d4bb0af-a11e-4f9f-a420-fc07f0220b10" (UID: "1d4bb0af-a11e-4f9f-a420-fc07f0220b10"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:16:32 crc kubenswrapper[4910]: I0226 22:16:32.928230 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:32 crc kubenswrapper[4910]: I0226 22:16:32.928569 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:32 crc kubenswrapper[4910]: I0226 22:16:32.928584 4910 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:32 crc kubenswrapper[4910]: I0226 22:16:32.928595 4910 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:32 crc kubenswrapper[4910]: I0226 22:16:32.928606 4910 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:32 crc kubenswrapper[4910]: I0226 22:16:32.928618 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpdg7\" (UniqueName: \"kubernetes.io/projected/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-kube-api-access-fpdg7\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:32 crc kubenswrapper[4910]: I0226 22:16:32.928635 4910 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1d4bb0af-a11e-4f9f-a420-fc07f0220b10-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.274502 4910 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-6tz7l" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.298783 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-6tz7l" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.387139 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sdx7z"] Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.538637 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ddsmc-config-z42tc"] Feb 26 22:16:33 crc kubenswrapper[4910]: E0226 22:16:33.539359 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d4bb0af-a11e-4f9f-a420-fc07f0220b10" containerName="swift-ring-rebalance" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.539374 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d4bb0af-a11e-4f9f-a420-fc07f0220b10" containerName="swift-ring-rebalance" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.539612 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d4bb0af-a11e-4f9f-a420-fc07f0220b10" containerName="swift-ring-rebalance" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.541803 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ddsmc-config-z42tc" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.550904 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.560532 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ddsmc-config-z42tc"] Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.661211 4910 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="3e40b05d-8071-4f6b-b2ab-160931200e8a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.677628 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czp9f\" (UniqueName: \"kubernetes.io/projected/b682df9a-0ce1-463f-94a2-821085d209f4-kube-api-access-czp9f\") pod \"ovn-controller-ddsmc-config-z42tc\" (UID: \"b682df9a-0ce1-463f-94a2-821085d209f4\") " pod="openstack/ovn-controller-ddsmc-config-z42tc" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.677787 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b682df9a-0ce1-463f-94a2-821085d209f4-var-log-ovn\") pod \"ovn-controller-ddsmc-config-z42tc\" (UID: \"b682df9a-0ce1-463f-94a2-821085d209f4\") " pod="openstack/ovn-controller-ddsmc-config-z42tc" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.677810 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b682df9a-0ce1-463f-94a2-821085d209f4-scripts\") pod \"ovn-controller-ddsmc-config-z42tc\" (UID: \"b682df9a-0ce1-463f-94a2-821085d209f4\") " pod="openstack/ovn-controller-ddsmc-config-z42tc" Feb 26 22:16:33 
crc kubenswrapper[4910]: I0226 22:16:33.677843 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b682df9a-0ce1-463f-94a2-821085d209f4-var-run-ovn\") pod \"ovn-controller-ddsmc-config-z42tc\" (UID: \"b682df9a-0ce1-463f-94a2-821085d209f4\") " pod="openstack/ovn-controller-ddsmc-config-z42tc" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.677859 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b682df9a-0ce1-463f-94a2-821085d209f4-additional-scripts\") pod \"ovn-controller-ddsmc-config-z42tc\" (UID: \"b682df9a-0ce1-463f-94a2-821085d209f4\") " pod="openstack/ovn-controller-ddsmc-config-z42tc" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.677923 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b682df9a-0ce1-463f-94a2-821085d209f4-var-run\") pod \"ovn-controller-ddsmc-config-z42tc\" (UID: \"b682df9a-0ce1-463f-94a2-821085d209f4\") " pod="openstack/ovn-controller-ddsmc-config-z42tc" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.699812 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f98f3d3a-39ee-4b35-8653-ae334df58fca","Type":"ContainerStarted","Data":"d3b40ae6ed787f50864aace32c2e168e4836c1bbb61c2f602f153a76858b2ea3"} Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.699998 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.701555 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sdx7z" 
event={"ID":"6113e4f6-f02a-4810-a3f2-74245b547b37","Type":"ContainerStarted","Data":"bd68f29d52f32a5d4c24e680695a8aebb59a743998602c5a4f288558399a49af"} Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.701581 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sdx7z" event={"ID":"6113e4f6-f02a-4810-a3f2-74245b547b37","Type":"ContainerStarted","Data":"34825abf7425b7e2650411fb9beb7171718108881e21fbda575b13d916f79183"} Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.705703 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2f98425b-65de-48d2-be21-2c443218eacd","Type":"ContainerStarted","Data":"1a437785b48de08d2d07c586be980cc51f97eeb0319b64417c9d1493655a4786"} Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.705748 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-cgwcw" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.725946 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371960.128845 podStartE2EDuration="1m16.725929903s" podCreationTimestamp="2026-02-26 22:15:17 +0000 UTC" firstStartedPulling="2026-02-26 22:15:19.319238457 +0000 UTC m=+1204.398729008" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:16:33.722172008 +0000 UTC m=+1278.801662559" watchObservedRunningTime="2026-02-26 22:16:33.725929903 +0000 UTC m=+1278.805420444" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.737513 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-sdx7z" podStartSLOduration=10.737499045 podStartE2EDuration="10.737499045s" podCreationTimestamp="2026-02-26 22:16:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-26 22:16:33.735061578 +0000 UTC m=+1278.814552119" watchObservedRunningTime="2026-02-26 22:16:33.737499045 +0000 UTC m=+1278.816989586" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.773540 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=23.542712473 podStartE2EDuration="1m10.773516748s" podCreationTimestamp="2026-02-26 22:15:23 +0000 UTC" firstStartedPulling="2026-02-26 22:15:45.511664571 +0000 UTC m=+1230.591155112" lastFinishedPulling="2026-02-26 22:16:32.742468826 +0000 UTC m=+1277.821959387" observedRunningTime="2026-02-26 22:16:33.760717302 +0000 UTC m=+1278.840207843" watchObservedRunningTime="2026-02-26 22:16:33.773516748 +0000 UTC m=+1278.853007289" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.780539 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czp9f\" (UniqueName: \"kubernetes.io/projected/b682df9a-0ce1-463f-94a2-821085d209f4-kube-api-access-czp9f\") pod \"ovn-controller-ddsmc-config-z42tc\" (UID: \"b682df9a-0ce1-463f-94a2-821085d209f4\") " pod="openstack/ovn-controller-ddsmc-config-z42tc" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.780739 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b682df9a-0ce1-463f-94a2-821085d209f4-var-log-ovn\") pod \"ovn-controller-ddsmc-config-z42tc\" (UID: \"b682df9a-0ce1-463f-94a2-821085d209f4\") " pod="openstack/ovn-controller-ddsmc-config-z42tc" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.780769 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b682df9a-0ce1-463f-94a2-821085d209f4-scripts\") pod \"ovn-controller-ddsmc-config-z42tc\" (UID: \"b682df9a-0ce1-463f-94a2-821085d209f4\") " pod="openstack/ovn-controller-ddsmc-config-z42tc" Feb 26 
22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.780817 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b682df9a-0ce1-463f-94a2-821085d209f4-var-run-ovn\") pod \"ovn-controller-ddsmc-config-z42tc\" (UID: \"b682df9a-0ce1-463f-94a2-821085d209f4\") " pod="openstack/ovn-controller-ddsmc-config-z42tc" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.780839 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b682df9a-0ce1-463f-94a2-821085d209f4-additional-scripts\") pod \"ovn-controller-ddsmc-config-z42tc\" (UID: \"b682df9a-0ce1-463f-94a2-821085d209f4\") " pod="openstack/ovn-controller-ddsmc-config-z42tc" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.780969 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b682df9a-0ce1-463f-94a2-821085d209f4-var-run\") pod \"ovn-controller-ddsmc-config-z42tc\" (UID: \"b682df9a-0ce1-463f-94a2-821085d209f4\") " pod="openstack/ovn-controller-ddsmc-config-z42tc" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.782315 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b682df9a-0ce1-463f-94a2-821085d209f4-var-run\") pod \"ovn-controller-ddsmc-config-z42tc\" (UID: \"b682df9a-0ce1-463f-94a2-821085d209f4\") " pod="openstack/ovn-controller-ddsmc-config-z42tc" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.782380 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b682df9a-0ce1-463f-94a2-821085d209f4-var-run-ovn\") pod \"ovn-controller-ddsmc-config-z42tc\" (UID: \"b682df9a-0ce1-463f-94a2-821085d209f4\") " pod="openstack/ovn-controller-ddsmc-config-z42tc" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 
22:16:33.782439 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b682df9a-0ce1-463f-94a2-821085d209f4-additional-scripts\") pod \"ovn-controller-ddsmc-config-z42tc\" (UID: \"b682df9a-0ce1-463f-94a2-821085d209f4\") " pod="openstack/ovn-controller-ddsmc-config-z42tc" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.783107 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b682df9a-0ce1-463f-94a2-821085d209f4-var-log-ovn\") pod \"ovn-controller-ddsmc-config-z42tc\" (UID: \"b682df9a-0ce1-463f-94a2-821085d209f4\") " pod="openstack/ovn-controller-ddsmc-config-z42tc" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.785671 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b682df9a-0ce1-463f-94a2-821085d209f4-scripts\") pod \"ovn-controller-ddsmc-config-z42tc\" (UID: \"b682df9a-0ce1-463f-94a2-821085d209f4\") " pod="openstack/ovn-controller-ddsmc-config-z42tc" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.864198 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czp9f\" (UniqueName: \"kubernetes.io/projected/b682df9a-0ce1-463f-94a2-821085d209f4-kube-api-access-czp9f\") pod \"ovn-controller-ddsmc-config-z42tc\" (UID: \"b682df9a-0ce1-463f-94a2-821085d209f4\") " pod="openstack/ovn-controller-ddsmc-config-z42tc" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.931021 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-d7gr8"] Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.931410 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-711e-account-create-update-dnbzz"] Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.934466 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-db-create-w8qxv"] Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.942104 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ddsmc-config-z42tc" Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.948368 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-10a3-account-create-update-slfn2"] Feb 26 22:16:33 crc kubenswrapper[4910]: W0226 22:16:33.949346 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9c2599a_4a7d_4a05_9ffc_bab5996a139e.slice/crio-f829b5084977fd7120a1b3e8aa6fca13cba6fd7f6b44eba7723b85623de8b9c4 WatchSource:0}: Error finding container f829b5084977fd7120a1b3e8aa6fca13cba6fd7f6b44eba7723b85623de8b9c4: Status 404 returned error can't find the container with id f829b5084977fd7120a1b3e8aa6fca13cba6fd7f6b44eba7723b85623de8b9c4 Feb 26 22:16:33 crc kubenswrapper[4910]: W0226 22:16:33.954636 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad794ede_dbfc_4a7f_80b9_9742f1eaed3a.slice/crio-38eb25815c939cf5bc5c9eeb5fcb1d8d2f656a31891c8cce5a4fb75e96db4a58 WatchSource:0}: Error finding container 38eb25815c939cf5bc5c9eeb5fcb1d8d2f656a31891c8cce5a4fb75e96db4a58: Status 404 returned error can't find the container with id 38eb25815c939cf5bc5c9eeb5fcb1d8d2f656a31891c8cce5a4fb75e96db4a58 Feb 26 22:16:33 crc kubenswrapper[4910]: W0226 22:16:33.959659 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52ce7064_53c4_4861_a60a_996a62f24e55.slice/crio-9e13ab52620b6a386c70a69351af964db7b4801be492e878552ed35b4402d3b1 WatchSource:0}: Error finding container 9e13ab52620b6a386c70a69351af964db7b4801be492e878552ed35b4402d3b1: Status 404 returned error can't find the container with id 
9e13ab52620b6a386c70a69351af964db7b4801be492e878552ed35b4402d3b1 Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.961812 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0f00-account-create-update-6lkbt"] Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.974299 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-k7hrg"] Feb 26 22:16:33 crc kubenswrapper[4910]: I0226 22:16:33.989839 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-dc6mx"] Feb 26 22:16:34 crc kubenswrapper[4910]: I0226 22:16:34.018368 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-szq7t"] Feb 26 22:16:34 crc kubenswrapper[4910]: I0226 22:16:34.027147 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-5c27-account-create-update-xtbjg"] Feb 26 22:16:34 crc kubenswrapper[4910]: I0226 22:16:34.516008 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ddsmc-config-z42tc"] Feb 26 22:16:34 crc kubenswrapper[4910]: W0226 22:16:34.567414 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb682df9a_0ce1_463f_94a2_821085d209f4.slice/crio-81fa6491770c128f07dc01dd149295d95ef884d9fc8106234640cf659b4caa3f WatchSource:0}: Error finding container 81fa6491770c128f07dc01dd149295d95ef884d9fc8106234640cf659b4caa3f: Status 404 returned error can't find the container with id 81fa6491770c128f07dc01dd149295d95ef884d9fc8106234640cf659b4caa3f Feb 26 22:16:34 crc kubenswrapper[4910]: I0226 22:16:34.719682 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-d7gr8" event={"ID":"ad794ede-dbfc-4a7f-80b9-9742f1eaed3a","Type":"ContainerStarted","Data":"4c719b86b40a17409845ae4bab97583caef70962ff8219608829595edb21d6e7"} Feb 26 22:16:34 crc kubenswrapper[4910]: I0226 22:16:34.719724 4910 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-d7gr8" event={"ID":"ad794ede-dbfc-4a7f-80b9-9742f1eaed3a","Type":"ContainerStarted","Data":"38eb25815c939cf5bc5c9eeb5fcb1d8d2f656a31891c8cce5a4fb75e96db4a58"} Feb 26 22:16:34 crc kubenswrapper[4910]: I0226 22:16:34.721460 4910 generic.go:334] "Generic (PLEG): container finished" podID="6113e4f6-f02a-4810-a3f2-74245b547b37" containerID="bd68f29d52f32a5d4c24e680695a8aebb59a743998602c5a4f288558399a49af" exitCode=0 Feb 26 22:16:34 crc kubenswrapper[4910]: I0226 22:16:34.721503 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sdx7z" event={"ID":"6113e4f6-f02a-4810-a3f2-74245b547b37","Type":"ContainerDied","Data":"bd68f29d52f32a5d4c24e680695a8aebb59a743998602c5a4f288558399a49af"} Feb 26 22:16:34 crc kubenswrapper[4910]: I0226 22:16:34.723045 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-k7hrg" event={"ID":"87aa985f-3f4a-459d-b6be-6291e21c20c8","Type":"ContainerStarted","Data":"854934e0c361799fcaea98a14b1855a7a47354e149c0d0d39cb187a418beffc4"} Feb 26 22:16:34 crc kubenswrapper[4910]: I0226 22:16:34.724945 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-10a3-account-create-update-slfn2" event={"ID":"52ce7064-53c4-4861-a60a-996a62f24e55","Type":"ContainerStarted","Data":"9153d35c2b4d119652b5efda1508b437585610285dafefd33403f7caa2b25872"} Feb 26 22:16:34 crc kubenswrapper[4910]: I0226 22:16:34.724969 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-10a3-account-create-update-slfn2" event={"ID":"52ce7064-53c4-4861-a60a-996a62f24e55","Type":"ContainerStarted","Data":"9e13ab52620b6a386c70a69351af964db7b4801be492e878552ed35b4402d3b1"} Feb 26 22:16:34 crc kubenswrapper[4910]: I0226 22:16:34.727289 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-pfjh4" 
event={"ID":"a10fff0b-5682-4806-82e0-0d19db3deae4","Type":"ContainerStarted","Data":"f7d047433e6a305a7f2810f8182e6e30f3993fe0c3ba9853be094bd7ded8835d"} Feb 26 22:16:34 crc kubenswrapper[4910]: I0226 22:16:34.738603 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-711e-account-create-update-dnbzz" event={"ID":"c9c2599a-4a7d-4a05-9ffc-bab5996a139e","Type":"ContainerStarted","Data":"82983bb1ce663ffcc1cee79ff7d1da673eec49353b7972c7be71c56c2a082cf7"} Feb 26 22:16:34 crc kubenswrapper[4910]: I0226 22:16:34.738843 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-711e-account-create-update-dnbzz" event={"ID":"c9c2599a-4a7d-4a05-9ffc-bab5996a139e","Type":"ContainerStarted","Data":"f829b5084977fd7120a1b3e8aa6fca13cba6fd7f6b44eba7723b85623de8b9c4"} Feb 26 22:16:34 crc kubenswrapper[4910]: I0226 22:16:34.750323 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0f00-account-create-update-6lkbt" event={"ID":"cda59548-ac28-4988-88a5-f8770ab9c914","Type":"ContainerStarted","Data":"6a68e0cf6e38eefec3903e70cf247c952b39c01c3c8321549ff2046d31850c1d"} Feb 26 22:16:34 crc kubenswrapper[4910]: I0226 22:16:34.760010 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-szq7t" event={"ID":"0230e94e-a757-4c5f-afed-0f4d1e769f7a","Type":"ContainerStarted","Data":"a74455a5f47d757a00e3c86d438938b937beacd350b0be3fd0cfda74b81d21c4"} Feb 26 22:16:34 crc kubenswrapper[4910]: I0226 22:16:34.779000 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-w8qxv" event={"ID":"3f83ef0b-ac5c-47dd-a763-66b3c7f31391","Type":"ContainerStarted","Data":"74e33576d4683df14af0a3d9437a1ad96d63f89de7c176f66892e9ce059276ad"} Feb 26 22:16:34 crc kubenswrapper[4910]: I0226 22:16:34.787421 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ddsmc-config-z42tc" 
event={"ID":"b682df9a-0ce1-463f-94a2-821085d209f4","Type":"ContainerStarted","Data":"81fa6491770c128f07dc01dd149295d95ef884d9fc8106234640cf659b4caa3f"} Feb 26 22:16:34 crc kubenswrapper[4910]: I0226 22:16:34.788564 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dc6mx" event={"ID":"aa07e8fd-4975-4e27-9b98-ec23e75b271d","Type":"ContainerStarted","Data":"dd056395b7b82f3e6ee2b4432b9e5f33fca8366960f81df118c33d31330fd8ea"} Feb 26 22:16:34 crc kubenswrapper[4910]: I0226 22:16:34.790839 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-5c27-account-create-update-xtbjg" event={"ID":"c1301e33-c3d6-405b-8762-41744119af4d","Type":"ContainerStarted","Data":"0bdfa3a6de975022b266b2e167429ce077d02a09ae3a88471691106525fb3fb2"} Feb 26 22:16:34 crc kubenswrapper[4910]: I0226 22:16:34.807582 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-pfjh4" podStartSLOduration=10.046926075 podStartE2EDuration="23.807557204s" podCreationTimestamp="2026-02-26 22:16:11 +0000 UTC" firstStartedPulling="2026-02-26 22:16:19.072688497 +0000 UTC m=+1264.152179038" lastFinishedPulling="2026-02-26 22:16:32.833319626 +0000 UTC m=+1277.912810167" observedRunningTime="2026-02-26 22:16:34.778728421 +0000 UTC m=+1279.858218972" watchObservedRunningTime="2026-02-26 22:16:34.807557204 +0000 UTC m=+1279.887047745" Feb 26 22:16:34 crc kubenswrapper[4910]: I0226 22:16:34.811785 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-10a3-account-create-update-slfn2" podStartSLOduration=5.811768781 podStartE2EDuration="5.811768781s" podCreationTimestamp="2026-02-26 22:16:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:16:34.799298614 +0000 UTC m=+1279.878789155" watchObservedRunningTime="2026-02-26 22:16:34.811768781 +0000 UTC m=+1279.891259322" Feb 26 
22:16:34 crc kubenswrapper[4910]: I0226 22:16:34.832267 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-711e-account-create-update-dnbzz" podStartSLOduration=5.8322483609999995 podStartE2EDuration="5.832248361s" podCreationTimestamp="2026-02-26 22:16:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:16:34.814475017 +0000 UTC m=+1279.893965568" watchObservedRunningTime="2026-02-26 22:16:34.832248361 +0000 UTC m=+1279.911738902" Feb 26 22:16:34 crc kubenswrapper[4910]: I0226 22:16:34.994477 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:35 crc kubenswrapper[4910]: I0226 22:16:35.800109 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-k7hrg" event={"ID":"87aa985f-3f4a-459d-b6be-6291e21c20c8","Type":"ContainerStarted","Data":"89fc9b3aa4e08125f994d6b09f72fbb29dbafaab15e6d529088df229db73a802"} Feb 26 22:16:35 crc kubenswrapper[4910]: I0226 22:16:35.801843 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-w8qxv" event={"ID":"3f83ef0b-ac5c-47dd-a763-66b3c7f31391","Type":"ContainerStarted","Data":"6e1ccaf85888b10f9e6db901f2fce1330456bc4d1a111f44ac1cc0bd551d4d40"} Feb 26 22:16:35 crc kubenswrapper[4910]: I0226 22:16:35.803564 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ddsmc-config-z42tc" event={"ID":"b682df9a-0ce1-463f-94a2-821085d209f4","Type":"ContainerStarted","Data":"23561c0500be695516e100d5358f0b566e186eaa98c0706a936d2f051fc12b77"} Feb 26 22:16:35 crc kubenswrapper[4910]: I0226 22:16:35.809684 4910 generic.go:334] "Generic (PLEG): container finished" podID="c9c2599a-4a7d-4a05-9ffc-bab5996a139e" containerID="82983bb1ce663ffcc1cee79ff7d1da673eec49353b7972c7be71c56c2a082cf7" exitCode=0 Feb 26 22:16:35 crc kubenswrapper[4910]: 
I0226 22:16:35.809811 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-711e-account-create-update-dnbzz" event={"ID":"c9c2599a-4a7d-4a05-9ffc-bab5996a139e","Type":"ContainerDied","Data":"82983bb1ce663ffcc1cee79ff7d1da673eec49353b7972c7be71c56c2a082cf7"} Feb 26 22:16:35 crc kubenswrapper[4910]: I0226 22:16:35.819784 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-5c27-account-create-update-xtbjg" event={"ID":"c1301e33-c3d6-405b-8762-41744119af4d","Type":"ContainerStarted","Data":"ead4b74f299d8d539aed77abf42ac189d2b0388f7802f1f35fab7978780ab19b"} Feb 26 22:16:35 crc kubenswrapper[4910]: I0226 22:16:35.823378 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-k7hrg" podStartSLOduration=6.823362191 podStartE2EDuration="6.823362191s" podCreationTimestamp="2026-02-26 22:16:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:16:35.813898837 +0000 UTC m=+1280.893389378" watchObservedRunningTime="2026-02-26 22:16:35.823362191 +0000 UTC m=+1280.902852732" Feb 26 22:16:35 crc kubenswrapper[4910]: I0226 22:16:35.825059 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0f00-account-create-update-6lkbt" event={"ID":"cda59548-ac28-4988-88a5-f8770ab9c914","Type":"ContainerStarted","Data":"4b4e20de794ae8185d7e74f9c004ba3b4be045d2a8e34cb7e2c94859b15cd272"} Feb 26 22:16:35 crc kubenswrapper[4910]: I0226 22:16:35.826116 4910 generic.go:334] "Generic (PLEG): container finished" podID="ad794ede-dbfc-4a7f-80b9-9742f1eaed3a" containerID="4c719b86b40a17409845ae4bab97583caef70962ff8219608829595edb21d6e7" exitCode=0 Feb 26 22:16:35 crc kubenswrapper[4910]: I0226 22:16:35.826247 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-d7gr8" 
event={"ID":"ad794ede-dbfc-4a7f-80b9-9742f1eaed3a","Type":"ContainerDied","Data":"4c719b86b40a17409845ae4bab97583caef70962ff8219608829595edb21d6e7"} Feb 26 22:16:35 crc kubenswrapper[4910]: I0226 22:16:35.829363 4910 generic.go:334] "Generic (PLEG): container finished" podID="52ce7064-53c4-4861-a60a-996a62f24e55" containerID="9153d35c2b4d119652b5efda1508b437585610285dafefd33403f7caa2b25872" exitCode=0 Feb 26 22:16:35 crc kubenswrapper[4910]: I0226 22:16:35.829434 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-10a3-account-create-update-slfn2" event={"ID":"52ce7064-53c4-4861-a60a-996a62f24e55","Type":"ContainerDied","Data":"9153d35c2b4d119652b5efda1508b437585610285dafefd33403f7caa2b25872"} Feb 26 22:16:35 crc kubenswrapper[4910]: I0226 22:16:35.835212 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dc6mx" event={"ID":"aa07e8fd-4975-4e27-9b98-ec23e75b271d","Type":"ContainerStarted","Data":"293fd214ee91f4bc22769fac129adbe53eb861dfd62123477752fdf57a1cdec2"} Feb 26 22:16:35 crc kubenswrapper[4910]: I0226 22:16:35.851207 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-w8qxv" podStartSLOduration=6.8511813759999995 podStartE2EDuration="6.851181376s" podCreationTimestamp="2026-02-26 22:16:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:16:35.841496646 +0000 UTC m=+1280.920987187" watchObservedRunningTime="2026-02-26 22:16:35.851181376 +0000 UTC m=+1280.930671917" Feb 26 22:16:35 crc kubenswrapper[4910]: I0226 22:16:35.872993 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ddsmc-config-z42tc" podStartSLOduration=2.872956982 podStartE2EDuration="2.872956982s" podCreationTimestamp="2026-02-26 22:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:16:35.858036987 +0000 UTC m=+1280.937527518" watchObservedRunningTime="2026-02-26 22:16:35.872956982 +0000 UTC m=+1280.952447523" Feb 26 22:16:35 crc kubenswrapper[4910]: I0226 22:16:35.911220 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-5c27-account-create-update-xtbjg" podStartSLOduration=6.911198977 podStartE2EDuration="6.911198977s" podCreationTimestamp="2026-02-26 22:16:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:16:35.901355653 +0000 UTC m=+1280.980846204" watchObservedRunningTime="2026-02-26 22:16:35.911198977 +0000 UTC m=+1280.990689518" Feb 26 22:16:35 crc kubenswrapper[4910]: I0226 22:16:35.947544 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-0f00-account-create-update-6lkbt" podStartSLOduration=6.947523739 podStartE2EDuration="6.947523739s" podCreationTimestamp="2026-02-26 22:16:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:16:35.937398957 +0000 UTC m=+1281.016889498" watchObservedRunningTime="2026-02-26 22:16:35.947523739 +0000 UTC m=+1281.027014280" Feb 26 22:16:36 crc kubenswrapper[4910]: I0226 22:16:36.236984 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sdx7z" Feb 26 22:16:36 crc kubenswrapper[4910]: I0226 22:16:36.355443 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6113e4f6-f02a-4810-a3f2-74245b547b37-operator-scripts\") pod \"6113e4f6-f02a-4810-a3f2-74245b547b37\" (UID: \"6113e4f6-f02a-4810-a3f2-74245b547b37\") " Feb 26 22:16:36 crc kubenswrapper[4910]: I0226 22:16:36.355539 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7kzt\" (UniqueName: \"kubernetes.io/projected/6113e4f6-f02a-4810-a3f2-74245b547b37-kube-api-access-r7kzt\") pod \"6113e4f6-f02a-4810-a3f2-74245b547b37\" (UID: \"6113e4f6-f02a-4810-a3f2-74245b547b37\") " Feb 26 22:16:36 crc kubenswrapper[4910]: I0226 22:16:36.356357 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6113e4f6-f02a-4810-a3f2-74245b547b37-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6113e4f6-f02a-4810-a3f2-74245b547b37" (UID: "6113e4f6-f02a-4810-a3f2-74245b547b37"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:16:36 crc kubenswrapper[4910]: I0226 22:16:36.371392 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6113e4f6-f02a-4810-a3f2-74245b547b37-kube-api-access-r7kzt" (OuterVolumeSpecName: "kube-api-access-r7kzt") pod "6113e4f6-f02a-4810-a3f2-74245b547b37" (UID: "6113e4f6-f02a-4810-a3f2-74245b547b37"). InnerVolumeSpecName "kube-api-access-r7kzt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:16:36 crc kubenswrapper[4910]: I0226 22:16:36.458391 4910 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6113e4f6-f02a-4810-a3f2-74245b547b37-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:36 crc kubenswrapper[4910]: I0226 22:16:36.458438 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7kzt\" (UniqueName: \"kubernetes.io/projected/6113e4f6-f02a-4810-a3f2-74245b547b37-kube-api-access-r7kzt\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:36 crc kubenswrapper[4910]: I0226 22:16:36.763039 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/30b027eb-e942-4121-aebc-776d616b902e-etc-swift\") pod \"swift-storage-0\" (UID: \"30b027eb-e942-4121-aebc-776d616b902e\") " pod="openstack/swift-storage-0" Feb 26 22:16:36 crc kubenswrapper[4910]: I0226 22:16:36.777057 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/30b027eb-e942-4121-aebc-776d616b902e-etc-swift\") pod \"swift-storage-0\" (UID: \"30b027eb-e942-4121-aebc-776d616b902e\") " pod="openstack/swift-storage-0" Feb 26 22:16:36 crc kubenswrapper[4910]: I0226 22:16:36.858281 4910 generic.go:334] "Generic (PLEG): container finished" podID="cda59548-ac28-4988-88a5-f8770ab9c914" containerID="4b4e20de794ae8185d7e74f9c004ba3b4be045d2a8e34cb7e2c94859b15cd272" exitCode=0 Feb 26 22:16:36 crc kubenswrapper[4910]: I0226 22:16:36.858411 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0f00-account-create-update-6lkbt" event={"ID":"cda59548-ac28-4988-88a5-f8770ab9c914","Type":"ContainerDied","Data":"4b4e20de794ae8185d7e74f9c004ba3b4be045d2a8e34cb7e2c94859b15cd272"} Feb 26 22:16:36 crc kubenswrapper[4910]: I0226 22:16:36.860851 4910 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/root-account-create-update-sdx7z" event={"ID":"6113e4f6-f02a-4810-a3f2-74245b547b37","Type":"ContainerDied","Data":"34825abf7425b7e2650411fb9beb7171718108881e21fbda575b13d916f79183"} Feb 26 22:16:36 crc kubenswrapper[4910]: I0226 22:16:36.860888 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34825abf7425b7e2650411fb9beb7171718108881e21fbda575b13d916f79183" Feb 26 22:16:36 crc kubenswrapper[4910]: I0226 22:16:36.860915 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sdx7z" Feb 26 22:16:36 crc kubenswrapper[4910]: I0226 22:16:36.863360 4910 generic.go:334] "Generic (PLEG): container finished" podID="87aa985f-3f4a-459d-b6be-6291e21c20c8" containerID="89fc9b3aa4e08125f994d6b09f72fbb29dbafaab15e6d529088df229db73a802" exitCode=0 Feb 26 22:16:36 crc kubenswrapper[4910]: I0226 22:16:36.863423 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-k7hrg" event={"ID":"87aa985f-3f4a-459d-b6be-6291e21c20c8","Type":"ContainerDied","Data":"89fc9b3aa4e08125f994d6b09f72fbb29dbafaab15e6d529088df229db73a802"} Feb 26 22:16:36 crc kubenswrapper[4910]: I0226 22:16:36.865031 4910 generic.go:334] "Generic (PLEG): container finished" podID="3f83ef0b-ac5c-47dd-a763-66b3c7f31391" containerID="6e1ccaf85888b10f9e6db901f2fce1330456bc4d1a111f44ac1cc0bd551d4d40" exitCode=0 Feb 26 22:16:36 crc kubenswrapper[4910]: I0226 22:16:36.865103 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-w8qxv" event={"ID":"3f83ef0b-ac5c-47dd-a763-66b3c7f31391","Type":"ContainerDied","Data":"6e1ccaf85888b10f9e6db901f2fce1330456bc4d1a111f44ac1cc0bd551d4d40"} Feb 26 22:16:36 crc kubenswrapper[4910]: I0226 22:16:36.866910 4910 generic.go:334] "Generic (PLEG): container finished" podID="b682df9a-0ce1-463f-94a2-821085d209f4" containerID="23561c0500be695516e100d5358f0b566e186eaa98c0706a936d2f051fc12b77" 
exitCode=0 Feb 26 22:16:36 crc kubenswrapper[4910]: I0226 22:16:36.866970 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ddsmc-config-z42tc" event={"ID":"b682df9a-0ce1-463f-94a2-821085d209f4","Type":"ContainerDied","Data":"23561c0500be695516e100d5358f0b566e186eaa98c0706a936d2f051fc12b77"} Feb 26 22:16:36 crc kubenswrapper[4910]: I0226 22:16:36.874662 4910 generic.go:334] "Generic (PLEG): container finished" podID="aa07e8fd-4975-4e27-9b98-ec23e75b271d" containerID="293fd214ee91f4bc22769fac129adbe53eb861dfd62123477752fdf57a1cdec2" exitCode=0 Feb 26 22:16:36 crc kubenswrapper[4910]: I0226 22:16:36.874745 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dc6mx" event={"ID":"aa07e8fd-4975-4e27-9b98-ec23e75b271d","Type":"ContainerDied","Data":"293fd214ee91f4bc22769fac129adbe53eb861dfd62123477752fdf57a1cdec2"} Feb 26 22:16:36 crc kubenswrapper[4910]: I0226 22:16:36.891097 4910 generic.go:334] "Generic (PLEG): container finished" podID="c1301e33-c3d6-405b-8762-41744119af4d" containerID="ead4b74f299d8d539aed77abf42ac189d2b0388f7802f1f35fab7978780ab19b" exitCode=0 Feb 26 22:16:36 crc kubenswrapper[4910]: I0226 22:16:36.891364 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-5c27-account-create-update-xtbjg" event={"ID":"c1301e33-c3d6-405b-8762-41744119af4d","Type":"ContainerDied","Data":"ead4b74f299d8d539aed77abf42ac189d2b0388f7802f1f35fab7978780ab19b"} Feb 26 22:16:36 crc kubenswrapper[4910]: I0226 22:16:36.971057 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0"
Feb 26 22:16:38 crc kubenswrapper[4910]: I0226 22:16:38.196319 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ddsmc"
Feb 26 22:16:39 crc kubenswrapper[4910]: I0226 22:16:39.942252 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-sdx7z"]
Feb 26 22:16:39 crc kubenswrapper[4910]: I0226 22:16:39.961409 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-sdx7z"]
Feb 26 22:16:39 crc kubenswrapper[4910]: I0226 22:16:39.962370 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-5c27-account-create-update-xtbjg" event={"ID":"c1301e33-c3d6-405b-8762-41744119af4d","Type":"ContainerDied","Data":"0bdfa3a6de975022b266b2e167429ce077d02a09ae3a88471691106525fb3fb2"}
Feb 26 22:16:39 crc kubenswrapper[4910]: I0226 22:16:39.962410 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bdfa3a6de975022b266b2e167429ce077d02a09ae3a88471691106525fb3fb2"
Feb 26 22:16:39 crc kubenswrapper[4910]: I0226 22:16:39.963829 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-w8qxv" event={"ID":"3f83ef0b-ac5c-47dd-a763-66b3c7f31391","Type":"ContainerDied","Data":"74e33576d4683df14af0a3d9437a1ad96d63f89de7c176f66892e9ce059276ad"}
Feb 26 22:16:39 crc kubenswrapper[4910]: I0226 22:16:39.963866 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74e33576d4683df14af0a3d9437a1ad96d63f89de7c176f66892e9ce059276ad"
Feb 26 22:16:39 crc kubenswrapper[4910]: I0226 22:16:39.968088 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-10a3-account-create-update-slfn2" event={"ID":"52ce7064-53c4-4861-a60a-996a62f24e55","Type":"ContainerDied","Data":"9e13ab52620b6a386c70a69351af964db7b4801be492e878552ed35b4402d3b1"}
Feb 26 22:16:39 crc kubenswrapper[4910]: I0226 22:16:39.968127 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e13ab52620b6a386c70a69351af964db7b4801be492e878552ed35b4402d3b1"
Feb 26 22:16:39 crc kubenswrapper[4910]: I0226 22:16:39.971834 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ddsmc-config-z42tc" event={"ID":"b682df9a-0ce1-463f-94a2-821085d209f4","Type":"ContainerDied","Data":"81fa6491770c128f07dc01dd149295d95ef884d9fc8106234640cf659b4caa3f"}
Feb 26 22:16:39 crc kubenswrapper[4910]: I0226 22:16:39.971871 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81fa6491770c128f07dc01dd149295d95ef884d9fc8106234640cf659b4caa3f"
Feb 26 22:16:39 crc kubenswrapper[4910]: I0226 22:16:39.976762 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dc6mx" event={"ID":"aa07e8fd-4975-4e27-9b98-ec23e75b271d","Type":"ContainerDied","Data":"dd056395b7b82f3e6ee2b4432b9e5f33fca8366960f81df118c33d31330fd8ea"}
Feb 26 22:16:39 crc kubenswrapper[4910]: I0226 22:16:39.976794 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd056395b7b82f3e6ee2b4432b9e5f33fca8366960f81df118c33d31330fd8ea"
Feb 26 22:16:39 crc kubenswrapper[4910]: I0226 22:16:39.978544 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-711e-account-create-update-dnbzz" event={"ID":"c9c2599a-4a7d-4a05-9ffc-bab5996a139e","Type":"ContainerDied","Data":"f829b5084977fd7120a1b3e8aa6fca13cba6fd7f6b44eba7723b85623de8b9c4"}
Feb 26 22:16:39 crc kubenswrapper[4910]: I0226 22:16:39.978643 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f829b5084977fd7120a1b3e8aa6fca13cba6fd7f6b44eba7723b85623de8b9c4"
Feb 26 22:16:39 crc kubenswrapper[4910]: I0226 22:16:39.989523 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0f00-account-create-update-6lkbt" event={"ID":"cda59548-ac28-4988-88a5-f8770ab9c914","Type":"ContainerDied","Data":"6a68e0cf6e38eefec3903e70cf247c952b39c01c3c8321549ff2046d31850c1d"}
Feb 26 22:16:39 crc kubenswrapper[4910]: I0226 22:16:39.989662 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a68e0cf6e38eefec3903e70cf247c952b39c01c3c8321549ff2046d31850c1d"
Feb 26 22:16:39 crc kubenswrapper[4910]: I0226 22:16:39.990786 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Feb 26 22:16:39 crc kubenswrapper[4910]: I0226 22:16:39.993078 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Feb 26 22:16:39 crc kubenswrapper[4910]: I0226 22:16:39.993813 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-d7gr8" event={"ID":"ad794ede-dbfc-4a7f-80b9-9742f1eaed3a","Type":"ContainerDied","Data":"38eb25815c939cf5bc5c9eeb5fcb1d8d2f656a31891c8cce5a4fb75e96db4a58"}
Feb 26 22:16:39 crc kubenswrapper[4910]: I0226 22:16:39.993852 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38eb25815c939cf5bc5c9eeb5fcb1d8d2f656a31891c8cce5a4fb75e96db4a58"
Feb 26 22:16:39 crc kubenswrapper[4910]: I0226 22:16:39.997976 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-k7hrg" event={"ID":"87aa985f-3f4a-459d-b6be-6291e21c20c8","Type":"ContainerDied","Data":"854934e0c361799fcaea98a14b1855a7a47354e149c0d0d39cb187a418beffc4"}
Feb 26 22:16:39 crc kubenswrapper[4910]: I0226 22:16:39.998060 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="854934e0c361799fcaea98a14b1855a7a47354e149c0d0d39cb187a418beffc4"
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.103616 4910 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-10a3-account-create-update-slfn2"
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.107482 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0f00-account-create-update-6lkbt"
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.125258 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52ce7064-53c4-4861-a60a-996a62f24e55-operator-scripts\") pod \"52ce7064-53c4-4861-a60a-996a62f24e55\" (UID: \"52ce7064-53c4-4861-a60a-996a62f24e55\") "
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.125362 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wbvl\" (UniqueName: \"kubernetes.io/projected/cda59548-ac28-4988-88a5-f8770ab9c914-kube-api-access-5wbvl\") pod \"cda59548-ac28-4988-88a5-f8770ab9c914\" (UID: \"cda59548-ac28-4988-88a5-f8770ab9c914\") "
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.125460 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqwrg\" (UniqueName: \"kubernetes.io/projected/52ce7064-53c4-4861-a60a-996a62f24e55-kube-api-access-xqwrg\") pod \"52ce7064-53c4-4861-a60a-996a62f24e55\" (UID: \"52ce7064-53c4-4861-a60a-996a62f24e55\") "
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.125518 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cda59548-ac28-4988-88a5-f8770ab9c914-operator-scripts\") pod \"cda59548-ac28-4988-88a5-f8770ab9c914\" (UID: \"cda59548-ac28-4988-88a5-f8770ab9c914\") "
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.125832 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52ce7064-53c4-4861-a60a-996a62f24e55-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "52ce7064-53c4-4861-a60a-996a62f24e55" (UID: "52ce7064-53c4-4861-a60a-996a62f24e55"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.125955 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cda59548-ac28-4988-88a5-f8770ab9c914-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cda59548-ac28-4988-88a5-f8770ab9c914" (UID: "cda59548-ac28-4988-88a5-f8770ab9c914"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.126022 4910 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52ce7064-53c4-4861-a60a-996a62f24e55-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.130608 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-k7hrg"
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.132091 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cda59548-ac28-4988-88a5-f8770ab9c914-kube-api-access-5wbvl" (OuterVolumeSpecName: "kube-api-access-5wbvl") pod "cda59548-ac28-4988-88a5-f8770ab9c914" (UID: "cda59548-ac28-4988-88a5-f8770ab9c914"). InnerVolumeSpecName "kube-api-access-5wbvl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.133425 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52ce7064-53c4-4861-a60a-996a62f24e55-kube-api-access-xqwrg" (OuterVolumeSpecName: "kube-api-access-xqwrg") pod "52ce7064-53c4-4861-a60a-996a62f24e55" (UID: "52ce7064-53c4-4861-a60a-996a62f24e55"). InnerVolumeSpecName "kube-api-access-xqwrg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.135903 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-711e-account-create-update-dnbzz"
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.151627 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dc6mx"
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.154948 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ddsmc-config-z42tc"
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.159065 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-w8qxv"
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.165651 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-5c27-account-create-update-xtbjg"
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.175903 4910 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/cloudkitty-db-create-d7gr8"
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.227173 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b682df9a-0ce1-463f-94a2-821085d209f4-scripts\") pod \"b682df9a-0ce1-463f-94a2-821085d209f4\" (UID: \"b682df9a-0ce1-463f-94a2-821085d209f4\") "
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.227416 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czp9f\" (UniqueName: \"kubernetes.io/projected/b682df9a-0ce1-463f-94a2-821085d209f4-kube-api-access-czp9f\") pod \"b682df9a-0ce1-463f-94a2-821085d209f4\" (UID: \"b682df9a-0ce1-463f-94a2-821085d209f4\") "
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.227491 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b682df9a-0ce1-463f-94a2-821085d209f4-var-log-ovn\") pod \"b682df9a-0ce1-463f-94a2-821085d209f4\" (UID: \"b682df9a-0ce1-463f-94a2-821085d209f4\") "
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.227521 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b682df9a-0ce1-463f-94a2-821085d209f4-var-run-ovn\") pod \"b682df9a-0ce1-463f-94a2-821085d209f4\" (UID: \"b682df9a-0ce1-463f-94a2-821085d209f4\") "
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.227542 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjvvf\" (UniqueName: \"kubernetes.io/projected/ad794ede-dbfc-4a7f-80b9-9742f1eaed3a-kube-api-access-bjvvf\") pod \"ad794ede-dbfc-4a7f-80b9-9742f1eaed3a\" (UID: \"ad794ede-dbfc-4a7f-80b9-9742f1eaed3a\") "
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.227565 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9c2599a-4a7d-4a05-9ffc-bab5996a139e-operator-scripts\") pod \"c9c2599a-4a7d-4a05-9ffc-bab5996a139e\" (UID: \"c9c2599a-4a7d-4a05-9ffc-bab5996a139e\") "
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.227604 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7pmk\" (UniqueName: \"kubernetes.io/projected/3f83ef0b-ac5c-47dd-a763-66b3c7f31391-kube-api-access-p7pmk\") pod \"3f83ef0b-ac5c-47dd-a763-66b3c7f31391\" (UID: \"3f83ef0b-ac5c-47dd-a763-66b3c7f31391\") "
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.227665 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwqlp\" (UniqueName: \"kubernetes.io/projected/87aa985f-3f4a-459d-b6be-6291e21c20c8-kube-api-access-pwqlp\") pod \"87aa985f-3f4a-459d-b6be-6291e21c20c8\" (UID: \"87aa985f-3f4a-459d-b6be-6291e21c20c8\") "
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.227684 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkqxw\" (UniqueName: \"kubernetes.io/projected/aa07e8fd-4975-4e27-9b98-ec23e75b271d-kube-api-access-vkqxw\") pod \"aa07e8fd-4975-4e27-9b98-ec23e75b271d\" (UID: \"aa07e8fd-4975-4e27-9b98-ec23e75b271d\") "
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.227757 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa07e8fd-4975-4e27-9b98-ec23e75b271d-operator-scripts\") pod \"aa07e8fd-4975-4e27-9b98-ec23e75b271d\" (UID: \"aa07e8fd-4975-4e27-9b98-ec23e75b271d\") "
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.227807 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mvd7\" (UniqueName: \"kubernetes.io/projected/c1301e33-c3d6-405b-8762-41744119af4d-kube-api-access-2mvd7\") pod \"c1301e33-c3d6-405b-8762-41744119af4d\" (UID: \"c1301e33-c3d6-405b-8762-41744119af4d\") "
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.227823 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6vsx\" (UniqueName: \"kubernetes.io/projected/c9c2599a-4a7d-4a05-9ffc-bab5996a139e-kube-api-access-n6vsx\") pod \"c9c2599a-4a7d-4a05-9ffc-bab5996a139e\" (UID: \"c9c2599a-4a7d-4a05-9ffc-bab5996a139e\") "
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.227842 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad794ede-dbfc-4a7f-80b9-9742f1eaed3a-operator-scripts\") pod \"ad794ede-dbfc-4a7f-80b9-9742f1eaed3a\" (UID: \"ad794ede-dbfc-4a7f-80b9-9742f1eaed3a\") "
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.227855 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b682df9a-0ce1-463f-94a2-821085d209f4-var-run\") pod \"b682df9a-0ce1-463f-94a2-821085d209f4\" (UID: \"b682df9a-0ce1-463f-94a2-821085d209f4\") "
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.227870 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87aa985f-3f4a-459d-b6be-6291e21c20c8-operator-scripts\") pod \"87aa985f-3f4a-459d-b6be-6291e21c20c8\" (UID: \"87aa985f-3f4a-459d-b6be-6291e21c20c8\") "
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.227887 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1301e33-c3d6-405b-8762-41744119af4d-operator-scripts\") pod \"c1301e33-c3d6-405b-8762-41744119af4d\" (UID: \"c1301e33-c3d6-405b-8762-41744119af4d\") "
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.227911 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\"
(UniqueName: \"kubernetes.io/configmap/3f83ef0b-ac5c-47dd-a763-66b3c7f31391-operator-scripts\") pod \"3f83ef0b-ac5c-47dd-a763-66b3c7f31391\" (UID: \"3f83ef0b-ac5c-47dd-a763-66b3c7f31391\") "
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.227927 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b682df9a-0ce1-463f-94a2-821085d209f4-additional-scripts\") pod \"b682df9a-0ce1-463f-94a2-821085d209f4\" (UID: \"b682df9a-0ce1-463f-94a2-821085d209f4\") "
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.228064 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b682df9a-0ce1-463f-94a2-821085d209f4-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "b682df9a-0ce1-463f-94a2-821085d209f4" (UID: "b682df9a-0ce1-463f-94a2-821085d209f4"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.228598 4910 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cda59548-ac28-4988-88a5-f8770ab9c914-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.228618 4910 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b682df9a-0ce1-463f-94a2-821085d209f4-var-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.228629 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wbvl\" (UniqueName: \"kubernetes.io/projected/cda59548-ac28-4988-88a5-f8770ab9c914-kube-api-access-5wbvl\") on node \"crc\" DevicePath \"\""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.228641 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqwrg\" (UniqueName: \"kubernetes.io/projected/52ce7064-53c4-4861-a60a-996a62f24e55-kube-api-access-xqwrg\") on node \"crc\" DevicePath \"\""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.228679 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b682df9a-0ce1-463f-94a2-821085d209f4-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "b682df9a-0ce1-463f-94a2-821085d209f4" (UID: "b682df9a-0ce1-463f-94a2-821085d209f4"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.229020 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad794ede-dbfc-4a7f-80b9-9742f1eaed3a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad794ede-dbfc-4a7f-80b9-9742f1eaed3a" (UID: "ad794ede-dbfc-4a7f-80b9-9742f1eaed3a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.229290 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b682df9a-0ce1-463f-94a2-821085d209f4-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "b682df9a-0ce1-463f-94a2-821085d209f4" (UID: "b682df9a-0ce1-463f-94a2-821085d209f4"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.229052 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87aa985f-3f4a-459d-b6be-6291e21c20c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "87aa985f-3f4a-459d-b6be-6291e21c20c8" (UID: "87aa985f-3f4a-459d-b6be-6291e21c20c8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.229233 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b682df9a-0ce1-463f-94a2-821085d209f4-var-run" (OuterVolumeSpecName: "var-run") pod "b682df9a-0ce1-463f-94a2-821085d209f4" (UID: "b682df9a-0ce1-463f-94a2-821085d209f4"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.229266 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1301e33-c3d6-405b-8762-41744119af4d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c1301e33-c3d6-405b-8762-41744119af4d" (UID: "c1301e33-c3d6-405b-8762-41744119af4d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.229559 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f83ef0b-ac5c-47dd-a763-66b3c7f31391-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3f83ef0b-ac5c-47dd-a763-66b3c7f31391" (UID: "3f83ef0b-ac5c-47dd-a763-66b3c7f31391"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.229650 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9c2599a-4a7d-4a05-9ffc-bab5996a139e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c9c2599a-4a7d-4a05-9ffc-bab5996a139e" (UID: "c9c2599a-4a7d-4a05-9ffc-bab5996a139e"). InnerVolumeSpecName "operator-scripts".
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.230140 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa07e8fd-4975-4e27-9b98-ec23e75b271d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aa07e8fd-4975-4e27-9b98-ec23e75b271d" (UID: "aa07e8fd-4975-4e27-9b98-ec23e75b271d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.230193 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b682df9a-0ce1-463f-94a2-821085d209f4-scripts" (OuterVolumeSpecName: "scripts") pod "b682df9a-0ce1-463f-94a2-821085d209f4" (UID: "b682df9a-0ce1-463f-94a2-821085d209f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.235701 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f83ef0b-ac5c-47dd-a763-66b3c7f31391-kube-api-access-p7pmk" (OuterVolumeSpecName: "kube-api-access-p7pmk") pod "3f83ef0b-ac5c-47dd-a763-66b3c7f31391" (UID: "3f83ef0b-ac5c-47dd-a763-66b3c7f31391"). InnerVolumeSpecName "kube-api-access-p7pmk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.236351 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1301e33-c3d6-405b-8762-41744119af4d-kube-api-access-2mvd7" (OuterVolumeSpecName: "kube-api-access-2mvd7") pod "c1301e33-c3d6-405b-8762-41744119af4d" (UID: "c1301e33-c3d6-405b-8762-41744119af4d"). InnerVolumeSpecName "kube-api-access-2mvd7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.236493 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa07e8fd-4975-4e27-9b98-ec23e75b271d-kube-api-access-vkqxw" (OuterVolumeSpecName: "kube-api-access-vkqxw") pod "aa07e8fd-4975-4e27-9b98-ec23e75b271d" (UID: "aa07e8fd-4975-4e27-9b98-ec23e75b271d"). InnerVolumeSpecName "kube-api-access-vkqxw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.238292 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87aa985f-3f4a-459d-b6be-6291e21c20c8-kube-api-access-pwqlp" (OuterVolumeSpecName: "kube-api-access-pwqlp") pod "87aa985f-3f4a-459d-b6be-6291e21c20c8" (UID: "87aa985f-3f4a-459d-b6be-6291e21c20c8"). InnerVolumeSpecName "kube-api-access-pwqlp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.238563 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad794ede-dbfc-4a7f-80b9-9742f1eaed3a-kube-api-access-bjvvf" (OuterVolumeSpecName: "kube-api-access-bjvvf") pod "ad794ede-dbfc-4a7f-80b9-9742f1eaed3a" (UID: "ad794ede-dbfc-4a7f-80b9-9742f1eaed3a"). InnerVolumeSpecName "kube-api-access-bjvvf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.239303 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9c2599a-4a7d-4a05-9ffc-bab5996a139e-kube-api-access-n6vsx" (OuterVolumeSpecName: "kube-api-access-n6vsx") pod "c9c2599a-4a7d-4a05-9ffc-bab5996a139e" (UID: "c9c2599a-4a7d-4a05-9ffc-bab5996a139e"). InnerVolumeSpecName "kube-api-access-n6vsx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.239346 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b682df9a-0ce1-463f-94a2-821085d209f4-kube-api-access-czp9f" (OuterVolumeSpecName: "kube-api-access-czp9f") pod "b682df9a-0ce1-463f-94a2-821085d209f4" (UID: "b682df9a-0ce1-463f-94a2-821085d209f4"). InnerVolumeSpecName "kube-api-access-czp9f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.330337 4910 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b682df9a-0ce1-463f-94a2-821085d209f4-var-log-ovn\") on node \"crc\" DevicePath \"\""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.330370 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjvvf\" (UniqueName: \"kubernetes.io/projected/ad794ede-dbfc-4a7f-80b9-9742f1eaed3a-kube-api-access-bjvvf\") on node \"crc\" DevicePath \"\""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.330384 4910 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9c2599a-4a7d-4a05-9ffc-bab5996a139e-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.330396 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7pmk\" (UniqueName: \"kubernetes.io/projected/3f83ef0b-ac5c-47dd-a763-66b3c7f31391-kube-api-access-p7pmk\") on node \"crc\" DevicePath \"\""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.330405 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwqlp\" (UniqueName: \"kubernetes.io/projected/87aa985f-3f4a-459d-b6be-6291e21c20c8-kube-api-access-pwqlp\") on node \"crc\" DevicePath \"\""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.330413 4910 reconciler_common.go:293]
"Volume detached for volume \"kube-api-access-vkqxw\" (UniqueName: \"kubernetes.io/projected/aa07e8fd-4975-4e27-9b98-ec23e75b271d-kube-api-access-vkqxw\") on node \"crc\" DevicePath \"\""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.330421 4910 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa07e8fd-4975-4e27-9b98-ec23e75b271d-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.330429 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mvd7\" (UniqueName: \"kubernetes.io/projected/c1301e33-c3d6-405b-8762-41744119af4d-kube-api-access-2mvd7\") on node \"crc\" DevicePath \"\""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.330439 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6vsx\" (UniqueName: \"kubernetes.io/projected/c9c2599a-4a7d-4a05-9ffc-bab5996a139e-kube-api-access-n6vsx\") on node \"crc\" DevicePath \"\""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.330448 4910 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad794ede-dbfc-4a7f-80b9-9742f1eaed3a-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.330457 4910 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b682df9a-0ce1-463f-94a2-821085d209f4-var-run\") on node \"crc\" DevicePath \"\""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.330466 4910 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87aa985f-3f4a-459d-b6be-6291e21c20c8-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.330474 4910 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1301e33-c3d6-405b-8762-41744119af4d-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.330482 4910 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f83ef0b-ac5c-47dd-a763-66b3c7f31391-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.330490 4910 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b682df9a-0ce1-463f-94a2-821085d209f4-additional-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.330499 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b682df9a-0ce1-463f-94a2-821085d209f4-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.330508 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czp9f\" (UniqueName: \"kubernetes.io/projected/b682df9a-0ce1-463f-94a2-821085d209f4-kube-api-access-czp9f\") on node \"crc\" DevicePath \"\""
Feb 26 22:16:40 crc kubenswrapper[4910]: W0226 22:16:40.758936 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30b027eb_e942_4121_aebc_776d616b902e.slice/crio-27dd20ca21eac88068d26aabbe9d07deab500a8232a39d31f75f7a5a49ccb102 WatchSource:0}: Error finding container 27dd20ca21eac88068d26aabbe9d07deab500a8232a39d31f75f7a5a49ccb102: Status 404 returned error can't find the container with id 27dd20ca21eac88068d26aabbe9d07deab500a8232a39d31f75f7a5a49ccb102
Feb 26 22:16:40 crc kubenswrapper[4910]: I0226 22:16:40.777664 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 26 22:16:41 crc kubenswrapper[4910]: I0226 22:16:41.014721 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"30b027eb-e942-4121-aebc-776d616b902e","Type":"ContainerStarted","Data":"27dd20ca21eac88068d26aabbe9d07deab500a8232a39d31f75f7a5a49ccb102"}
Feb 26 22:16:41 crc kubenswrapper[4910]: I0226 22:16:41.017723 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-711e-account-create-update-dnbzz"
Feb 26 22:16:41 crc kubenswrapper[4910]: I0226 22:16:41.017773 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-k7hrg"
Feb 26 22:16:41 crc kubenswrapper[4910]: I0226 22:16:41.017755 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0f00-account-create-update-6lkbt"
Feb 26 22:16:41 crc kubenswrapper[4910]: I0226 22:16:41.017768 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-szq7t" event={"ID":"0230e94e-a757-4c5f-afed-0f4d1e769f7a","Type":"ContainerStarted","Data":"c2509f0feb5a78a50198f375a96870f2e809c20dd901317266094bc9b17dd888"}
Feb 26 22:16:41 crc kubenswrapper[4910]: I0226 22:16:41.017835 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-d7gr8"
Feb 26 22:16:41 crc kubenswrapper[4910]: I0226 22:16:41.017839 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-w8qxv"
Feb 26 22:16:41 crc kubenswrapper[4910]: I0226 22:16:41.017864 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dc6mx"
Feb 26 22:16:41 crc kubenswrapper[4910]: I0226 22:16:41.017872 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ddsmc-config-z42tc"
Feb 26 22:16:41 crc kubenswrapper[4910]: I0226 22:16:41.017879 4910 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-10a3-account-create-update-slfn2"
Feb 26 22:16:41 crc kubenswrapper[4910]: I0226 22:16:41.018004 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-5c27-account-create-update-xtbjg"
Feb 26 22:16:41 crc kubenswrapper[4910]: I0226 22:16:41.023531 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Feb 26 22:16:41 crc kubenswrapper[4910]: I0226 22:16:41.050010 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-szq7t" podStartSLOduration=6.08318805 podStartE2EDuration="12.04999563s" podCreationTimestamp="2026-02-26 22:16:29 +0000 UTC" firstStartedPulling="2026-02-26 22:16:33.984323719 +0000 UTC m=+1279.063814260" lastFinishedPulling="2026-02-26 22:16:39.951131289 +0000 UTC m=+1285.030621840" observedRunningTime="2026-02-26 22:16:41.043636652 +0000 UTC m=+1286.123127203" watchObservedRunningTime="2026-02-26 22:16:41.04999563 +0000 UTC m=+1286.129486171"
Feb 26 22:16:41 crc kubenswrapper[4910]: I0226 22:16:41.343597 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ddsmc-config-z42tc"]
Feb 26 22:16:41 crc kubenswrapper[4910]: I0226 22:16:41.350728 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ddsmc-config-z42tc"]
Feb 26 22:16:41 crc kubenswrapper[4910]: I0226 22:16:41.917861 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6113e4f6-f02a-4810-a3f2-74245b547b37" path="/var/lib/kubelet/pods/6113e4f6-f02a-4810-a3f2-74245b547b37/volumes"
Feb 26 22:16:41 crc kubenswrapper[4910]: I0226 22:16:41.918541 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b682df9a-0ce1-463f-94a2-821085d209f4" path="/var/lib/kubelet/pods/b682df9a-0ce1-463f-94a2-821085d209f4/volumes"
Feb 26 22:16:42 crc kubenswrapper[4910]: I0226 22:16:42.027961 4910 generic.go:334] "Generic (PLEG): container finished" podID="a10fff0b-5682-4806-82e0-0d19db3deae4" containerID="f7d047433e6a305a7f2810f8182e6e30f3993fe0c3ba9853be094bd7ded8835d" exitCode=0
Feb 26 22:16:42 crc kubenswrapper[4910]: I0226 22:16:42.028060 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-pfjh4" event={"ID":"a10fff0b-5682-4806-82e0-0d19db3deae4","Type":"ContainerDied","Data":"f7d047433e6a305a7f2810f8182e6e30f3993fe0c3ba9853be094bd7ded8835d"}
Feb 26 22:16:43 crc kubenswrapper[4910]: I0226 22:16:43.042048 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"30b027eb-e942-4121-aebc-776d616b902e","Type":"ContainerStarted","Data":"c2423defda57602cd9e4d2855f14ecae950b6b46ddf661ac7a6c205fa9d762f8"}
Feb 26 22:16:43 crc kubenswrapper[4910]: I0226 22:16:43.042368 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"30b027eb-e942-4121-aebc-776d616b902e","Type":"ContainerStarted","Data":"df7f7cab39a474ff58e07250bedb6b6c6b609af25ae9e73e801f14de1d8fdca2"}
Feb 26 22:16:43 crc kubenswrapper[4910]: I0226 22:16:43.042379 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"30b027eb-e942-4121-aebc-776d616b902e","Type":"ContainerStarted","Data":"aa25bbfb871ed24066a39144d965fb96bda8f8e326d9805d054417e509af4d68"}
Feb 26 22:16:43 crc kubenswrapper[4910]: I0226 22:16:43.508666 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-pfjh4"
Feb 26 22:16:43 crc kubenswrapper[4910]: I0226 22:16:43.595057 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a10fff0b-5682-4806-82e0-0d19db3deae4-db-sync-config-data\") pod \"a10fff0b-5682-4806-82e0-0d19db3deae4\" (UID: \"a10fff0b-5682-4806-82e0-0d19db3deae4\") "
Feb 26 22:16:43 crc kubenswrapper[4910]: I0226 22:16:43.595126 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10fff0b-5682-4806-82e0-0d19db3deae4-combined-ca-bundle\") pod \"a10fff0b-5682-4806-82e0-0d19db3deae4\" (UID: \"a10fff0b-5682-4806-82e0-0d19db3deae4\") "
Feb 26 22:16:43 crc kubenswrapper[4910]: I0226 22:16:43.595325 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7rzh\" (UniqueName: \"kubernetes.io/projected/a10fff0b-5682-4806-82e0-0d19db3deae4-kube-api-access-w7rzh\") pod \"a10fff0b-5682-4806-82e0-0d19db3deae4\" (UID: \"a10fff0b-5682-4806-82e0-0d19db3deae4\") "
Feb 26 22:16:43 crc kubenswrapper[4910]: I0226 22:16:43.595376 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10fff0b-5682-4806-82e0-0d19db3deae4-config-data\") pod \"a10fff0b-5682-4806-82e0-0d19db3deae4\" (UID: \"a10fff0b-5682-4806-82e0-0d19db3deae4\") "
Feb 26 22:16:43 crc kubenswrapper[4910]: I0226 22:16:43.600760 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a10fff0b-5682-4806-82e0-0d19db3deae4-kube-api-access-w7rzh" (OuterVolumeSpecName: "kube-api-access-w7rzh") pod "a10fff0b-5682-4806-82e0-0d19db3deae4" (UID: "a10fff0b-5682-4806-82e0-0d19db3deae4"). InnerVolumeSpecName "kube-api-access-w7rzh".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:16:43 crc kubenswrapper[4910]: I0226 22:16:43.602986 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a10fff0b-5682-4806-82e0-0d19db3deae4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a10fff0b-5682-4806-82e0-0d19db3deae4" (UID: "a10fff0b-5682-4806-82e0-0d19db3deae4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:16:43 crc kubenswrapper[4910]: I0226 22:16:43.636613 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 22:16:43 crc kubenswrapper[4910]: I0226 22:16:43.637281 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="2f98425b-65de-48d2-be21-2c443218eacd" containerName="config-reloader" containerID="cri-o://ba9e28ab288bac4d0a62f2effca6d14bea90aeac5171ed430bdfcac17a5407be" gracePeriod=600 Feb 26 22:16:43 crc kubenswrapper[4910]: I0226 22:16:43.636974 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="2f98425b-65de-48d2-be21-2c443218eacd" containerName="prometheus" containerID="cri-o://dc36ca0064009a273deeee99f6ddba18fa15369d2698389d7c803044ccba12cf" gracePeriod=600 Feb 26 22:16:43 crc kubenswrapper[4910]: I0226 22:16:43.638634 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="2f98425b-65de-48d2-be21-2c443218eacd" containerName="thanos-sidecar" containerID="cri-o://1a437785b48de08d2d07c586be980cc51f97eeb0319b64417c9d1493655a4786" gracePeriod=600 Feb 26 22:16:43 crc kubenswrapper[4910]: I0226 22:16:43.644566 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a10fff0b-5682-4806-82e0-0d19db3deae4-combined-ca-bundle" (OuterVolumeSpecName: 
"combined-ca-bundle") pod "a10fff0b-5682-4806-82e0-0d19db3deae4" (UID: "a10fff0b-5682-4806-82e0-0d19db3deae4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:16:43 crc kubenswrapper[4910]: I0226 22:16:43.657398 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a10fff0b-5682-4806-82e0-0d19db3deae4-config-data" (OuterVolumeSpecName: "config-data") pod "a10fff0b-5682-4806-82e0-0d19db3deae4" (UID: "a10fff0b-5682-4806-82e0-0d19db3deae4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:16:43 crc kubenswrapper[4910]: I0226 22:16:43.662083 4910 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="3e40b05d-8071-4f6b-b2ab-160931200e8a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 26 22:16:43 crc kubenswrapper[4910]: I0226 22:16:43.697862 4910 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a10fff0b-5682-4806-82e0-0d19db3deae4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:43 crc kubenswrapper[4910]: I0226 22:16:43.697908 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10fff0b-5682-4806-82e0-0d19db3deae4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:43 crc kubenswrapper[4910]: I0226 22:16:43.697919 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7rzh\" (UniqueName: \"kubernetes.io/projected/a10fff0b-5682-4806-82e0-0d19db3deae4-kube-api-access-w7rzh\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:43 crc kubenswrapper[4910]: I0226 22:16:43.697930 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a10fff0b-5682-4806-82e0-0d19db3deae4-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:43 crc kubenswrapper[4910]: E0226 22:16:43.858113 4910 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f98425b_65de_48d2_be21_2c443218eacd.slice/crio-1a437785b48de08d2d07c586be980cc51f97eeb0319b64417c9d1493655a4786.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f98425b_65de_48d2_be21_2c443218eacd.slice/crio-conmon-1a437785b48de08d2d07c586be980cc51f97eeb0319b64417c9d1493655a4786.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f98425b_65de_48d2_be21_2c443218eacd.slice/crio-dc36ca0064009a273deeee99f6ddba18fa15369d2698389d7c803044ccba12cf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f98425b_65de_48d2_be21_2c443218eacd.slice/crio-conmon-dc36ca0064009a273deeee99f6ddba18fa15369d2698389d7c803044ccba12cf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f98425b_65de_48d2_be21_2c443218eacd.slice/crio-conmon-ba9e28ab288bac4d0a62f2effca6d14bea90aeac5171ed430bdfcac17a5407be.scope\": RecentStats: unable to find data in memory cache]" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.055747 4910 generic.go:334] "Generic (PLEG): container finished" podID="2f98425b-65de-48d2-be21-2c443218eacd" containerID="1a437785b48de08d2d07c586be980cc51f97eeb0319b64417c9d1493655a4786" exitCode=0 Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.055996 4910 generic.go:334] "Generic (PLEG): container finished" podID="2f98425b-65de-48d2-be21-2c443218eacd" containerID="ba9e28ab288bac4d0a62f2effca6d14bea90aeac5171ed430bdfcac17a5407be" 
exitCode=0 Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.056004 4910 generic.go:334] "Generic (PLEG): container finished" podID="2f98425b-65de-48d2-be21-2c443218eacd" containerID="dc36ca0064009a273deeee99f6ddba18fa15369d2698389d7c803044ccba12cf" exitCode=0 Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.055816 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2f98425b-65de-48d2-be21-2c443218eacd","Type":"ContainerDied","Data":"1a437785b48de08d2d07c586be980cc51f97eeb0319b64417c9d1493655a4786"} Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.056087 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2f98425b-65de-48d2-be21-2c443218eacd","Type":"ContainerDied","Data":"ba9e28ab288bac4d0a62f2effca6d14bea90aeac5171ed430bdfcac17a5407be"} Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.056139 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2f98425b-65de-48d2-be21-2c443218eacd","Type":"ContainerDied","Data":"dc36ca0064009a273deeee99f6ddba18fa15369d2698389d7c803044ccba12cf"} Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.064688 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-pfjh4" event={"ID":"a10fff0b-5682-4806-82e0-0d19db3deae4","Type":"ContainerDied","Data":"330a915f26e05b0eb5934736862aabc81209bd4a1b901b93cedf410f8052ce04"} Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.064723 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-pfjh4" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.064733 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="330a915f26e05b0eb5934736862aabc81209bd4a1b901b93cedf410f8052ce04" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.076239 4910 generic.go:334] "Generic (PLEG): container finished" podID="0230e94e-a757-4c5f-afed-0f4d1e769f7a" containerID="c2509f0feb5a78a50198f375a96870f2e809c20dd901317266094bc9b17dd888" exitCode=0 Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.076309 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-szq7t" event={"ID":"0230e94e-a757-4c5f-afed-0f4d1e769f7a","Type":"ContainerDied","Data":"c2509f0feb5a78a50198f375a96870f2e809c20dd901317266094bc9b17dd888"} Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.083085 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"30b027eb-e942-4121-aebc-776d616b902e","Type":"ContainerStarted","Data":"07b618098a4c31727a9dedfe5c8685078887b7bb182b81823a50cf61f3fc3c13"} Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.344957 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-6bh9x"] Feb 26 22:16:44 crc kubenswrapper[4910]: E0226 22:16:44.345330 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6113e4f6-f02a-4810-a3f2-74245b547b37" containerName="mariadb-account-create-update" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.345349 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="6113e4f6-f02a-4810-a3f2-74245b547b37" containerName="mariadb-account-create-update" Feb 26 22:16:44 crc kubenswrapper[4910]: E0226 22:16:44.345359 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1301e33-c3d6-405b-8762-41744119af4d" containerName="mariadb-account-create-update" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 
22:16:44.345366 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1301e33-c3d6-405b-8762-41744119af4d" containerName="mariadb-account-create-update" Feb 26 22:16:44 crc kubenswrapper[4910]: E0226 22:16:44.345373 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f83ef0b-ac5c-47dd-a763-66b3c7f31391" containerName="mariadb-database-create" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.345380 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f83ef0b-ac5c-47dd-a763-66b3c7f31391" containerName="mariadb-database-create" Feb 26 22:16:44 crc kubenswrapper[4910]: E0226 22:16:44.345416 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda59548-ac28-4988-88a5-f8770ab9c914" containerName="mariadb-account-create-update" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.345422 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda59548-ac28-4988-88a5-f8770ab9c914" containerName="mariadb-account-create-update" Feb 26 22:16:44 crc kubenswrapper[4910]: E0226 22:16:44.345434 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9c2599a-4a7d-4a05-9ffc-bab5996a139e" containerName="mariadb-account-create-update" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.345440 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c2599a-4a7d-4a05-9ffc-bab5996a139e" containerName="mariadb-account-create-update" Feb 26 22:16:44 crc kubenswrapper[4910]: E0226 22:16:44.345457 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa07e8fd-4975-4e27-9b98-ec23e75b271d" containerName="mariadb-database-create" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.345463 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa07e8fd-4975-4e27-9b98-ec23e75b271d" containerName="mariadb-database-create" Feb 26 22:16:44 crc kubenswrapper[4910]: E0226 22:16:44.345473 4910 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a10fff0b-5682-4806-82e0-0d19db3deae4" containerName="glance-db-sync" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.345478 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="a10fff0b-5682-4806-82e0-0d19db3deae4" containerName="glance-db-sync" Feb 26 22:16:44 crc kubenswrapper[4910]: E0226 22:16:44.345487 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52ce7064-53c4-4861-a60a-996a62f24e55" containerName="mariadb-account-create-update" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.345493 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="52ce7064-53c4-4861-a60a-996a62f24e55" containerName="mariadb-account-create-update" Feb 26 22:16:44 crc kubenswrapper[4910]: E0226 22:16:44.345501 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87aa985f-3f4a-459d-b6be-6291e21c20c8" containerName="mariadb-database-create" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.345507 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="87aa985f-3f4a-459d-b6be-6291e21c20c8" containerName="mariadb-database-create" Feb 26 22:16:44 crc kubenswrapper[4910]: E0226 22:16:44.345521 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b682df9a-0ce1-463f-94a2-821085d209f4" containerName="ovn-config" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.345526 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="b682df9a-0ce1-463f-94a2-821085d209f4" containerName="ovn-config" Feb 26 22:16:44 crc kubenswrapper[4910]: E0226 22:16:44.345536 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad794ede-dbfc-4a7f-80b9-9742f1eaed3a" containerName="mariadb-database-create" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.345542 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad794ede-dbfc-4a7f-80b9-9742f1eaed3a" containerName="mariadb-database-create" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.345699 4910 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="aa07e8fd-4975-4e27-9b98-ec23e75b271d" containerName="mariadb-database-create" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.345712 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1301e33-c3d6-405b-8762-41744119af4d" containerName="mariadb-account-create-update" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.345719 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f83ef0b-ac5c-47dd-a763-66b3c7f31391" containerName="mariadb-database-create" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.345741 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad794ede-dbfc-4a7f-80b9-9742f1eaed3a" containerName="mariadb-database-create" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.345751 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="87aa985f-3f4a-459d-b6be-6291e21c20c8" containerName="mariadb-database-create" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.345762 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="a10fff0b-5682-4806-82e0-0d19db3deae4" containerName="glance-db-sync" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.345771 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="b682df9a-0ce1-463f-94a2-821085d209f4" containerName="ovn-config" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.345778 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="cda59548-ac28-4988-88a5-f8770ab9c914" containerName="mariadb-account-create-update" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.345785 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="52ce7064-53c4-4861-a60a-996a62f24e55" containerName="mariadb-account-create-update" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.345797 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9c2599a-4a7d-4a05-9ffc-bab5996a139e" 
containerName="mariadb-account-create-update" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.345808 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="6113e4f6-f02a-4810-a3f2-74245b547b37" containerName="mariadb-account-create-update" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.346777 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-6bh9x" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.355996 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-6bh9x"] Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.521041 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vnwv\" (UniqueName: \"kubernetes.io/projected/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-kube-api-access-2vnwv\") pod \"dnsmasq-dns-74dc88fc-6bh9x\" (UID: \"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86\") " pod="openstack/dnsmasq-dns-74dc88fc-6bh9x" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.521109 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-config\") pod \"dnsmasq-dns-74dc88fc-6bh9x\" (UID: \"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86\") " pod="openstack/dnsmasq-dns-74dc88fc-6bh9x" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.521227 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-6bh9x\" (UID: \"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86\") " pod="openstack/dnsmasq-dns-74dc88fc-6bh9x" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.521259 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-6bh9x\" (UID: \"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86\") " pod="openstack/dnsmasq-dns-74dc88fc-6bh9x" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.521295 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-dns-svc\") pod \"dnsmasq-dns-74dc88fc-6bh9x\" (UID: \"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86\") " pod="openstack/dnsmasq-dns-74dc88fc-6bh9x" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.585086 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.624604 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-config\") pod \"dnsmasq-dns-74dc88fc-6bh9x\" (UID: \"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86\") " pod="openstack/dnsmasq-dns-74dc88fc-6bh9x" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.624733 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-6bh9x\" (UID: \"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86\") " pod="openstack/dnsmasq-dns-74dc88fc-6bh9x" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.624755 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-6bh9x\" (UID: \"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86\") " pod="openstack/dnsmasq-dns-74dc88fc-6bh9x" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 
22:16:44.624791 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-dns-svc\") pod \"dnsmasq-dns-74dc88fc-6bh9x\" (UID: \"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86\") " pod="openstack/dnsmasq-dns-74dc88fc-6bh9x" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.624855 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vnwv\" (UniqueName: \"kubernetes.io/projected/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-kube-api-access-2vnwv\") pod \"dnsmasq-dns-74dc88fc-6bh9x\" (UID: \"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86\") " pod="openstack/dnsmasq-dns-74dc88fc-6bh9x" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.625821 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-6bh9x\" (UID: \"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86\") " pod="openstack/dnsmasq-dns-74dc88fc-6bh9x" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.625856 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-dns-svc\") pod \"dnsmasq-dns-74dc88fc-6bh9x\" (UID: \"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86\") " pod="openstack/dnsmasq-dns-74dc88fc-6bh9x" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.625882 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-config\") pod \"dnsmasq-dns-74dc88fc-6bh9x\" (UID: \"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86\") " pod="openstack/dnsmasq-dns-74dc88fc-6bh9x" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.626517 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-6bh9x\" (UID: \"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86\") " pod="openstack/dnsmasq-dns-74dc88fc-6bh9x" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.642531 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vnwv\" (UniqueName: \"kubernetes.io/projected/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-kube-api-access-2vnwv\") pod \"dnsmasq-dns-74dc88fc-6bh9x\" (UID: \"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86\") " pod="openstack/dnsmasq-dns-74dc88fc-6bh9x" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.668621 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-6bh9x" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.725501 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2f98425b-65de-48d2-be21-2c443218eacd-tls-assets\") pod \"2f98425b-65de-48d2-be21-2c443218eacd\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.725580 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2f98425b-65de-48d2-be21-2c443218eacd-web-config\") pod \"2f98425b-65de-48d2-be21-2c443218eacd\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.725609 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2f98425b-65de-48d2-be21-2c443218eacd-prometheus-metric-storage-rulefiles-1\") pod \"2f98425b-65de-48d2-be21-2c443218eacd\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.725657 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2f98425b-65de-48d2-be21-2c443218eacd-config-out\") pod \"2f98425b-65de-48d2-be21-2c443218eacd\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.725753 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2f98425b-65de-48d2-be21-2c443218eacd-config\") pod \"2f98425b-65de-48d2-be21-2c443218eacd\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.725777 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22w4f\" (UniqueName: \"kubernetes.io/projected/2f98425b-65de-48d2-be21-2c443218eacd-kube-api-access-22w4f\") pod \"2f98425b-65de-48d2-be21-2c443218eacd\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.725816 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2f98425b-65de-48d2-be21-2c443218eacd-prometheus-metric-storage-rulefiles-2\") pod \"2f98425b-65de-48d2-be21-2c443218eacd\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.725870 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2f98425b-65de-48d2-be21-2c443218eacd-thanos-prometheus-http-client-file\") pod \"2f98425b-65de-48d2-be21-2c443218eacd\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.725917 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/2f98425b-65de-48d2-be21-2c443218eacd-prometheus-metric-storage-rulefiles-0\") pod \"2f98425b-65de-48d2-be21-2c443218eacd\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.726079 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fee83ae0-3c6e-418a-b853-8f63917457f0\") pod \"2f98425b-65de-48d2-be21-2c443218eacd\" (UID: \"2f98425b-65de-48d2-be21-2c443218eacd\") " Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.729142 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f98425b-65de-48d2-be21-2c443218eacd-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "2f98425b-65de-48d2-be21-2c443218eacd" (UID: "2f98425b-65de-48d2-be21-2c443218eacd"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.729874 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f98425b-65de-48d2-be21-2c443218eacd-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "2f98425b-65de-48d2-be21-2c443218eacd" (UID: "2f98425b-65de-48d2-be21-2c443218eacd"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.730357 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f98425b-65de-48d2-be21-2c443218eacd-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "2f98425b-65de-48d2-be21-2c443218eacd" (UID: "2f98425b-65de-48d2-be21-2c443218eacd"). 
InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.731302 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f98425b-65de-48d2-be21-2c443218eacd-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "2f98425b-65de-48d2-be21-2c443218eacd" (UID: "2f98425b-65de-48d2-be21-2c443218eacd"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.731753 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f98425b-65de-48d2-be21-2c443218eacd-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "2f98425b-65de-48d2-be21-2c443218eacd" (UID: "2f98425b-65de-48d2-be21-2c443218eacd"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.732175 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f98425b-65de-48d2-be21-2c443218eacd-kube-api-access-22w4f" (OuterVolumeSpecName: "kube-api-access-22w4f") pod "2f98425b-65de-48d2-be21-2c443218eacd" (UID: "2f98425b-65de-48d2-be21-2c443218eacd"). InnerVolumeSpecName "kube-api-access-22w4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.737249 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f98425b-65de-48d2-be21-2c443218eacd-config-out" (OuterVolumeSpecName: "config-out") pod "2f98425b-65de-48d2-be21-2c443218eacd" (UID: "2f98425b-65de-48d2-be21-2c443218eacd"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.741236 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f98425b-65de-48d2-be21-2c443218eacd-config" (OuterVolumeSpecName: "config") pod "2f98425b-65de-48d2-be21-2c443218eacd" (UID: "2f98425b-65de-48d2-be21-2c443218eacd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.744328 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fee83ae0-3c6e-418a-b853-8f63917457f0" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "2f98425b-65de-48d2-be21-2c443218eacd" (UID: "2f98425b-65de-48d2-be21-2c443218eacd"). InnerVolumeSpecName "pvc-fee83ae0-3c6e-418a-b853-8f63917457f0". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.762329 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f98425b-65de-48d2-be21-2c443218eacd-web-config" (OuterVolumeSpecName: "web-config") pod "2f98425b-65de-48d2-be21-2c443218eacd" (UID: "2f98425b-65de-48d2-be21-2c443218eacd"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.831344 4910 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2f98425b-65de-48d2-be21-2c443218eacd-config-out\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.831374 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2f98425b-65de-48d2-be21-2c443218eacd-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.831383 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22w4f\" (UniqueName: \"kubernetes.io/projected/2f98425b-65de-48d2-be21-2c443218eacd-kube-api-access-22w4f\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.831394 4910 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2f98425b-65de-48d2-be21-2c443218eacd-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.831403 4910 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2f98425b-65de-48d2-be21-2c443218eacd-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.831412 4910 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2f98425b-65de-48d2-be21-2c443218eacd-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.831447 4910 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-fee83ae0-3c6e-418a-b853-8f63917457f0\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fee83ae0-3c6e-418a-b853-8f63917457f0\") on node \"crc\" " Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.831458 4910 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2f98425b-65de-48d2-be21-2c443218eacd-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.831466 4910 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2f98425b-65de-48d2-be21-2c443218eacd-web-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.831475 4910 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2f98425b-65de-48d2-be21-2c443218eacd-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.857121 4910 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.857315 4910 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-fee83ae0-3c6e-418a-b853-8f63917457f0" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fee83ae0-3c6e-418a-b853-8f63917457f0") on node "crc" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.911304 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-t6mjz"] Feb 26 22:16:44 crc kubenswrapper[4910]: E0226 22:16:44.911702 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f98425b-65de-48d2-be21-2c443218eacd" containerName="thanos-sidecar" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.911720 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f98425b-65de-48d2-be21-2c443218eacd" containerName="thanos-sidecar" Feb 26 22:16:44 crc kubenswrapper[4910]: E0226 22:16:44.911738 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f98425b-65de-48d2-be21-2c443218eacd" containerName="init-config-reloader" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.911745 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f98425b-65de-48d2-be21-2c443218eacd" containerName="init-config-reloader" Feb 26 22:16:44 crc kubenswrapper[4910]: E0226 22:16:44.911772 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f98425b-65de-48d2-be21-2c443218eacd" containerName="prometheus" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.911778 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f98425b-65de-48d2-be21-2c443218eacd" containerName="prometheus" Feb 26 22:16:44 crc kubenswrapper[4910]: E0226 22:16:44.911790 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f98425b-65de-48d2-be21-2c443218eacd" containerName="config-reloader" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.911797 4910 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2f98425b-65de-48d2-be21-2c443218eacd" containerName="config-reloader" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.911950 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f98425b-65de-48d2-be21-2c443218eacd" containerName="prometheus" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.911969 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f98425b-65de-48d2-be21-2c443218eacd" containerName="thanos-sidecar" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.911984 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f98425b-65de-48d2-be21-2c443218eacd" containerName="config-reloader" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.912614 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-t6mjz" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.916577 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.922938 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-t6mjz"] Feb 26 22:16:44 crc kubenswrapper[4910]: I0226 22:16:44.943678 4910 reconciler_common.go:293] "Volume detached for volume \"pvc-fee83ae0-3c6e-418a-b853-8f63917457f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fee83ae0-3c6e-418a-b853-8f63917457f0\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.045803 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e42b4f12-ddac-4a7e-8fcf-6ce66da40085-operator-scripts\") pod \"root-account-create-update-t6mjz\" (UID: \"e42b4f12-ddac-4a7e-8fcf-6ce66da40085\") " pod="openstack/root-account-create-update-t6mjz" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 
22:16:45.046122 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lfbf\" (UniqueName: \"kubernetes.io/projected/e42b4f12-ddac-4a7e-8fcf-6ce66da40085-kube-api-access-4lfbf\") pod \"root-account-create-update-t6mjz\" (UID: \"e42b4f12-ddac-4a7e-8fcf-6ce66da40085\") " pod="openstack/root-account-create-update-t6mjz" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.092820 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"30b027eb-e942-4121-aebc-776d616b902e","Type":"ContainerStarted","Data":"1d32c34482d24b82866b727d0d0938ccebe5396bf0348513499dcbe68b4395e1"} Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.092872 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"30b027eb-e942-4121-aebc-776d616b902e","Type":"ContainerStarted","Data":"b6280778219d5747644725f1011e84b375965f0884390fec8eec6f61e9d79c25"} Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.095458 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.095541 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2f98425b-65de-48d2-be21-2c443218eacd","Type":"ContainerDied","Data":"19555b113776243ace5d71e3f3e5aff3a23663e74fba2032689dbc7a10d92838"} Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.095583 4910 scope.go:117] "RemoveContainer" containerID="1a437785b48de08d2d07c586be980cc51f97eeb0319b64417c9d1493655a4786" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.118903 4910 scope.go:117] "RemoveContainer" containerID="ba9e28ab288bac4d0a62f2effca6d14bea90aeac5171ed430bdfcac17a5407be" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.141280 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.147672 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e42b4f12-ddac-4a7e-8fcf-6ce66da40085-operator-scripts\") pod \"root-account-create-update-t6mjz\" (UID: \"e42b4f12-ddac-4a7e-8fcf-6ce66da40085\") " pod="openstack/root-account-create-update-t6mjz" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.147767 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lfbf\" (UniqueName: \"kubernetes.io/projected/e42b4f12-ddac-4a7e-8fcf-6ce66da40085-kube-api-access-4lfbf\") pod \"root-account-create-update-t6mjz\" (UID: \"e42b4f12-ddac-4a7e-8fcf-6ce66da40085\") " pod="openstack/root-account-create-update-t6mjz" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.148542 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e42b4f12-ddac-4a7e-8fcf-6ce66da40085-operator-scripts\") pod 
\"root-account-create-update-t6mjz\" (UID: \"e42b4f12-ddac-4a7e-8fcf-6ce66da40085\") " pod="openstack/root-account-create-update-t6mjz" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.160363 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.167017 4910 scope.go:117] "RemoveContainer" containerID="dc36ca0064009a273deeee99f6ddba18fa15369d2698389d7c803044ccba12cf" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.173767 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lfbf\" (UniqueName: \"kubernetes.io/projected/e42b4f12-ddac-4a7e-8fcf-6ce66da40085-kube-api-access-4lfbf\") pod \"root-account-create-update-t6mjz\" (UID: \"e42b4f12-ddac-4a7e-8fcf-6ce66da40085\") " pod="openstack/root-account-create-update-t6mjz" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.174897 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.178945 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.192962 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.193185 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.193203 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.193274 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.193325 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.193529 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.193734 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-v6djc" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.193824 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.197434 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.202814 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.232103 4910 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/root-account-create-update-t6mjz" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.260113 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-6bh9x"] Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.358980 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5ac37f44-e173-4927-b8ea-44741aa983c0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.359062 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5ac37f44-e173-4927-b8ea-44741aa983c0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.359147 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/5ac37f44-e173-4927-b8ea-44741aa983c0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.359201 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5ac37f44-e173-4927-b8ea-44741aa983c0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 
22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.359302 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5ac37f44-e173-4927-b8ea-44741aa983c0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.359329 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/5ac37f44-e173-4927-b8ea-44741aa983c0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.359349 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fee83ae0-3c6e-418a-b853-8f63917457f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fee83ae0-3c6e-418a-b853-8f63917457f0\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.359365 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gpg2\" (UniqueName: \"kubernetes.io/projected/5ac37f44-e173-4927-b8ea-44741aa983c0-kube-api-access-9gpg2\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.359415 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/5ac37f44-e173-4927-b8ea-44741aa983c0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.359440 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5ac37f44-e173-4927-b8ea-44741aa983c0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.359496 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ac37f44-e173-4927-b8ea-44741aa983c0-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.359574 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5ac37f44-e173-4927-b8ea-44741aa983c0-config\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.359705 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5ac37f44-e173-4927-b8ea-44741aa983c0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc 
kubenswrapper[4910]: I0226 22:16:45.446806 4910 scope.go:117] "RemoveContainer" containerID="8b868dc2396218530dff7210a284cc3ac6f666370d5b0e57cb6f84461df2591c" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.461473 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5ac37f44-e173-4927-b8ea-44741aa983c0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.461529 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5ac37f44-e173-4927-b8ea-44741aa983c0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.461579 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/5ac37f44-e173-4927-b8ea-44741aa983c0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.461601 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5ac37f44-e173-4927-b8ea-44741aa983c0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.461632 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5ac37f44-e173-4927-b8ea-44741aa983c0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.461654 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/5ac37f44-e173-4927-b8ea-44741aa983c0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.461673 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fee83ae0-3c6e-418a-b853-8f63917457f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fee83ae0-3c6e-418a-b853-8f63917457f0\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.461689 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gpg2\" (UniqueName: \"kubernetes.io/projected/5ac37f44-e173-4927-b8ea-44741aa983c0-kube-api-access-9gpg2\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.461723 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5ac37f44-e173-4927-b8ea-44741aa983c0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.461743 4910 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5ac37f44-e173-4927-b8ea-44741aa983c0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.461778 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ac37f44-e173-4927-b8ea-44741aa983c0-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.461812 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5ac37f44-e173-4927-b8ea-44741aa983c0-config\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.461832 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5ac37f44-e173-4927-b8ea-44741aa983c0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.462568 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5ac37f44-e173-4927-b8ea-44741aa983c0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") 
" pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.462804 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/5ac37f44-e173-4927-b8ea-44741aa983c0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.463314 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/5ac37f44-e173-4927-b8ea-44741aa983c0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.467388 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ac37f44-e173-4927-b8ea-44741aa983c0-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.470146 4910 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.470206 4910 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fee83ae0-3c6e-418a-b853-8f63917457f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fee83ae0-3c6e-418a-b853-8f63917457f0\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b543beeb658b5e1c7e7ad53857c548a6d13c1d75fd19a37886a029b0ffecc6a4/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.471294 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5ac37f44-e173-4927-b8ea-44741aa983c0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.471927 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5ac37f44-e173-4927-b8ea-44741aa983c0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.473030 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5ac37f44-e173-4927-b8ea-44741aa983c0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.473413 4910 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5ac37f44-e173-4927-b8ea-44741aa983c0-config\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.473674 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5ac37f44-e173-4927-b8ea-44741aa983c0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.482280 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gpg2\" (UniqueName: \"kubernetes.io/projected/5ac37f44-e173-4927-b8ea-44741aa983c0-kube-api-access-9gpg2\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.484965 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5ac37f44-e173-4927-b8ea-44741aa983c0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.502522 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5ac37f44-e173-4927-b8ea-44741aa983c0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.546573 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-fee83ae0-3c6e-418a-b853-8f63917457f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fee83ae0-3c6e-418a-b853-8f63917457f0\") pod \"prometheus-metric-storage-0\" (UID: \"5ac37f44-e173-4927-b8ea-44741aa983c0\") " pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.599632 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-szq7t" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.664004 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf9jm\" (UniqueName: \"kubernetes.io/projected/0230e94e-a757-4c5f-afed-0f4d1e769f7a-kube-api-access-bf9jm\") pod \"0230e94e-a757-4c5f-afed-0f4d1e769f7a\" (UID: \"0230e94e-a757-4c5f-afed-0f4d1e769f7a\") " Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.664380 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0230e94e-a757-4c5f-afed-0f4d1e769f7a-combined-ca-bundle\") pod \"0230e94e-a757-4c5f-afed-0f4d1e769f7a\" (UID: \"0230e94e-a757-4c5f-afed-0f4d1e769f7a\") " Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.664457 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0230e94e-a757-4c5f-afed-0f4d1e769f7a-config-data\") pod \"0230e94e-a757-4c5f-afed-0f4d1e769f7a\" (UID: \"0230e94e-a757-4c5f-afed-0f4d1e769f7a\") " Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.669643 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0230e94e-a757-4c5f-afed-0f4d1e769f7a-kube-api-access-bf9jm" (OuterVolumeSpecName: "kube-api-access-bf9jm") pod "0230e94e-a757-4c5f-afed-0f4d1e769f7a" (UID: "0230e94e-a757-4c5f-afed-0f4d1e769f7a"). InnerVolumeSpecName "kube-api-access-bf9jm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.711057 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0230e94e-a757-4c5f-afed-0f4d1e769f7a-config-data" (OuterVolumeSpecName: "config-data") pod "0230e94e-a757-4c5f-afed-0f4d1e769f7a" (UID: "0230e94e-a757-4c5f-afed-0f4d1e769f7a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.713616 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0230e94e-a757-4c5f-afed-0f4d1e769f7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0230e94e-a757-4c5f-afed-0f4d1e769f7a" (UID: "0230e94e-a757-4c5f-afed-0f4d1e769f7a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.736503 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.756485 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-t6mjz"] Feb 26 22:16:45 crc kubenswrapper[4910]: W0226 22:16:45.761731 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode42b4f12_ddac_4a7e_8fcf_6ce66da40085.slice/crio-d051d0112db8598588fca3ca861a170485a326ea64d5811add07b1ba8c4d21b5 WatchSource:0}: Error finding container d051d0112db8598588fca3ca861a170485a326ea64d5811add07b1ba8c4d21b5: Status 404 returned error can't find the container with id d051d0112db8598588fca3ca861a170485a326ea64d5811add07b1ba8c4d21b5 Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.766872 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf9jm\" (UniqueName: \"kubernetes.io/projected/0230e94e-a757-4c5f-afed-0f4d1e769f7a-kube-api-access-bf9jm\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.766901 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0230e94e-a757-4c5f-afed-0f4d1e769f7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.766911 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0230e94e-a757-4c5f-afed-0f4d1e769f7a-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:45 crc kubenswrapper[4910]: I0226 22:16:45.920433 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f98425b-65de-48d2-be21-2c443218eacd" path="/var/lib/kubelet/pods/2f98425b-65de-48d2-be21-2c443218eacd/volumes" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.108300 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-szq7t" 
event={"ID":"0230e94e-a757-4c5f-afed-0f4d1e769f7a","Type":"ContainerDied","Data":"a74455a5f47d757a00e3c86d438938b937beacd350b0be3fd0cfda74b81d21c4"} Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.108352 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a74455a5f47d757a00e3c86d438938b937beacd350b0be3fd0cfda74b81d21c4" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.108435 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-szq7t" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.112105 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-6bh9x" event={"ID":"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86","Type":"ContainerDied","Data":"f489371b117fa1a4cc97bdea7fc723995c555b4def0399052eac3827164dd2d9"} Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.112265 4910 generic.go:334] "Generic (PLEG): container finished" podID="ebd82dda-1c7e-4f54-9d65-2bd8921fbf86" containerID="f489371b117fa1a4cc97bdea7fc723995c555b4def0399052eac3827164dd2d9" exitCode=0 Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.112424 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-6bh9x" event={"ID":"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86","Type":"ContainerStarted","Data":"f518f60bffb99c7cf6e9eecbd0db5ff00b4119b8235e601ab28816e87fd3c774"} Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.115517 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t6mjz" event={"ID":"e42b4f12-ddac-4a7e-8fcf-6ce66da40085","Type":"ContainerStarted","Data":"1ddc8fe97c6e8d049929c276f532f5e90094b29fb205f36691dbacfec23444e6"} Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.115559 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t6mjz" 
event={"ID":"e42b4f12-ddac-4a7e-8fcf-6ce66da40085","Type":"ContainerStarted","Data":"d051d0112db8598588fca3ca861a170485a326ea64d5811add07b1ba8c4d21b5"} Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.121041 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"30b027eb-e942-4121-aebc-776d616b902e","Type":"ContainerStarted","Data":"e0f422d739cfac723653245f85014505c1b9eec1ab86fb41e4185cf305ecf32f"} Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.121086 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"30b027eb-e942-4121-aebc-776d616b902e","Type":"ContainerStarted","Data":"c7ccada9b5e5b687054c444303db47d03a16f0c7963e62b1e2c69983ebb19c64"} Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.191144 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.345570 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-6bh9x"] Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.357205 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xkqnq"] Feb 26 22:16:46 crc kubenswrapper[4910]: E0226 22:16:46.357800 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0230e94e-a757-4c5f-afed-0f4d1e769f7a" containerName="keystone-db-sync" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.360327 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="0230e94e-a757-4c5f-afed-0f4d1e769f7a" containerName="keystone-db-sync" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.360820 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="0230e94e-a757-4c5f-afed-0f4d1e769f7a" containerName="keystone-db-sync" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.364869 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xkqnq" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.364976 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d5679f497-dkpbm"] Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.375447 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5679f497-dkpbm" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.379245 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.381773 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.381977 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.382098 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.382275 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-r29m2" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.401684 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xkqnq"] Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.437854 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d5679f497-dkpbm"] Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.492095 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/097aefae-b601-4bd6-83d7-0fc82ae942c1-dns-svc\") pod \"dnsmasq-dns-7d5679f497-dkpbm\" (UID: \"097aefae-b601-4bd6-83d7-0fc82ae942c1\") " pod="openstack/dnsmasq-dns-7d5679f497-dkpbm" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 
22:16:46.492144 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xssqm\" (UniqueName: \"kubernetes.io/projected/097aefae-b601-4bd6-83d7-0fc82ae942c1-kube-api-access-xssqm\") pod \"dnsmasq-dns-7d5679f497-dkpbm\" (UID: \"097aefae-b601-4bd6-83d7-0fc82ae942c1\") " pod="openstack/dnsmasq-dns-7d5679f497-dkpbm" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.492215 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-credential-keys\") pod \"keystone-bootstrap-xkqnq\" (UID: \"d9559b53-2f73-4123-8bd9-0848adca6e34\") " pod="openstack/keystone-bootstrap-xkqnq" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.492236 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/097aefae-b601-4bd6-83d7-0fc82ae942c1-config\") pod \"dnsmasq-dns-7d5679f497-dkpbm\" (UID: \"097aefae-b601-4bd6-83d7-0fc82ae942c1\") " pod="openstack/dnsmasq-dns-7d5679f497-dkpbm" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.492297 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-scripts\") pod \"keystone-bootstrap-xkqnq\" (UID: \"d9559b53-2f73-4123-8bd9-0848adca6e34\") " pod="openstack/keystone-bootstrap-xkqnq" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.492354 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/097aefae-b601-4bd6-83d7-0fc82ae942c1-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5679f497-dkpbm\" (UID: \"097aefae-b601-4bd6-83d7-0fc82ae942c1\") " pod="openstack/dnsmasq-dns-7d5679f497-dkpbm" Feb 26 22:16:46 crc kubenswrapper[4910]: 
I0226 22:16:46.492384 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nchq4\" (UniqueName: \"kubernetes.io/projected/d9559b53-2f73-4123-8bd9-0848adca6e34-kube-api-access-nchq4\") pod \"keystone-bootstrap-xkqnq\" (UID: \"d9559b53-2f73-4123-8bd9-0848adca6e34\") " pod="openstack/keystone-bootstrap-xkqnq" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.492418 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-combined-ca-bundle\") pod \"keystone-bootstrap-xkqnq\" (UID: \"d9559b53-2f73-4123-8bd9-0848adca6e34\") " pod="openstack/keystone-bootstrap-xkqnq" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.492447 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-config-data\") pod \"keystone-bootstrap-xkqnq\" (UID: \"d9559b53-2f73-4123-8bd9-0848adca6e34\") " pod="openstack/keystone-bootstrap-xkqnq" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.492506 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-fernet-keys\") pod \"keystone-bootstrap-xkqnq\" (UID: \"d9559b53-2f73-4123-8bd9-0848adca6e34\") " pod="openstack/keystone-bootstrap-xkqnq" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.492539 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/097aefae-b601-4bd6-83d7-0fc82ae942c1-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5679f497-dkpbm\" (UID: \"097aefae-b601-4bd6-83d7-0fc82ae942c1\") " pod="openstack/dnsmasq-dns-7d5679f497-dkpbm" Feb 26 22:16:46 crc 
kubenswrapper[4910]: I0226 22:16:46.595271 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xssqm\" (UniqueName: \"kubernetes.io/projected/097aefae-b601-4bd6-83d7-0fc82ae942c1-kube-api-access-xssqm\") pod \"dnsmasq-dns-7d5679f497-dkpbm\" (UID: \"097aefae-b601-4bd6-83d7-0fc82ae942c1\") " pod="openstack/dnsmasq-dns-7d5679f497-dkpbm" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.595316 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/097aefae-b601-4bd6-83d7-0fc82ae942c1-dns-svc\") pod \"dnsmasq-dns-7d5679f497-dkpbm\" (UID: \"097aefae-b601-4bd6-83d7-0fc82ae942c1\") " pod="openstack/dnsmasq-dns-7d5679f497-dkpbm" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.595347 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-credential-keys\") pod \"keystone-bootstrap-xkqnq\" (UID: \"d9559b53-2f73-4123-8bd9-0848adca6e34\") " pod="openstack/keystone-bootstrap-xkqnq" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.595366 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/097aefae-b601-4bd6-83d7-0fc82ae942c1-config\") pod \"dnsmasq-dns-7d5679f497-dkpbm\" (UID: \"097aefae-b601-4bd6-83d7-0fc82ae942c1\") " pod="openstack/dnsmasq-dns-7d5679f497-dkpbm" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.595406 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-scripts\") pod \"keystone-bootstrap-xkqnq\" (UID: \"d9559b53-2f73-4123-8bd9-0848adca6e34\") " pod="openstack/keystone-bootstrap-xkqnq" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.595445 4910 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/097aefae-b601-4bd6-83d7-0fc82ae942c1-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5679f497-dkpbm\" (UID: \"097aefae-b601-4bd6-83d7-0fc82ae942c1\") " pod="openstack/dnsmasq-dns-7d5679f497-dkpbm" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.595464 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nchq4\" (UniqueName: \"kubernetes.io/projected/d9559b53-2f73-4123-8bd9-0848adca6e34-kube-api-access-nchq4\") pod \"keystone-bootstrap-xkqnq\" (UID: \"d9559b53-2f73-4123-8bd9-0848adca6e34\") " pod="openstack/keystone-bootstrap-xkqnq" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.595491 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-combined-ca-bundle\") pod \"keystone-bootstrap-xkqnq\" (UID: \"d9559b53-2f73-4123-8bd9-0848adca6e34\") " pod="openstack/keystone-bootstrap-xkqnq" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.595513 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-config-data\") pod \"keystone-bootstrap-xkqnq\" (UID: \"d9559b53-2f73-4123-8bd9-0848adca6e34\") " pod="openstack/keystone-bootstrap-xkqnq" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.595566 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-fernet-keys\") pod \"keystone-bootstrap-xkqnq\" (UID: \"d9559b53-2f73-4123-8bd9-0848adca6e34\") " pod="openstack/keystone-bootstrap-xkqnq" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.595602 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/097aefae-b601-4bd6-83d7-0fc82ae942c1-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5679f497-dkpbm\" (UID: \"097aefae-b601-4bd6-83d7-0fc82ae942c1\") " pod="openstack/dnsmasq-dns-7d5679f497-dkpbm" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.596381 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/097aefae-b601-4bd6-83d7-0fc82ae942c1-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5679f497-dkpbm\" (UID: \"097aefae-b601-4bd6-83d7-0fc82ae942c1\") " pod="openstack/dnsmasq-dns-7d5679f497-dkpbm" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.596886 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/097aefae-b601-4bd6-83d7-0fc82ae942c1-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5679f497-dkpbm\" (UID: \"097aefae-b601-4bd6-83d7-0fc82ae942c1\") " pod="openstack/dnsmasq-dns-7d5679f497-dkpbm" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.601418 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/097aefae-b601-4bd6-83d7-0fc82ae942c1-dns-svc\") pod \"dnsmasq-dns-7d5679f497-dkpbm\" (UID: \"097aefae-b601-4bd6-83d7-0fc82ae942c1\") " pod="openstack/dnsmasq-dns-7d5679f497-dkpbm" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.605895 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/097aefae-b601-4bd6-83d7-0fc82ae942c1-config\") pod \"dnsmasq-dns-7d5679f497-dkpbm\" (UID: \"097aefae-b601-4bd6-83d7-0fc82ae942c1\") " pod="openstack/dnsmasq-dns-7d5679f497-dkpbm" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.728037 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-6xcs7"] Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.729145 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-6xcs7" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.743617 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-ll6bd" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.743793 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.743896 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.793833 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-credential-keys\") pod \"keystone-bootstrap-xkqnq\" (UID: \"d9559b53-2f73-4123-8bd9-0848adca6e34\") " pod="openstack/keystone-bootstrap-xkqnq" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.795081 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-fernet-keys\") pod \"keystone-bootstrap-xkqnq\" (UID: \"d9559b53-2f73-4123-8bd9-0848adca6e34\") " pod="openstack/keystone-bootstrap-xkqnq" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.799300 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-config-data\") pod \"keystone-bootstrap-xkqnq\" (UID: \"d9559b53-2f73-4123-8bd9-0848adca6e34\") " pod="openstack/keystone-bootstrap-xkqnq" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.801536 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-scripts\") pod \"keystone-bootstrap-xkqnq\" (UID: \"d9559b53-2f73-4123-8bd9-0848adca6e34\") " 
pod="openstack/keystone-bootstrap-xkqnq" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.801763 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-6xcs7"] Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.802095 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nchq4\" (UniqueName: \"kubernetes.io/projected/d9559b53-2f73-4123-8bd9-0848adca6e34-kube-api-access-nchq4\") pod \"keystone-bootstrap-xkqnq\" (UID: \"d9559b53-2f73-4123-8bd9-0848adca6e34\") " pod="openstack/keystone-bootstrap-xkqnq" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.803846 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xssqm\" (UniqueName: \"kubernetes.io/projected/097aefae-b601-4bd6-83d7-0fc82ae942c1-kube-api-access-xssqm\") pod \"dnsmasq-dns-7d5679f497-dkpbm\" (UID: \"097aefae-b601-4bd6-83d7-0fc82ae942c1\") " pod="openstack/dnsmasq-dns-7d5679f497-dkpbm" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.804463 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-combined-ca-bundle\") pod \"keystone-bootstrap-xkqnq\" (UID: \"d9559b53-2f73-4123-8bd9-0848adca6e34\") " pod="openstack/keystone-bootstrap-xkqnq" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.805061 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d861622f-ed9a-4709-824c-bb291c4639a5-etc-machine-id\") pod \"cinder-db-sync-6xcs7\" (UID: \"d861622f-ed9a-4709-824c-bb291c4639a5\") " pod="openstack/cinder-db-sync-6xcs7" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.805112 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d861622f-ed9a-4709-824c-bb291c4639a5-scripts\") pod \"cinder-db-sync-6xcs7\" (UID: \"d861622f-ed9a-4709-824c-bb291c4639a5\") " pod="openstack/cinder-db-sync-6xcs7" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.805149 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d861622f-ed9a-4709-824c-bb291c4639a5-config-data\") pod \"cinder-db-sync-6xcs7\" (UID: \"d861622f-ed9a-4709-824c-bb291c4639a5\") " pod="openstack/cinder-db-sync-6xcs7" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.805203 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d861622f-ed9a-4709-824c-bb291c4639a5-db-sync-config-data\") pod \"cinder-db-sync-6xcs7\" (UID: \"d861622f-ed9a-4709-824c-bb291c4639a5\") " pod="openstack/cinder-db-sync-6xcs7" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.805283 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d861622f-ed9a-4709-824c-bb291c4639a5-combined-ca-bundle\") pod \"cinder-db-sync-6xcs7\" (UID: \"d861622f-ed9a-4709-824c-bb291c4639a5\") " pod="openstack/cinder-db-sync-6xcs7" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.805343 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ct4z\" (UniqueName: \"kubernetes.io/projected/d861622f-ed9a-4709-824c-bb291c4639a5-kube-api-access-6ct4z\") pod \"cinder-db-sync-6xcs7\" (UID: \"d861622f-ed9a-4709-824c-bb291c4639a5\") " pod="openstack/cinder-db-sync-6xcs7" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.851649 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xkqnq" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.856265 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.858179 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5679f497-dkpbm" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.859639 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.871504 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.878773 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.906902 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ct4z\" (UniqueName: \"kubernetes.io/projected/d861622f-ed9a-4709-824c-bb291c4639a5-kube-api-access-6ct4z\") pod \"cinder-db-sync-6xcs7\" (UID: \"d861622f-ed9a-4709-824c-bb291c4639a5\") " pod="openstack/cinder-db-sync-6xcs7" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.906973 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d861622f-ed9a-4709-824c-bb291c4639a5-etc-machine-id\") pod \"cinder-db-sync-6xcs7\" (UID: \"d861622f-ed9a-4709-824c-bb291c4639a5\") " pod="openstack/cinder-db-sync-6xcs7" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.907005 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d861622f-ed9a-4709-824c-bb291c4639a5-scripts\") pod \"cinder-db-sync-6xcs7\" (UID: \"d861622f-ed9a-4709-824c-bb291c4639a5\") " 
pod="openstack/cinder-db-sync-6xcs7" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.907038 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d861622f-ed9a-4709-824c-bb291c4639a5-config-data\") pod \"cinder-db-sync-6xcs7\" (UID: \"d861622f-ed9a-4709-824c-bb291c4639a5\") " pod="openstack/cinder-db-sync-6xcs7" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.907066 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d861622f-ed9a-4709-824c-bb291c4639a5-db-sync-config-data\") pod \"cinder-db-sync-6xcs7\" (UID: \"d861622f-ed9a-4709-824c-bb291c4639a5\") " pod="openstack/cinder-db-sync-6xcs7" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.907120 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d861622f-ed9a-4709-824c-bb291c4639a5-combined-ca-bundle\") pod \"cinder-db-sync-6xcs7\" (UID: \"d861622f-ed9a-4709-824c-bb291c4639a5\") " pod="openstack/cinder-db-sync-6xcs7" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.915215 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-6xmvf"] Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.916567 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-6xmvf" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.917053 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d861622f-ed9a-4709-824c-bb291c4639a5-etc-machine-id\") pod \"cinder-db-sync-6xcs7\" (UID: \"d861622f-ed9a-4709-824c-bb291c4639a5\") " pod="openstack/cinder-db-sync-6xcs7" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.925862 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d861622f-ed9a-4709-824c-bb291c4639a5-config-data\") pod \"cinder-db-sync-6xcs7\" (UID: \"d861622f-ed9a-4709-824c-bb291c4639a5\") " pod="openstack/cinder-db-sync-6xcs7" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.929573 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d861622f-ed9a-4709-824c-bb291c4639a5-scripts\") pod \"cinder-db-sync-6xcs7\" (UID: \"d861622f-ed9a-4709-824c-bb291c4639a5\") " pod="openstack/cinder-db-sync-6xcs7" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.929709 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.929896 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-7x9kp" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.929897 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.935878 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d861622f-ed9a-4709-824c-bb291c4639a5-db-sync-config-data\") pod \"cinder-db-sync-6xcs7\" (UID: \"d861622f-ed9a-4709-824c-bb291c4639a5\") " 
pod="openstack/cinder-db-sync-6xcs7" Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.940411 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:16:46 crc kubenswrapper[4910]: I0226 22:16:46.943033 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d861622f-ed9a-4709-824c-bb291c4639a5-combined-ca-bundle\") pod \"cinder-db-sync-6xcs7\" (UID: \"d861622f-ed9a-4709-824c-bb291c4639a5\") " pod="openstack/cinder-db-sync-6xcs7" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.011454 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6xmvf"] Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.015077 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjgvm\" (UniqueName: \"kubernetes.io/projected/0474fe2b-3094-4fd5-8f3a-1e9124acb82a-kube-api-access-zjgvm\") pod \"neutron-db-sync-6xmvf\" (UID: \"0474fe2b-3094-4fd5-8f3a-1e9124acb82a\") " pod="openstack/neutron-db-sync-6xmvf" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.015135 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-config-data\") pod \"ceilometer-0\" (UID: \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\") " pod="openstack/ceilometer-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.015174 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0474fe2b-3094-4fd5-8f3a-1e9124acb82a-combined-ca-bundle\") pod \"neutron-db-sync-6xmvf\" (UID: \"0474fe2b-3094-4fd5-8f3a-1e9124acb82a\") " pod="openstack/neutron-db-sync-6xmvf" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.015196 4910 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\") " pod="openstack/ceilometer-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.015226 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-scripts\") pod \"ceilometer-0\" (UID: \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\") " pod="openstack/ceilometer-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.015253 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7fkk\" (UniqueName: \"kubernetes.io/projected/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-kube-api-access-r7fkk\") pod \"ceilometer-0\" (UID: \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\") " pod="openstack/ceilometer-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.015295 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0474fe2b-3094-4fd5-8f3a-1e9124acb82a-config\") pod \"neutron-db-sync-6xmvf\" (UID: \"0474fe2b-3094-4fd5-8f3a-1e9124acb82a\") " pod="openstack/neutron-db-sync-6xmvf" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.015363 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-log-httpd\") pod \"ceilometer-0\" (UID: \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\") " pod="openstack/ceilometer-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.015383 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-run-httpd\") pod \"ceilometer-0\" (UID: \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\") " pod="openstack/ceilometer-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.015419 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\") " pod="openstack/ceilometer-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.027729 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ct4z\" (UniqueName: \"kubernetes.io/projected/d861622f-ed9a-4709-824c-bb291c4639a5-kube-api-access-6ct4z\") pod \"cinder-db-sync-6xcs7\" (UID: \"d861622f-ed9a-4709-824c-bb291c4639a5\") " pod="openstack/cinder-db-sync-6xcs7" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.051347 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-lxj26"] Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.070301 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6xcs7" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.072652 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-lxj26" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.082098 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.082228 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-wh59c" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.116697 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-log-httpd\") pod \"ceilometer-0\" (UID: \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\") " pod="openstack/ceilometer-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.116730 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-run-httpd\") pod \"ceilometer-0\" (UID: \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\") " pod="openstack/ceilometer-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.116756 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\") " pod="openstack/ceilometer-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.116815 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjgvm\" (UniqueName: \"kubernetes.io/projected/0474fe2b-3094-4fd5-8f3a-1e9124acb82a-kube-api-access-zjgvm\") pod \"neutron-db-sync-6xmvf\" (UID: \"0474fe2b-3094-4fd5-8f3a-1e9124acb82a\") " pod="openstack/neutron-db-sync-6xmvf" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.116839 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0474fe2b-3094-4fd5-8f3a-1e9124acb82a-combined-ca-bundle\") pod \"neutron-db-sync-6xmvf\" (UID: \"0474fe2b-3094-4fd5-8f3a-1e9124acb82a\") " pod="openstack/neutron-db-sync-6xmvf" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.116853 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-config-data\") pod \"ceilometer-0\" (UID: \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\") " pod="openstack/ceilometer-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.116868 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\") " pod="openstack/ceilometer-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.116894 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-scripts\") pod \"ceilometer-0\" (UID: \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\") " pod="openstack/ceilometer-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.116912 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7fkk\" (UniqueName: \"kubernetes.io/projected/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-kube-api-access-r7fkk\") pod \"ceilometer-0\" (UID: \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\") " pod="openstack/ceilometer-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.116945 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0474fe2b-3094-4fd5-8f3a-1e9124acb82a-config\") pod \"neutron-db-sync-6xmvf\" (UID: \"0474fe2b-3094-4fd5-8f3a-1e9124acb82a\") " 
pod="openstack/neutron-db-sync-6xmvf" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.126401 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-run-httpd\") pod \"ceilometer-0\" (UID: \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\") " pod="openstack/ceilometer-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.126630 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-log-httpd\") pod \"ceilometer-0\" (UID: \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\") " pod="openstack/ceilometer-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.126922 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-lxj26"] Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.130255 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-scripts\") pod \"ceilometer-0\" (UID: \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\") " pod="openstack/ceilometer-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.136555 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\") " pod="openstack/ceilometer-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.137322 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\") " pod="openstack/ceilometer-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.141337 4910 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0474fe2b-3094-4fd5-8f3a-1e9124acb82a-config\") pod \"neutron-db-sync-6xmvf\" (UID: \"0474fe2b-3094-4fd5-8f3a-1e9124acb82a\") " pod="openstack/neutron-db-sync-6xmvf" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.144404 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-config-data\") pod \"ceilometer-0\" (UID: \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\") " pod="openstack/ceilometer-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.144662 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0474fe2b-3094-4fd5-8f3a-1e9124acb82a-combined-ca-bundle\") pod \"neutron-db-sync-6xmvf\" (UID: \"0474fe2b-3094-4fd5-8f3a-1e9124acb82a\") " pod="openstack/neutron-db-sync-6xmvf" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.163270 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7fkk\" (UniqueName: \"kubernetes.io/projected/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-kube-api-access-r7fkk\") pod \"ceilometer-0\" (UID: \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\") " pod="openstack/ceilometer-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.165279 4910 generic.go:334] "Generic (PLEG): container finished" podID="e42b4f12-ddac-4a7e-8fcf-6ce66da40085" containerID="1ddc8fe97c6e8d049929c276f532f5e90094b29fb205f36691dbacfec23444e6" exitCode=0 Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.165381 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t6mjz" event={"ID":"e42b4f12-ddac-4a7e-8fcf-6ce66da40085","Type":"ContainerDied","Data":"1ddc8fe97c6e8d049929c276f532f5e90094b29fb205f36691dbacfec23444e6"} Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.166066 4910 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjgvm\" (UniqueName: \"kubernetes.io/projected/0474fe2b-3094-4fd5-8f3a-1e9124acb82a-kube-api-access-zjgvm\") pod \"neutron-db-sync-6xmvf\" (UID: \"0474fe2b-3094-4fd5-8f3a-1e9124acb82a\") " pod="openstack/neutron-db-sync-6xmvf" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.168336 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-68pwg"] Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.169630 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5ac37f44-e173-4927-b8ea-44741aa983c0","Type":"ContainerStarted","Data":"66c08f54d562250140640d9f601d277161d327c5e5a542a7b8aa1ed570abbe07"} Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.169765 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-68pwg" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.173955 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.174265 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-2pm9n" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.174307 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.174271 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.177263 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-6bh9x" event={"ID":"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86","Type":"ContainerStarted","Data":"a27e80fb9aa72c36187abd7a286c5c33f48c81c60b997174786b4c7a2eafbe0a"} Feb 26 22:16:47 crc 
kubenswrapper[4910]: I0226 22:16:47.177449 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74dc88fc-6bh9x" podUID="ebd82dda-1c7e-4f54-9d65-2bd8921fbf86" containerName="dnsmasq-dns" containerID="cri-o://a27e80fb9aa72c36187abd7a286c5c33f48c81c60b997174786b4c7a2eafbe0a" gracePeriod=10 Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.177782 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74dc88fc-6bh9x" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.188992 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-2hp89"] Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.190498 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2hp89" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.193766 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2mnfl" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.200228 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.203552 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.219046 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dccm\" (UniqueName: \"kubernetes.io/projected/eeb12d5b-0ec7-48d5-b1ef-9e378c030b75-kube-api-access-7dccm\") pod \"barbican-db-sync-lxj26\" (UID: \"eeb12d5b-0ec7-48d5-b1ef-9e378c030b75\") " pod="openstack/barbican-db-sync-lxj26" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.219510 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/865f4842-373e-4bc9-98cd-4ceabb03b9f9-combined-ca-bundle\") pod \"cloudkitty-db-sync-68pwg\" (UID: \"865f4842-373e-4bc9-98cd-4ceabb03b9f9\") " pod="openstack/cloudkitty-db-sync-68pwg" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.219575 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb12d5b-0ec7-48d5-b1ef-9e378c030b75-combined-ca-bundle\") pod \"barbican-db-sync-lxj26\" (UID: \"eeb12d5b-0ec7-48d5-b1ef-9e378c030b75\") " pod="openstack/barbican-db-sync-lxj26" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.219603 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/865f4842-373e-4bc9-98cd-4ceabb03b9f9-scripts\") pod \"cloudkitty-db-sync-68pwg\" (UID: \"865f4842-373e-4bc9-98cd-4ceabb03b9f9\") " pod="openstack/cloudkitty-db-sync-68pwg" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.219756 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.220149 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eeb12d5b-0ec7-48d5-b1ef-9e378c030b75-db-sync-config-data\") pod \"barbican-db-sync-lxj26\" (UID: \"eeb12d5b-0ec7-48d5-b1ef-9e378c030b75\") " pod="openstack/barbican-db-sync-lxj26" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.220244 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqn9g\" (UniqueName: \"kubernetes.io/projected/865f4842-373e-4bc9-98cd-4ceabb03b9f9-kube-api-access-tqn9g\") pod \"cloudkitty-db-sync-68pwg\" (UID: \"865f4842-373e-4bc9-98cd-4ceabb03b9f9\") " pod="openstack/cloudkitty-db-sync-68pwg" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.220491 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/865f4842-373e-4bc9-98cd-4ceabb03b9f9-certs\") pod \"cloudkitty-db-sync-68pwg\" (UID: \"865f4842-373e-4bc9-98cd-4ceabb03b9f9\") " pod="openstack/cloudkitty-db-sync-68pwg" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.220613 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/865f4842-373e-4bc9-98cd-4ceabb03b9f9-config-data\") pod \"cloudkitty-db-sync-68pwg\" (UID: \"865f4842-373e-4bc9-98cd-4ceabb03b9f9\") " pod="openstack/cloudkitty-db-sync-68pwg" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.224504 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-68pwg"] Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.247618 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2hp89"] Feb 26 22:16:47 crc 
kubenswrapper[4910]: I0226 22:16:47.267133 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d5679f497-dkpbm"] Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.278772 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6xmvf" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.288036 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56798b757f-2gk95"] Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.290941 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-2gk95" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.299581 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-2gk95"] Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.322353 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/865f4842-373e-4bc9-98cd-4ceabb03b9f9-config-data\") pod \"cloudkitty-db-sync-68pwg\" (UID: \"865f4842-373e-4bc9-98cd-4ceabb03b9f9\") " pod="openstack/cloudkitty-db-sync-68pwg" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.322404 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf7tx\" (UniqueName: \"kubernetes.io/projected/58f067fa-7653-4dd7-93ee-bef006c01109-kube-api-access-tf7tx\") pod \"placement-db-sync-2hp89\" (UID: \"58f067fa-7653-4dd7-93ee-bef006c01109\") " pod="openstack/placement-db-sync-2hp89" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.322430 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dccm\" (UniqueName: \"kubernetes.io/projected/eeb12d5b-0ec7-48d5-b1ef-9e378c030b75-kube-api-access-7dccm\") pod \"barbican-db-sync-lxj26\" (UID: \"eeb12d5b-0ec7-48d5-b1ef-9e378c030b75\") " 
pod="openstack/barbican-db-sync-lxj26" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.322464 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/865f4842-373e-4bc9-98cd-4ceabb03b9f9-combined-ca-bundle\") pod \"cloudkitty-db-sync-68pwg\" (UID: \"865f4842-373e-4bc9-98cd-4ceabb03b9f9\") " pod="openstack/cloudkitty-db-sync-68pwg" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.322503 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58f067fa-7653-4dd7-93ee-bef006c01109-config-data\") pod \"placement-db-sync-2hp89\" (UID: \"58f067fa-7653-4dd7-93ee-bef006c01109\") " pod="openstack/placement-db-sync-2hp89" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.322527 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58f067fa-7653-4dd7-93ee-bef006c01109-scripts\") pod \"placement-db-sync-2hp89\" (UID: \"58f067fa-7653-4dd7-93ee-bef006c01109\") " pod="openstack/placement-db-sync-2hp89" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.322552 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb12d5b-0ec7-48d5-b1ef-9e378c030b75-combined-ca-bundle\") pod \"barbican-db-sync-lxj26\" (UID: \"eeb12d5b-0ec7-48d5-b1ef-9e378c030b75\") " pod="openstack/barbican-db-sync-lxj26" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.322577 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58f067fa-7653-4dd7-93ee-bef006c01109-combined-ca-bundle\") pod \"placement-db-sync-2hp89\" (UID: \"58f067fa-7653-4dd7-93ee-bef006c01109\") " pod="openstack/placement-db-sync-2hp89" Feb 26 22:16:47 crc 
kubenswrapper[4910]: I0226 22:16:47.322593 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/865f4842-373e-4bc9-98cd-4ceabb03b9f9-scripts\") pod \"cloudkitty-db-sync-68pwg\" (UID: \"865f4842-373e-4bc9-98cd-4ceabb03b9f9\") " pod="openstack/cloudkitty-db-sync-68pwg" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.322618 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eeb12d5b-0ec7-48d5-b1ef-9e378c030b75-db-sync-config-data\") pod \"barbican-db-sync-lxj26\" (UID: \"eeb12d5b-0ec7-48d5-b1ef-9e378c030b75\") " pod="openstack/barbican-db-sync-lxj26" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.323664 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqn9g\" (UniqueName: \"kubernetes.io/projected/865f4842-373e-4bc9-98cd-4ceabb03b9f9-kube-api-access-tqn9g\") pod \"cloudkitty-db-sync-68pwg\" (UID: \"865f4842-373e-4bc9-98cd-4ceabb03b9f9\") " pod="openstack/cloudkitty-db-sync-68pwg" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.323783 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58f067fa-7653-4dd7-93ee-bef006c01109-logs\") pod \"placement-db-sync-2hp89\" (UID: \"58f067fa-7653-4dd7-93ee-bef006c01109\") " pod="openstack/placement-db-sync-2hp89" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.324016 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/865f4842-373e-4bc9-98cd-4ceabb03b9f9-certs\") pod \"cloudkitty-db-sync-68pwg\" (UID: \"865f4842-373e-4bc9-98cd-4ceabb03b9f9\") " pod="openstack/cloudkitty-db-sync-68pwg" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.331019 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/865f4842-373e-4bc9-98cd-4ceabb03b9f9-scripts\") pod \"cloudkitty-db-sync-68pwg\" (UID: \"865f4842-373e-4bc9-98cd-4ceabb03b9f9\") " pod="openstack/cloudkitty-db-sync-68pwg" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.331792 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/865f4842-373e-4bc9-98cd-4ceabb03b9f9-config-data\") pod \"cloudkitty-db-sync-68pwg\" (UID: \"865f4842-373e-4bc9-98cd-4ceabb03b9f9\") " pod="openstack/cloudkitty-db-sync-68pwg" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.333222 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eeb12d5b-0ec7-48d5-b1ef-9e378c030b75-db-sync-config-data\") pod \"barbican-db-sync-lxj26\" (UID: \"eeb12d5b-0ec7-48d5-b1ef-9e378c030b75\") " pod="openstack/barbican-db-sync-lxj26" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.335863 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb12d5b-0ec7-48d5-b1ef-9e378c030b75-combined-ca-bundle\") pod \"barbican-db-sync-lxj26\" (UID: \"eeb12d5b-0ec7-48d5-b1ef-9e378c030b75\") " pod="openstack/barbican-db-sync-lxj26" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.337355 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/865f4842-373e-4bc9-98cd-4ceabb03b9f9-certs\") pod \"cloudkitty-db-sync-68pwg\" (UID: \"865f4842-373e-4bc9-98cd-4ceabb03b9f9\") " pod="openstack/cloudkitty-db-sync-68pwg" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.341594 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/865f4842-373e-4bc9-98cd-4ceabb03b9f9-combined-ca-bundle\") pod \"cloudkitty-db-sync-68pwg\" (UID: 
\"865f4842-373e-4bc9-98cd-4ceabb03b9f9\") " pod="openstack/cloudkitty-db-sync-68pwg" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.343241 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74dc88fc-6bh9x" podStartSLOduration=3.3431985810000002 podStartE2EDuration="3.343198581s" podCreationTimestamp="2026-02-26 22:16:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:16:47.233101144 +0000 UTC m=+1292.312591685" watchObservedRunningTime="2026-02-26 22:16:47.343198581 +0000 UTC m=+1292.422689122" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.346058 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dccm\" (UniqueName: \"kubernetes.io/projected/eeb12d5b-0ec7-48d5-b1ef-9e378c030b75-kube-api-access-7dccm\") pod \"barbican-db-sync-lxj26\" (UID: \"eeb12d5b-0ec7-48d5-b1ef-9e378c030b75\") " pod="openstack/barbican-db-sync-lxj26" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.346183 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqn9g\" (UniqueName: \"kubernetes.io/projected/865f4842-373e-4bc9-98cd-4ceabb03b9f9-kube-api-access-tqn9g\") pod \"cloudkitty-db-sync-68pwg\" (UID: \"865f4842-373e-4bc9-98cd-4ceabb03b9f9\") " pod="openstack/cloudkitty-db-sync-68pwg" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.425480 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58f067fa-7653-4dd7-93ee-bef006c01109-logs\") pod \"placement-db-sync-2hp89\" (UID: \"58f067fa-7653-4dd7-93ee-bef006c01109\") " pod="openstack/placement-db-sync-2hp89" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.425751 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/86997c3a-d290-479d-8a29-aeb4fd60568d-config\") pod \"dnsmasq-dns-56798b757f-2gk95\" (UID: \"86997c3a-d290-479d-8a29-aeb4fd60568d\") " pod="openstack/dnsmasq-dns-56798b757f-2gk95" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.425791 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf7tx\" (UniqueName: \"kubernetes.io/projected/58f067fa-7653-4dd7-93ee-bef006c01109-kube-api-access-tf7tx\") pod \"placement-db-sync-2hp89\" (UID: \"58f067fa-7653-4dd7-93ee-bef006c01109\") " pod="openstack/placement-db-sync-2hp89" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.425830 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86997c3a-d290-479d-8a29-aeb4fd60568d-ovsdbserver-sb\") pod \"dnsmasq-dns-56798b757f-2gk95\" (UID: \"86997c3a-d290-479d-8a29-aeb4fd60568d\") " pod="openstack/dnsmasq-dns-56798b757f-2gk95" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.425848 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86997c3a-d290-479d-8a29-aeb4fd60568d-ovsdbserver-nb\") pod \"dnsmasq-dns-56798b757f-2gk95\" (UID: \"86997c3a-d290-479d-8a29-aeb4fd60568d\") " pod="openstack/dnsmasq-dns-56798b757f-2gk95" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.425883 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58f067fa-7653-4dd7-93ee-bef006c01109-config-data\") pod \"placement-db-sync-2hp89\" (UID: \"58f067fa-7653-4dd7-93ee-bef006c01109\") " pod="openstack/placement-db-sync-2hp89" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.425900 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/86997c3a-d290-479d-8a29-aeb4fd60568d-dns-svc\") pod \"dnsmasq-dns-56798b757f-2gk95\" (UID: \"86997c3a-d290-479d-8a29-aeb4fd60568d\") " pod="openstack/dnsmasq-dns-56798b757f-2gk95" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.425921 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58f067fa-7653-4dd7-93ee-bef006c01109-scripts\") pod \"placement-db-sync-2hp89\" (UID: \"58f067fa-7653-4dd7-93ee-bef006c01109\") " pod="openstack/placement-db-sync-2hp89" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.425948 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gz6g\" (UniqueName: \"kubernetes.io/projected/86997c3a-d290-479d-8a29-aeb4fd60568d-kube-api-access-9gz6g\") pod \"dnsmasq-dns-56798b757f-2gk95\" (UID: \"86997c3a-d290-479d-8a29-aeb4fd60568d\") " pod="openstack/dnsmasq-dns-56798b757f-2gk95" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.425965 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58f067fa-7653-4dd7-93ee-bef006c01109-combined-ca-bundle\") pod \"placement-db-sync-2hp89\" (UID: \"58f067fa-7653-4dd7-93ee-bef006c01109\") " pod="openstack/placement-db-sync-2hp89" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.426777 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58f067fa-7653-4dd7-93ee-bef006c01109-logs\") pod \"placement-db-sync-2hp89\" (UID: \"58f067fa-7653-4dd7-93ee-bef006c01109\") " pod="openstack/placement-db-sync-2hp89" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.432862 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58f067fa-7653-4dd7-93ee-bef006c01109-combined-ca-bundle\") pod 
\"placement-db-sync-2hp89\" (UID: \"58f067fa-7653-4dd7-93ee-bef006c01109\") " pod="openstack/placement-db-sync-2hp89" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.433095 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58f067fa-7653-4dd7-93ee-bef006c01109-config-data\") pod \"placement-db-sync-2hp89\" (UID: \"58f067fa-7653-4dd7-93ee-bef006c01109\") " pod="openstack/placement-db-sync-2hp89" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.437251 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-lxj26" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.439111 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58f067fa-7653-4dd7-93ee-bef006c01109-scripts\") pod \"placement-db-sync-2hp89\" (UID: \"58f067fa-7653-4dd7-93ee-bef006c01109\") " pod="openstack/placement-db-sync-2hp89" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.448821 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf7tx\" (UniqueName: \"kubernetes.io/projected/58f067fa-7653-4dd7-93ee-bef006c01109-kube-api-access-tf7tx\") pod \"placement-db-sync-2hp89\" (UID: \"58f067fa-7653-4dd7-93ee-bef006c01109\") " pod="openstack/placement-db-sync-2hp89" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.493747 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-68pwg" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.515922 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-2hp89" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.528059 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86997c3a-d290-479d-8a29-aeb4fd60568d-config\") pod \"dnsmasq-dns-56798b757f-2gk95\" (UID: \"86997c3a-d290-479d-8a29-aeb4fd60568d\") " pod="openstack/dnsmasq-dns-56798b757f-2gk95" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.529697 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86997c3a-d290-479d-8a29-aeb4fd60568d-config\") pod \"dnsmasq-dns-56798b757f-2gk95\" (UID: \"86997c3a-d290-479d-8a29-aeb4fd60568d\") " pod="openstack/dnsmasq-dns-56798b757f-2gk95" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.529777 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86997c3a-d290-479d-8a29-aeb4fd60568d-ovsdbserver-sb\") pod \"dnsmasq-dns-56798b757f-2gk95\" (UID: \"86997c3a-d290-479d-8a29-aeb4fd60568d\") " pod="openstack/dnsmasq-dns-56798b757f-2gk95" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.529820 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86997c3a-d290-479d-8a29-aeb4fd60568d-ovsdbserver-nb\") pod \"dnsmasq-dns-56798b757f-2gk95\" (UID: \"86997c3a-d290-479d-8a29-aeb4fd60568d\") " pod="openstack/dnsmasq-dns-56798b757f-2gk95" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.530744 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86997c3a-d290-479d-8a29-aeb4fd60568d-ovsdbserver-nb\") pod \"dnsmasq-dns-56798b757f-2gk95\" (UID: \"86997c3a-d290-479d-8a29-aeb4fd60568d\") " pod="openstack/dnsmasq-dns-56798b757f-2gk95" Feb 26 22:16:47 crc 
kubenswrapper[4910]: I0226 22:16:47.530755 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86997c3a-d290-479d-8a29-aeb4fd60568d-ovsdbserver-sb\") pod \"dnsmasq-dns-56798b757f-2gk95\" (UID: \"86997c3a-d290-479d-8a29-aeb4fd60568d\") " pod="openstack/dnsmasq-dns-56798b757f-2gk95" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.531007 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86997c3a-d290-479d-8a29-aeb4fd60568d-dns-svc\") pod \"dnsmasq-dns-56798b757f-2gk95\" (UID: \"86997c3a-d290-479d-8a29-aeb4fd60568d\") " pod="openstack/dnsmasq-dns-56798b757f-2gk95" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.531077 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gz6g\" (UniqueName: \"kubernetes.io/projected/86997c3a-d290-479d-8a29-aeb4fd60568d-kube-api-access-9gz6g\") pod \"dnsmasq-dns-56798b757f-2gk95\" (UID: \"86997c3a-d290-479d-8a29-aeb4fd60568d\") " pod="openstack/dnsmasq-dns-56798b757f-2gk95" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.531372 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86997c3a-d290-479d-8a29-aeb4fd60568d-dns-svc\") pod \"dnsmasq-dns-56798b757f-2gk95\" (UID: \"86997c3a-d290-479d-8a29-aeb4fd60568d\") " pod="openstack/dnsmasq-dns-56798b757f-2gk95" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.552270 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gz6g\" (UniqueName: \"kubernetes.io/projected/86997c3a-d290-479d-8a29-aeb4fd60568d-kube-api-access-9gz6g\") pod \"dnsmasq-dns-56798b757f-2gk95\" (UID: \"86997c3a-d290-479d-8a29-aeb4fd60568d\") " pod="openstack/dnsmasq-dns-56798b757f-2gk95" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.728406 4910 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.731077 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.739347 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.747028 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wmnc7" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.747246 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.783595 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.836892 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-2gk95" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.870781 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4eda553d-35a9-4df7-a9f0-984c213a2263-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4eda553d-35a9-4df7-a9f0-984c213a2263\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.870870 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eda553d-35a9-4df7-a9f0-984c213a2263-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4eda553d-35a9-4df7-a9f0-984c213a2263\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.870912 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eda553d-35a9-4df7-a9f0-984c213a2263-logs\") pod \"glance-default-external-api-0\" (UID: \"4eda553d-35a9-4df7-a9f0-984c213a2263\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.870943 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eda553d-35a9-4df7-a9f0-984c213a2263-config-data\") pod \"glance-default-external-api-0\" (UID: \"4eda553d-35a9-4df7-a9f0-984c213a2263\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.870975 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-07357e7e-76cc-49df-b0f0-87819efba45e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e\") pod 
\"glance-default-external-api-0\" (UID: \"4eda553d-35a9-4df7-a9f0-984c213a2263\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.870991 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4eda553d-35a9-4df7-a9f0-984c213a2263-scripts\") pod \"glance-default-external-api-0\" (UID: \"4eda553d-35a9-4df7-a9f0-984c213a2263\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.871033 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knwbs\" (UniqueName: \"kubernetes.io/projected/4eda553d-35a9-4df7-a9f0-984c213a2263-kube-api-access-knwbs\") pod \"glance-default-external-api-0\" (UID: \"4eda553d-35a9-4df7-a9f0-984c213a2263\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.902208 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.903717 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.907988 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.930317 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.972796 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eda553d-35a9-4df7-a9f0-984c213a2263-config-data\") pod \"glance-default-external-api-0\" (UID: \"4eda553d-35a9-4df7-a9f0-984c213a2263\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.972854 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcfb77bd-5556-45cf-a559-90cd9467d18d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bcfb77bd-5556-45cf-a559-90cd9467d18d\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.972897 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-07357e7e-76cc-49df-b0f0-87819efba45e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e\") pod \"glance-default-external-api-0\" (UID: \"4eda553d-35a9-4df7-a9f0-984c213a2263\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.972928 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4eda553d-35a9-4df7-a9f0-984c213a2263-scripts\") pod \"glance-default-external-api-0\" (UID: \"4eda553d-35a9-4df7-a9f0-984c213a2263\") " pod="openstack/glance-default-external-api-0" 
Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.972954 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcfb77bd-5556-45cf-a559-90cd9467d18d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bcfb77bd-5556-45cf-a559-90cd9467d18d\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.972995 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knwbs\" (UniqueName: \"kubernetes.io/projected/4eda553d-35a9-4df7-a9f0-984c213a2263-kube-api-access-knwbs\") pod \"glance-default-external-api-0\" (UID: \"4eda553d-35a9-4df7-a9f0-984c213a2263\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.973018 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bcfb77bd-5556-45cf-a559-90cd9467d18d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bcfb77bd-5556-45cf-a559-90cd9467d18d\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.973044 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4eda553d-35a9-4df7-a9f0-984c213a2263-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4eda553d-35a9-4df7-a9f0-984c213a2263\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.973062 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcfb77bd-5556-45cf-a559-90cd9467d18d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bcfb77bd-5556-45cf-a559-90cd9467d18d\") " 
pod="openstack/glance-default-internal-api-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.973098 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qszqb\" (UniqueName: \"kubernetes.io/projected/bcfb77bd-5556-45cf-a559-90cd9467d18d-kube-api-access-qszqb\") pod \"glance-default-internal-api-0\" (UID: \"bcfb77bd-5556-45cf-a559-90cd9467d18d\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.973129 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bcfb77bd-5556-45cf-a559-90cd9467d18d-logs\") pod \"glance-default-internal-api-0\" (UID: \"bcfb77bd-5556-45cf-a559-90cd9467d18d\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.973229 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eda553d-35a9-4df7-a9f0-984c213a2263-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4eda553d-35a9-4df7-a9f0-984c213a2263\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.973263 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\") pod \"glance-default-internal-api-0\" (UID: \"bcfb77bd-5556-45cf-a559-90cd9467d18d\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.973289 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eda553d-35a9-4df7-a9f0-984c213a2263-logs\") pod \"glance-default-external-api-0\" (UID: 
\"4eda553d-35a9-4df7-a9f0-984c213a2263\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.973632 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eda553d-35a9-4df7-a9f0-984c213a2263-logs\") pod \"glance-default-external-api-0\" (UID: \"4eda553d-35a9-4df7-a9f0-984c213a2263\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.974082 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4eda553d-35a9-4df7-a9f0-984c213a2263-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4eda553d-35a9-4df7-a9f0-984c213a2263\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.981893 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eda553d-35a9-4df7-a9f0-984c213a2263-config-data\") pod \"glance-default-external-api-0\" (UID: \"4eda553d-35a9-4df7-a9f0-984c213a2263\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.982736 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4eda553d-35a9-4df7-a9f0-984c213a2263-scripts\") pod \"glance-default-external-api-0\" (UID: \"4eda553d-35a9-4df7-a9f0-984c213a2263\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.986800 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eda553d-35a9-4df7-a9f0-984c213a2263-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4eda553d-35a9-4df7-a9f0-984c213a2263\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:47 crc kubenswrapper[4910]: 
I0226 22:16:47.995006 4910 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 22:16:47 crc kubenswrapper[4910]: I0226 22:16:47.995042 4910 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-07357e7e-76cc-49df-b0f0-87819efba45e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e\") pod \"glance-default-external-api-0\" (UID: \"4eda553d-35a9-4df7-a9f0-984c213a2263\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/94049027790e22a18bf8e430a446734369cbf41eb4d29ad3f70f496aca7abf57/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.008793 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knwbs\" (UniqueName: \"kubernetes.io/projected/4eda553d-35a9-4df7-a9f0-984c213a2263-kube-api-access-knwbs\") pod \"glance-default-external-api-0\" (UID: \"4eda553d-35a9-4df7-a9f0-984c213a2263\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.014200 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-6xcs7"] Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.040449 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d5679f497-dkpbm"] Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.062083 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xkqnq"] Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.062546 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-07357e7e-76cc-49df-b0f0-87819efba45e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e\") pod \"glance-default-external-api-0\" (UID: 
\"4eda553d-35a9-4df7-a9f0-984c213a2263\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.077855 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qszqb\" (UniqueName: \"kubernetes.io/projected/bcfb77bd-5556-45cf-a559-90cd9467d18d-kube-api-access-qszqb\") pod \"glance-default-internal-api-0\" (UID: \"bcfb77bd-5556-45cf-a559-90cd9467d18d\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.077926 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bcfb77bd-5556-45cf-a559-90cd9467d18d-logs\") pod \"glance-default-internal-api-0\" (UID: \"bcfb77bd-5556-45cf-a559-90cd9467d18d\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.077979 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\") pod \"glance-default-internal-api-0\" (UID: \"bcfb77bd-5556-45cf-a559-90cd9467d18d\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.078024 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcfb77bd-5556-45cf-a559-90cd9467d18d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bcfb77bd-5556-45cf-a559-90cd9467d18d\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.078070 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcfb77bd-5556-45cf-a559-90cd9467d18d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"bcfb77bd-5556-45cf-a559-90cd9467d18d\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.078112 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bcfb77bd-5556-45cf-a559-90cd9467d18d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bcfb77bd-5556-45cf-a559-90cd9467d18d\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.078142 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcfb77bd-5556-45cf-a559-90cd9467d18d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bcfb77bd-5556-45cf-a559-90cd9467d18d\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.079240 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bcfb77bd-5556-45cf-a559-90cd9467d18d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bcfb77bd-5556-45cf-a559-90cd9467d18d\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.078962 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bcfb77bd-5556-45cf-a559-90cd9467d18d-logs\") pod \"glance-default-internal-api-0\" (UID: \"bcfb77bd-5556-45cf-a559-90cd9467d18d\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.081625 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcfb77bd-5556-45cf-a559-90cd9467d18d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bcfb77bd-5556-45cf-a559-90cd9467d18d\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:48 crc kubenswrapper[4910]: 
I0226 22:16:48.083465 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcfb77bd-5556-45cf-a559-90cd9467d18d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bcfb77bd-5556-45cf-a559-90cd9467d18d\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.086591 4910 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.086625 4910 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\") pod \"glance-default-internal-api-0\" (UID: \"bcfb77bd-5556-45cf-a559-90cd9467d18d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a1fe3b589a0972626c208f995164ae337879055052c9895e085608499baca4b3/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.088909 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcfb77bd-5556-45cf-a559-90cd9467d18d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bcfb77bd-5556-45cf-a559-90cd9467d18d\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.187095 4910 generic.go:334] "Generic (PLEG): container finished" podID="ebd82dda-1c7e-4f54-9d65-2bd8921fbf86" containerID="a27e80fb9aa72c36187abd7a286c5c33f48c81c60b997174786b4c7a2eafbe0a" exitCode=0 Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.187185 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-6bh9x" 
event={"ID":"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86","Type":"ContainerDied","Data":"a27e80fb9aa72c36187abd7a286c5c33f48c81c60b997174786b4c7a2eafbe0a"} Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.194984 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qszqb\" (UniqueName: \"kubernetes.io/projected/bcfb77bd-5556-45cf-a559-90cd9467d18d-kube-api-access-qszqb\") pod \"glance-default-internal-api-0\" (UID: \"bcfb77bd-5556-45cf-a559-90cd9467d18d\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.286539 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6xmvf"] Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.310377 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2hp89"] Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.323145 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-lxj26"] Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.342289 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.350320 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-68pwg"] Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.359651 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.624627 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\") pod \"glance-default-internal-api-0\" (UID: \"bcfb77bd-5556-45cf-a559-90cd9467d18d\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.778437 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.832462 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.952092 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-6bh9x" Feb 26 22:16:48 crc kubenswrapper[4910]: I0226 22:16:48.959848 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-t6mjz" Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.026718 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-dns-svc\") pod \"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86\" (UID: \"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86\") " Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.027442 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-ovsdbserver-nb\") pod \"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86\" (UID: \"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86\") " Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.027733 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-config\") pod \"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86\" (UID: \"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86\") " Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.027847 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vnwv\" (UniqueName: \"kubernetes.io/projected/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-kube-api-access-2vnwv\") pod \"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86\" (UID: \"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86\") " Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.027919 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e42b4f12-ddac-4a7e-8fcf-6ce66da40085-operator-scripts\") pod \"e42b4f12-ddac-4a7e-8fcf-6ce66da40085\" (UID: \"e42b4f12-ddac-4a7e-8fcf-6ce66da40085\") " Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.028197 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-4lfbf\" (UniqueName: \"kubernetes.io/projected/e42b4f12-ddac-4a7e-8fcf-6ce66da40085-kube-api-access-4lfbf\") pod \"e42b4f12-ddac-4a7e-8fcf-6ce66da40085\" (UID: \"e42b4f12-ddac-4a7e-8fcf-6ce66da40085\") " Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.028282 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-ovsdbserver-sb\") pod \"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86\" (UID: \"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86\") " Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.031523 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e42b4f12-ddac-4a7e-8fcf-6ce66da40085-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e42b4f12-ddac-4a7e-8fcf-6ce66da40085" (UID: "e42b4f12-ddac-4a7e-8fcf-6ce66da40085"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.045119 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e42b4f12-ddac-4a7e-8fcf-6ce66da40085-kube-api-access-4lfbf" (OuterVolumeSpecName: "kube-api-access-4lfbf") pod "e42b4f12-ddac-4a7e-8fcf-6ce66da40085" (UID: "e42b4f12-ddac-4a7e-8fcf-6ce66da40085"). InnerVolumeSpecName "kube-api-access-4lfbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.046150 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-kube-api-access-2vnwv" (OuterVolumeSpecName: "kube-api-access-2vnwv") pod "ebd82dda-1c7e-4f54-9d65-2bd8921fbf86" (UID: "ebd82dda-1c7e-4f54-9d65-2bd8921fbf86"). InnerVolumeSpecName "kube-api-access-2vnwv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.131871 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lfbf\" (UniqueName: \"kubernetes.io/projected/e42b4f12-ddac-4a7e-8fcf-6ce66da40085-kube-api-access-4lfbf\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.131902 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vnwv\" (UniqueName: \"kubernetes.io/projected/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-kube-api-access-2vnwv\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.131916 4910 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e42b4f12-ddac-4a7e-8fcf-6ce66da40085-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.233029 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6xmvf" event={"ID":"0474fe2b-3094-4fd5-8f3a-1e9124acb82a","Type":"ContainerStarted","Data":"c7a9e41eeddc6ead906750b1d93dffab8eec0636b3ff6fef26234ad2833f95e5"} Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.240082 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t6mjz" event={"ID":"e42b4f12-ddac-4a7e-8fcf-6ce66da40085","Type":"ContainerDied","Data":"d051d0112db8598588fca3ca861a170485a326ea64d5811add07b1ba8c4d21b5"} Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.240117 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d051d0112db8598588fca3ca861a170485a326ea64d5811add07b1ba8c4d21b5" Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.240236 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-t6mjz" Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.250104 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2hp89" event={"ID":"58f067fa-7653-4dd7-93ee-bef006c01109","Type":"ContainerStarted","Data":"b9bfe0da523e64440bf1ff147295a224e8dbd4a9415da4a4b06564542057b79b"} Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.257074 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xkqnq" event={"ID":"d9559b53-2f73-4123-8bd9-0848adca6e34","Type":"ContainerStarted","Data":"2f7da60c5c65b5fc65ff6f486756c8fccc0a679eef4d452e76ce6c6ec2178e98"} Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.259904 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-68pwg" event={"ID":"865f4842-373e-4bc9-98cd-4ceabb03b9f9","Type":"ContainerStarted","Data":"69d91b00e388791c004ff8e0082135b75f8b7013cc4f8799fa43a16da492e212"} Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.262642 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lxj26" event={"ID":"eeb12d5b-0ec7-48d5-b1ef-9e378c030b75","Type":"ContainerStarted","Data":"ccf59614b993c1f801c733ebb7e16e88b98d719f55aec99a3ffbb826eea396f2"} Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.269227 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5679f497-dkpbm" event={"ID":"097aefae-b601-4bd6-83d7-0fc82ae942c1","Type":"ContainerStarted","Data":"dffd6aea1ee29e0b8261b525b2588700140a5bf6071f2ef86c0fed9337577dc5"} Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.269262 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5679f497-dkpbm" event={"ID":"097aefae-b601-4bd6-83d7-0fc82ae942c1","Type":"ContainerStarted","Data":"64799c9395dd6fdad90b9cd1de0ef076c40b629f9e0eb03e8f46732ad18acbe0"} Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 
22:16:49.271313 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f","Type":"ContainerStarted","Data":"d1217d94288a2313f0e7f47672dcd1ee7008213c80c539903f93526ac05c745e"} Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.294861 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.312950 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-6bh9x" Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.313642 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-6bh9x" event={"ID":"ebd82dda-1c7e-4f54-9d65-2bd8921fbf86","Type":"ContainerDied","Data":"f518f60bffb99c7cf6e9eecbd0db5ff00b4119b8235e601ab28816e87fd3c774"} Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.313713 4910 scope.go:117] "RemoveContainer" containerID="a27e80fb9aa72c36187abd7a286c5c33f48c81c60b997174786b4c7a2eafbe0a" Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.340224 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6xcs7" event={"ID":"d861622f-ed9a-4709-824c-bb291c4639a5","Type":"ContainerStarted","Data":"f03fe6e62667a56206e77cab0cf82510f09834dda185233da06368b1cfb97279"} Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.412959 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ebd82dda-1c7e-4f54-9d65-2bd8921fbf86" (UID: "ebd82dda-1c7e-4f54-9d65-2bd8921fbf86"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.423577 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-2gk95"] Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.446495 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ebd82dda-1c7e-4f54-9d65-2bd8921fbf86" (UID: "ebd82dda-1c7e-4f54-9d65-2bd8921fbf86"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.450891 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ebd82dda-1c7e-4f54-9d65-2bd8921fbf86" (UID: "ebd82dda-1c7e-4f54-9d65-2bd8921fbf86"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.450871 4910 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.455528 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-config" (OuterVolumeSpecName: "config") pod "ebd82dda-1c7e-4f54-9d65-2bd8921fbf86" (UID: "ebd82dda-1c7e-4f54-9d65-2bd8921fbf86"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:16:49 crc kubenswrapper[4910]: W0226 22:16:49.477603 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86997c3a_d290_479d_8a29_aeb4fd60568d.slice/crio-cdc6d6d7283eb7718a7990de08d617579b375f214fecff6aeb3a166c81dff8b3 WatchSource:0}: Error finding container cdc6d6d7283eb7718a7990de08d617579b375f214fecff6aeb3a166c81dff8b3: Status 404 returned error can't find the container with id cdc6d6d7283eb7718a7990de08d617579b375f214fecff6aeb3a166c81dff8b3 Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.566329 4910 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.566584 4910 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.566665 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.716287 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-6bh9x"] Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.804931 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-6bh9x"] Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.817244 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.838243 4910 scope.go:117] "RemoveContainer" 
containerID="f489371b117fa1a4cc97bdea7fc723995c555b4def0399052eac3827164dd2d9" Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.950980 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebd82dda-1c7e-4f54-9d65-2bd8921fbf86" path="/var/lib/kubelet/pods/ebd82dda-1c7e-4f54-9d65-2bd8921fbf86/volumes" Feb 26 22:16:49 crc kubenswrapper[4910]: I0226 22:16:49.952223 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.000039 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5679f497-dkpbm" Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.091011 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/097aefae-b601-4bd6-83d7-0fc82ae942c1-config\") pod \"097aefae-b601-4bd6-83d7-0fc82ae942c1\" (UID: \"097aefae-b601-4bd6-83d7-0fc82ae942c1\") " Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.091246 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/097aefae-b601-4bd6-83d7-0fc82ae942c1-ovsdbserver-nb\") pod \"097aefae-b601-4bd6-83d7-0fc82ae942c1\" (UID: \"097aefae-b601-4bd6-83d7-0fc82ae942c1\") " Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.091289 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/097aefae-b601-4bd6-83d7-0fc82ae942c1-dns-svc\") pod \"097aefae-b601-4bd6-83d7-0fc82ae942c1\" (UID: \"097aefae-b601-4bd6-83d7-0fc82ae942c1\") " Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.091310 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xssqm\" (UniqueName: \"kubernetes.io/projected/097aefae-b601-4bd6-83d7-0fc82ae942c1-kube-api-access-xssqm\") pod 
\"097aefae-b601-4bd6-83d7-0fc82ae942c1\" (UID: \"097aefae-b601-4bd6-83d7-0fc82ae942c1\") " Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.091383 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/097aefae-b601-4bd6-83d7-0fc82ae942c1-ovsdbserver-sb\") pod \"097aefae-b601-4bd6-83d7-0fc82ae942c1\" (UID: \"097aefae-b601-4bd6-83d7-0fc82ae942c1\") " Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.113330 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.122215 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/097aefae-b601-4bd6-83d7-0fc82ae942c1-kube-api-access-xssqm" (OuterVolumeSpecName: "kube-api-access-xssqm") pod "097aefae-b601-4bd6-83d7-0fc82ae942c1" (UID: "097aefae-b601-4bd6-83d7-0fc82ae942c1"). InnerVolumeSpecName "kube-api-access-xssqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.166722 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/097aefae-b601-4bd6-83d7-0fc82ae942c1-config" (OuterVolumeSpecName: "config") pod "097aefae-b601-4bd6-83d7-0fc82ae942c1" (UID: "097aefae-b601-4bd6-83d7-0fc82ae942c1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.195186 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/097aefae-b601-4bd6-83d7-0fc82ae942c1-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.195213 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xssqm\" (UniqueName: \"kubernetes.io/projected/097aefae-b601-4bd6-83d7-0fc82ae942c1-kube-api-access-xssqm\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.210882 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/097aefae-b601-4bd6-83d7-0fc82ae942c1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "097aefae-b601-4bd6-83d7-0fc82ae942c1" (UID: "097aefae-b601-4bd6-83d7-0fc82ae942c1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.210933 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/097aefae-b601-4bd6-83d7-0fc82ae942c1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "097aefae-b601-4bd6-83d7-0fc82ae942c1" (UID: "097aefae-b601-4bd6-83d7-0fc82ae942c1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.234584 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/097aefae-b601-4bd6-83d7-0fc82ae942c1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "097aefae-b601-4bd6-83d7-0fc82ae942c1" (UID: "097aefae-b601-4bd6-83d7-0fc82ae942c1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.297473 4910 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/097aefae-b601-4bd6-83d7-0fc82ae942c1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.297508 4910 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/097aefae-b601-4bd6-83d7-0fc82ae942c1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.297517 4910 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/097aefae-b601-4bd6-83d7-0fc82ae942c1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.350843 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.483938 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xkqnq" event={"ID":"d9559b53-2f73-4123-8bd9-0848adca6e34","Type":"ContainerStarted","Data":"c5c8fc95e5d919a36830541cfe3dbe1a28cd76416cc1c384d7175ddc6a0c0653"} Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.518760 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5ac37f44-e173-4927-b8ea-44741aa983c0","Type":"ContainerStarted","Data":"345f90ddbaa85b12804464a0a4d6fa5140937eeccda0d328ea3f03fc725133dc"} Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.547349 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xkqnq" podStartSLOduration=4.547334147 podStartE2EDuration="4.547334147s" podCreationTimestamp="2026-02-26 22:16:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:16:50.530593681 +0000 UTC m=+1295.610084222" watchObservedRunningTime="2026-02-26 22:16:50.547334147 +0000 UTC m=+1295.626824688" Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.549310 4910 generic.go:334] "Generic (PLEG): container finished" podID="097aefae-b601-4bd6-83d7-0fc82ae942c1" containerID="dffd6aea1ee29e0b8261b525b2588700140a5bf6071f2ef86c0fed9337577dc5" exitCode=0 Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.549373 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5679f497-dkpbm" event={"ID":"097aefae-b601-4bd6-83d7-0fc82ae942c1","Type":"ContainerDied","Data":"dffd6aea1ee29e0b8261b525b2588700140a5bf6071f2ef86c0fed9337577dc5"} Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.549401 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5679f497-dkpbm" event={"ID":"097aefae-b601-4bd6-83d7-0fc82ae942c1","Type":"ContainerDied","Data":"64799c9395dd6fdad90b9cd1de0ef076c40b629f9e0eb03e8f46732ad18acbe0"} Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.549420 4910 scope.go:117] "RemoveContainer" containerID="dffd6aea1ee29e0b8261b525b2588700140a5bf6071f2ef86c0fed9337577dc5" Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.549527 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d5679f497-dkpbm" Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.602269 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-2gk95" event={"ID":"86997c3a-d290-479d-8a29-aeb4fd60568d","Type":"ContainerStarted","Data":"cdc6d6d7283eb7718a7990de08d617579b375f214fecff6aeb3a166c81dff8b3"} Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.692900 4910 scope.go:117] "RemoveContainer" containerID="dffd6aea1ee29e0b8261b525b2588700140a5bf6071f2ef86c0fed9337577dc5" Feb 26 22:16:50 crc kubenswrapper[4910]: E0226 22:16:50.702541 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dffd6aea1ee29e0b8261b525b2588700140a5bf6071f2ef86c0fed9337577dc5\": container with ID starting with dffd6aea1ee29e0b8261b525b2588700140a5bf6071f2ef86c0fed9337577dc5 not found: ID does not exist" containerID="dffd6aea1ee29e0b8261b525b2588700140a5bf6071f2ef86c0fed9337577dc5" Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.702603 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dffd6aea1ee29e0b8261b525b2588700140a5bf6071f2ef86c0fed9337577dc5"} err="failed to get container status \"dffd6aea1ee29e0b8261b525b2588700140a5bf6071f2ef86c0fed9337577dc5\": rpc error: code = NotFound desc = could not find container \"dffd6aea1ee29e0b8261b525b2588700140a5bf6071f2ef86c0fed9337577dc5\": container with ID starting with dffd6aea1ee29e0b8261b525b2588700140a5bf6071f2ef86c0fed9337577dc5 not found: ID does not exist" Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.723234 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d5679f497-dkpbm"] Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.729096 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d5679f497-dkpbm"] Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 
22:16:50.731218 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"30b027eb-e942-4121-aebc-776d616b902e","Type":"ContainerStarted","Data":"7b8d1c022876228f5bb6963f87a399d6ad5628d449f71e3dcc262ae444ec3d42"} Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.731264 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"30b027eb-e942-4121-aebc-776d616b902e","Type":"ContainerStarted","Data":"f3490e22be193a4a7bf0ac42658676a6f790be3fd5453be40e22ce5b65724547"} Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.733949 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4eda553d-35a9-4df7-a9f0-984c213a2263","Type":"ContainerStarted","Data":"f19be5fa7156e70b3e5108705e81e152e6407b3615ad322a3d5613d732a3e15c"} Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.754500 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bcfb77bd-5556-45cf-a559-90cd9467d18d","Type":"ContainerStarted","Data":"234a6943aaa35b51fd3dbff4c41fd1fdba5cc64857c1753271769be479b28055"} Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.759018 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6xmvf" event={"ID":"0474fe2b-3094-4fd5-8f3a-1e9124acb82a","Type":"ContainerStarted","Data":"13844c402da3269ec4a9b09ca8a1b0440abcd52e0735173e624113f4963b8379"} Feb 26 22:16:50 crc kubenswrapper[4910]: I0226 22:16:50.801491 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-6xmvf" podStartSLOduration=4.801471555 podStartE2EDuration="4.801471555s" podCreationTimestamp="2026-02-26 22:16:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:16:50.774126123 +0000 UTC m=+1295.853616664" 
watchObservedRunningTime="2026-02-26 22:16:50.801471555 +0000 UTC m=+1295.880962096" Feb 26 22:16:51 crc kubenswrapper[4910]: I0226 22:16:51.776794 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bcfb77bd-5556-45cf-a559-90cd9467d18d","Type":"ContainerStarted","Data":"68620488bcbdac49f250cb3622234ea718a02a33a24a5f60adfa64f62bffe887"} Feb 26 22:16:51 crc kubenswrapper[4910]: I0226 22:16:51.797736 4910 generic.go:334] "Generic (PLEG): container finished" podID="86997c3a-d290-479d-8a29-aeb4fd60568d" containerID="4989b94f9e0425fe4eb8d729805fcf4e90b5507203da60a5fda5c897ec293055" exitCode=0 Feb 26 22:16:51 crc kubenswrapper[4910]: I0226 22:16:51.797822 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-2gk95" event={"ID":"86997c3a-d290-479d-8a29-aeb4fd60568d","Type":"ContainerDied","Data":"4989b94f9e0425fe4eb8d729805fcf4e90b5507203da60a5fda5c897ec293055"} Feb 26 22:16:51 crc kubenswrapper[4910]: I0226 22:16:51.815179 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"30b027eb-e942-4121-aebc-776d616b902e","Type":"ContainerStarted","Data":"dda0bf750b54f4fa9a2578c8462d943a4e15d98726c4740165073d0ec7d64eff"} Feb 26 22:16:51 crc kubenswrapper[4910]: I0226 22:16:51.815211 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"30b027eb-e942-4121-aebc-776d616b902e","Type":"ContainerStarted","Data":"788ed7a2788b0ac998e24925071ea3e037e36893f18c70e3e40808a960383c23"} Feb 26 22:16:51 crc kubenswrapper[4910]: I0226 22:16:51.815221 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"30b027eb-e942-4121-aebc-776d616b902e","Type":"ContainerStarted","Data":"6df98ee39c526067a3294472a89ffef9a95c78fbfa1730814976e03df95fdbd4"} Feb 26 22:16:51 crc kubenswrapper[4910]: I0226 22:16:51.857390 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"4eda553d-35a9-4df7-a9f0-984c213a2263","Type":"ContainerStarted","Data":"45d53272197f3ee08c1edc053cfbb4f4e18ddb04b7c9db067c0d6acd0006e981"} Feb 26 22:16:51 crc kubenswrapper[4910]: I0226 22:16:51.975017 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="097aefae-b601-4bd6-83d7-0fc82ae942c1" path="/var/lib/kubelet/pods/097aefae-b601-4bd6-83d7-0fc82ae942c1/volumes" Feb 26 22:16:52 crc kubenswrapper[4910]: I0226 22:16:52.871640 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bcfb77bd-5556-45cf-a559-90cd9467d18d","Type":"ContainerStarted","Data":"d16ad1a1179a31c744e1cd95acf28d87bce77cf36a0f9b96aba01768db6a42e2"} Feb 26 22:16:52 crc kubenswrapper[4910]: I0226 22:16:52.871784 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="bcfb77bd-5556-45cf-a559-90cd9467d18d" containerName="glance-log" containerID="cri-o://68620488bcbdac49f250cb3622234ea718a02a33a24a5f60adfa64f62bffe887" gracePeriod=30 Feb 26 22:16:52 crc kubenswrapper[4910]: I0226 22:16:52.871894 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="bcfb77bd-5556-45cf-a559-90cd9467d18d" containerName="glance-httpd" containerID="cri-o://d16ad1a1179a31c744e1cd95acf28d87bce77cf36a0f9b96aba01768db6a42e2" gracePeriod=30 Feb 26 22:16:52 crc kubenswrapper[4910]: I0226 22:16:52.884708 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-2gk95" event={"ID":"86997c3a-d290-479d-8a29-aeb4fd60568d","Type":"ContainerStarted","Data":"549457cc5e302a6426937e9b7616fa882281cd65e6883f98865915f81aae152c"} Feb 26 22:16:52 crc kubenswrapper[4910]: I0226 22:16:52.884977 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56798b757f-2gk95" Feb 26 22:16:52 crc 
kubenswrapper[4910]: I0226 22:16:52.915950 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"30b027eb-e942-4121-aebc-776d616b902e","Type":"ContainerStarted","Data":"b411b3abc33ae5fea93943d5bac3403addae029aee1fb2340ff8767cf81a19ea"} Feb 26 22:16:52 crc kubenswrapper[4910]: I0226 22:16:52.916208 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"30b027eb-e942-4121-aebc-776d616b902e","Type":"ContainerStarted","Data":"f67cc42010762361e157955cc37dd0c9c9e57c92a507b672006f1f9eb4b21042"} Feb 26 22:16:52 crc kubenswrapper[4910]: I0226 22:16:52.919438 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4eda553d-35a9-4df7-a9f0-984c213a2263","Type":"ContainerStarted","Data":"974b70dcec426d3c956dbc51a4732a65351ea0d36ea18c730568840c9e7fbd80"} Feb 26 22:16:52 crc kubenswrapper[4910]: I0226 22:16:52.919558 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4eda553d-35a9-4df7-a9f0-984c213a2263" containerName="glance-log" containerID="cri-o://45d53272197f3ee08c1edc053cfbb4f4e18ddb04b7c9db067c0d6acd0006e981" gracePeriod=30 Feb 26 22:16:52 crc kubenswrapper[4910]: I0226 22:16:52.919862 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4eda553d-35a9-4df7-a9f0-984c213a2263" containerName="glance-httpd" containerID="cri-o://974b70dcec426d3c956dbc51a4732a65351ea0d36ea18c730568840c9e7fbd80" gracePeriod=30 Feb 26 22:16:52 crc kubenswrapper[4910]: I0226 22:16:52.932671 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.932656023 podStartE2EDuration="6.932656023s" podCreationTimestamp="2026-02-26 22:16:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:16:52.896414374 +0000 UTC m=+1297.975904925" watchObservedRunningTime="2026-02-26 22:16:52.932656023 +0000 UTC m=+1298.012146564" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.003882 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=42.047658875 podStartE2EDuration="50.003866026s" podCreationTimestamp="2026-02-26 22:16:03 +0000 UTC" firstStartedPulling="2026-02-26 22:16:40.760715334 +0000 UTC m=+1285.840205875" lastFinishedPulling="2026-02-26 22:16:48.716922485 +0000 UTC m=+1293.796413026" observedRunningTime="2026-02-26 22:16:52.999407561 +0000 UTC m=+1298.078898102" watchObservedRunningTime="2026-02-26 22:16:53.003866026 +0000 UTC m=+1298.083356557" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.006977 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56798b757f-2gk95" podStartSLOduration=6.006966733 podStartE2EDuration="6.006966733s" podCreationTimestamp="2026-02-26 22:16:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:16:52.931293105 +0000 UTC m=+1298.010783646" watchObservedRunningTime="2026-02-26 22:16:53.006966733 +0000 UTC m=+1298.086457274" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.029147 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.02912779 podStartE2EDuration="7.02912779s" podCreationTimestamp="2026-02-26 22:16:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:16:53.020664514 +0000 UTC m=+1298.100155065" watchObservedRunningTime="2026-02-26 22:16:53.02912779 +0000 UTC m=+1298.108618331" Feb 26 22:16:53 crc 
kubenswrapper[4910]: I0226 22:16:53.271860 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-2gk95"] Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.322184 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-lmc2p"] Feb 26 22:16:53 crc kubenswrapper[4910]: E0226 22:16:53.322729 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e42b4f12-ddac-4a7e-8fcf-6ce66da40085" containerName="mariadb-account-create-update" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.322746 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="e42b4f12-ddac-4a7e-8fcf-6ce66da40085" containerName="mariadb-account-create-update" Feb 26 22:16:53 crc kubenswrapper[4910]: E0226 22:16:53.322757 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebd82dda-1c7e-4f54-9d65-2bd8921fbf86" containerName="dnsmasq-dns" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.322765 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebd82dda-1c7e-4f54-9d65-2bd8921fbf86" containerName="dnsmasq-dns" Feb 26 22:16:53 crc kubenswrapper[4910]: E0226 22:16:53.322818 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebd82dda-1c7e-4f54-9d65-2bd8921fbf86" containerName="init" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.322827 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebd82dda-1c7e-4f54-9d65-2bd8921fbf86" containerName="init" Feb 26 22:16:53 crc kubenswrapper[4910]: E0226 22:16:53.322844 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="097aefae-b601-4bd6-83d7-0fc82ae942c1" containerName="init" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.322855 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="097aefae-b601-4bd6-83d7-0fc82ae942c1" containerName="init" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.323096 4910 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ebd82dda-1c7e-4f54-9d65-2bd8921fbf86" containerName="dnsmasq-dns" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.323114 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="e42b4f12-ddac-4a7e-8fcf-6ce66da40085" containerName="mariadb-account-create-update" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.323131 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="097aefae-b601-4bd6-83d7-0fc82ae942c1" containerName="init" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.325209 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-lmc2p" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.335120 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.349904 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-lmc2p"] Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.494246 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-lmc2p\" (UID: \"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lmc2p" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.494599 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps6g8\" (UniqueName: \"kubernetes.io/projected/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-kube-api-access-ps6g8\") pod \"dnsmasq-dns-56df8fb6b7-lmc2p\" (UID: \"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lmc2p" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.494629 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-config\") pod \"dnsmasq-dns-56df8fb6b7-lmc2p\" (UID: \"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lmc2p" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.494648 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-lmc2p\" (UID: \"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lmc2p" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.494706 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-lmc2p\" (UID: \"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lmc2p" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.494724 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-lmc2p\" (UID: \"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lmc2p" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.597469 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-lmc2p\" (UID: \"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lmc2p" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.597515 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-lmc2p\" (UID: \"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lmc2p" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.597595 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-lmc2p\" (UID: \"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lmc2p" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.597639 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps6g8\" (UniqueName: \"kubernetes.io/projected/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-kube-api-access-ps6g8\") pod \"dnsmasq-dns-56df8fb6b7-lmc2p\" (UID: \"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lmc2p" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.597669 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-config\") pod \"dnsmasq-dns-56df8fb6b7-lmc2p\" (UID: \"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lmc2p" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.597686 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-lmc2p\" (UID: \"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lmc2p" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.598629 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-lmc2p\" (UID: \"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lmc2p" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.598988 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-lmc2p\" (UID: \"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lmc2p" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.599645 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-config\") pod \"dnsmasq-dns-56df8fb6b7-lmc2p\" (UID: \"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lmc2p" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.599736 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-lmc2p\" (UID: \"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lmc2p" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.600498 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-lmc2p\" (UID: \"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lmc2p" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.618414 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.627845 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps6g8\" (UniqueName: \"kubernetes.io/projected/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-kube-api-access-ps6g8\") pod \"dnsmasq-dns-56df8fb6b7-lmc2p\" (UID: \"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lmc2p" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.658794 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-lmc2p" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.680482 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.699329 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\") pod \"bcfb77bd-5556-45cf-a559-90cd9467d18d\" (UID: \"bcfb77bd-5556-45cf-a559-90cd9467d18d\") " Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.699523 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcfb77bd-5556-45cf-a559-90cd9467d18d-config-data\") pod \"bcfb77bd-5556-45cf-a559-90cd9467d18d\" (UID: \"bcfb77bd-5556-45cf-a559-90cd9467d18d\") " Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.699594 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcfb77bd-5556-45cf-a559-90cd9467d18d-scripts\") pod \"bcfb77bd-5556-45cf-a559-90cd9467d18d\" (UID: \"bcfb77bd-5556-45cf-a559-90cd9467d18d\") " Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.699663 4910 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bcfb77bd-5556-45cf-a559-90cd9467d18d-httpd-run\") pod \"bcfb77bd-5556-45cf-a559-90cd9467d18d\" (UID: \"bcfb77bd-5556-45cf-a559-90cd9467d18d\") " Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.699696 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bcfb77bd-5556-45cf-a559-90cd9467d18d-logs\") pod \"bcfb77bd-5556-45cf-a559-90cd9467d18d\" (UID: \"bcfb77bd-5556-45cf-a559-90cd9467d18d\") " Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.700209 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcfb77bd-5556-45cf-a559-90cd9467d18d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bcfb77bd-5556-45cf-a559-90cd9467d18d" (UID: "bcfb77bd-5556-45cf-a559-90cd9467d18d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.700262 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qszqb\" (UniqueName: \"kubernetes.io/projected/bcfb77bd-5556-45cf-a559-90cd9467d18d-kube-api-access-qszqb\") pod \"bcfb77bd-5556-45cf-a559-90cd9467d18d\" (UID: \"bcfb77bd-5556-45cf-a559-90cd9467d18d\") " Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.700319 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcfb77bd-5556-45cf-a559-90cd9467d18d-combined-ca-bundle\") pod \"bcfb77bd-5556-45cf-a559-90cd9467d18d\" (UID: \"bcfb77bd-5556-45cf-a559-90cd9467d18d\") " Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.700649 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcfb77bd-5556-45cf-a559-90cd9467d18d-logs" (OuterVolumeSpecName: "logs") pod 
"bcfb77bd-5556-45cf-a559-90cd9467d18d" (UID: "bcfb77bd-5556-45cf-a559-90cd9467d18d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.701841 4910 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bcfb77bd-5556-45cf-a559-90cd9467d18d-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.701861 4910 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bcfb77bd-5556-45cf-a559-90cd9467d18d-logs\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.713544 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcfb77bd-5556-45cf-a559-90cd9467d18d-kube-api-access-qszqb" (OuterVolumeSpecName: "kube-api-access-qszqb") pod "bcfb77bd-5556-45cf-a559-90cd9467d18d" (UID: "bcfb77bd-5556-45cf-a559-90cd9467d18d"). InnerVolumeSpecName "kube-api-access-qszqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.738912 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcfb77bd-5556-45cf-a559-90cd9467d18d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bcfb77bd-5556-45cf-a559-90cd9467d18d" (UID: "bcfb77bd-5556-45cf-a559-90cd9467d18d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.740572 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcfb77bd-5556-45cf-a559-90cd9467d18d-scripts" (OuterVolumeSpecName: "scripts") pod "bcfb77bd-5556-45cf-a559-90cd9467d18d" (UID: "bcfb77bd-5556-45cf-a559-90cd9467d18d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.775034 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0" (OuterVolumeSpecName: "glance") pod "bcfb77bd-5556-45cf-a559-90cd9467d18d" (UID: "bcfb77bd-5556-45cf-a559-90cd9467d18d"). InnerVolumeSpecName "pvc-2ef62248-3f7a-4c99-851b-abb253e36db0". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.798633 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcfb77bd-5556-45cf-a559-90cd9467d18d-config-data" (OuterVolumeSpecName: "config-data") pod "bcfb77bd-5556-45cf-a559-90cd9467d18d" (UID: "bcfb77bd-5556-45cf-a559-90cd9467d18d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.808760 4910 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\") on node \"crc\" " Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.810147 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcfb77bd-5556-45cf-a559-90cd9467d18d-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.810322 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcfb77bd-5556-45cf-a559-90cd9467d18d-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.810349 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qszqb\" (UniqueName: 
\"kubernetes.io/projected/bcfb77bd-5556-45cf-a559-90cd9467d18d-kube-api-access-qszqb\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.810360 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcfb77bd-5556-45cf-a559-90cd9467d18d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.864844 4910 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.865133 4910 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2ef62248-3f7a-4c99-851b-abb253e36db0" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0") on node "crc" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.912098 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 22:16:53 crc kubenswrapper[4910]: I0226 22:16:53.917214 4910 reconciler_common.go:293] "Volume detached for volume \"pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.025470 4910 generic.go:334] "Generic (PLEG): container finished" podID="bcfb77bd-5556-45cf-a559-90cd9467d18d" containerID="d16ad1a1179a31c744e1cd95acf28d87bce77cf36a0f9b96aba01768db6a42e2" exitCode=143 Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.025544 4910 generic.go:334] "Generic (PLEG): container finished" podID="bcfb77bd-5556-45cf-a559-90cd9467d18d" containerID="68620488bcbdac49f250cb3622234ea718a02a33a24a5f60adfa64f62bffe887" exitCode=143 Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.025760 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.026494 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bcfb77bd-5556-45cf-a559-90cd9467d18d","Type":"ContainerDied","Data":"d16ad1a1179a31c744e1cd95acf28d87bce77cf36a0f9b96aba01768db6a42e2"} Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.026541 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bcfb77bd-5556-45cf-a559-90cd9467d18d","Type":"ContainerDied","Data":"68620488bcbdac49f250cb3622234ea718a02a33a24a5f60adfa64f62bffe887"} Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.026553 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bcfb77bd-5556-45cf-a559-90cd9467d18d","Type":"ContainerDied","Data":"234a6943aaa35b51fd3dbff4c41fd1fdba5cc64857c1753271769be479b28055"} Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.026568 4910 scope.go:117] "RemoveContainer" containerID="d16ad1a1179a31c744e1cd95acf28d87bce77cf36a0f9b96aba01768db6a42e2" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.027518 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4eda553d-35a9-4df7-a9f0-984c213a2263-scripts\") pod \"4eda553d-35a9-4df7-a9f0-984c213a2263\" (UID: \"4eda553d-35a9-4df7-a9f0-984c213a2263\") " Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.027570 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knwbs\" (UniqueName: \"kubernetes.io/projected/4eda553d-35a9-4df7-a9f0-984c213a2263-kube-api-access-knwbs\") pod \"4eda553d-35a9-4df7-a9f0-984c213a2263\" (UID: \"4eda553d-35a9-4df7-a9f0-984c213a2263\") " Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.027603 4910 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eda553d-35a9-4df7-a9f0-984c213a2263-combined-ca-bundle\") pod \"4eda553d-35a9-4df7-a9f0-984c213a2263\" (UID: \"4eda553d-35a9-4df7-a9f0-984c213a2263\") " Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.027757 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e\") pod \"4eda553d-35a9-4df7-a9f0-984c213a2263\" (UID: \"4eda553d-35a9-4df7-a9f0-984c213a2263\") " Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.027795 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4eda553d-35a9-4df7-a9f0-984c213a2263-httpd-run\") pod \"4eda553d-35a9-4df7-a9f0-984c213a2263\" (UID: \"4eda553d-35a9-4df7-a9f0-984c213a2263\") " Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.027813 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eda553d-35a9-4df7-a9f0-984c213a2263-logs\") pod \"4eda553d-35a9-4df7-a9f0-984c213a2263\" (UID: \"4eda553d-35a9-4df7-a9f0-984c213a2263\") " Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.027845 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eda553d-35a9-4df7-a9f0-984c213a2263-config-data\") pod \"4eda553d-35a9-4df7-a9f0-984c213a2263\" (UID: \"4eda553d-35a9-4df7-a9f0-984c213a2263\") " Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.030014 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4eda553d-35a9-4df7-a9f0-984c213a2263-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4eda553d-35a9-4df7-a9f0-984c213a2263" (UID: "4eda553d-35a9-4df7-a9f0-984c213a2263"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.030212 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4eda553d-35a9-4df7-a9f0-984c213a2263-logs" (OuterVolumeSpecName: "logs") pod "4eda553d-35a9-4df7-a9f0-984c213a2263" (UID: "4eda553d-35a9-4df7-a9f0-984c213a2263"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.041669 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eda553d-35a9-4df7-a9f0-984c213a2263-kube-api-access-knwbs" (OuterVolumeSpecName: "kube-api-access-knwbs") pod "4eda553d-35a9-4df7-a9f0-984c213a2263" (UID: "4eda553d-35a9-4df7-a9f0-984c213a2263"). InnerVolumeSpecName "kube-api-access-knwbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.042075 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eda553d-35a9-4df7-a9f0-984c213a2263-scripts" (OuterVolumeSpecName: "scripts") pod "4eda553d-35a9-4df7-a9f0-984c213a2263" (UID: "4eda553d-35a9-4df7-a9f0-984c213a2263"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.042609 4910 generic.go:334] "Generic (PLEG): container finished" podID="4eda553d-35a9-4df7-a9f0-984c213a2263" containerID="974b70dcec426d3c956dbc51a4732a65351ea0d36ea18c730568840c9e7fbd80" exitCode=0 Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.042639 4910 generic.go:334] "Generic (PLEG): container finished" podID="4eda553d-35a9-4df7-a9f0-984c213a2263" containerID="45d53272197f3ee08c1edc053cfbb4f4e18ddb04b7c9db067c0d6acd0006e981" exitCode=143 Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.043699 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.044751 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4eda553d-35a9-4df7-a9f0-984c213a2263","Type":"ContainerDied","Data":"974b70dcec426d3c956dbc51a4732a65351ea0d36ea18c730568840c9e7fbd80"} Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.044828 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4eda553d-35a9-4df7-a9f0-984c213a2263","Type":"ContainerDied","Data":"45d53272197f3ee08c1edc053cfbb4f4e18ddb04b7c9db067c0d6acd0006e981"} Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.044845 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4eda553d-35a9-4df7-a9f0-984c213a2263","Type":"ContainerDied","Data":"f19be5fa7156e70b3e5108705e81e152e6407b3615ad322a3d5613d732a3e15c"} Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.070548 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e" (OuterVolumeSpecName: "glance") pod 
"4eda553d-35a9-4df7-a9f0-984c213a2263" (UID: "4eda553d-35a9-4df7-a9f0-984c213a2263"). InnerVolumeSpecName "pvc-07357e7e-76cc-49df-b0f0-87819efba45e". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.080657 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eda553d-35a9-4df7-a9f0-984c213a2263-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4eda553d-35a9-4df7-a9f0-984c213a2263" (UID: "4eda553d-35a9-4df7-a9f0-984c213a2263"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.120973 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.121460 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eda553d-35a9-4df7-a9f0-984c213a2263-config-data" (OuterVolumeSpecName: "config-data") pod "4eda553d-35a9-4df7-a9f0-984c213a2263" (UID: "4eda553d-35a9-4df7-a9f0-984c213a2263"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.133505 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4eda553d-35a9-4df7-a9f0-984c213a2263-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.133535 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knwbs\" (UniqueName: \"kubernetes.io/projected/4eda553d-35a9-4df7-a9f0-984c213a2263-kube-api-access-knwbs\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.133546 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eda553d-35a9-4df7-a9f0-984c213a2263-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.133567 4910 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-07357e7e-76cc-49df-b0f0-87819efba45e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e\") on node \"crc\" " Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.133579 4910 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4eda553d-35a9-4df7-a9f0-984c213a2263-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.133590 4910 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eda553d-35a9-4df7-a9f0-984c213a2263-logs\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.133598 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eda553d-35a9-4df7-a9f0-984c213a2263-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 
22:16:54.139422 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.139678 4910 scope.go:117] "RemoveContainer" containerID="68620488bcbdac49f250cb3622234ea718a02a33a24a5f60adfa64f62bffe887" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.172449 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 22:16:54 crc kubenswrapper[4910]: E0226 22:16:54.172925 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eda553d-35a9-4df7-a9f0-984c213a2263" containerName="glance-httpd" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.172941 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eda553d-35a9-4df7-a9f0-984c213a2263" containerName="glance-httpd" Feb 26 22:16:54 crc kubenswrapper[4910]: E0226 22:16:54.172953 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcfb77bd-5556-45cf-a559-90cd9467d18d" containerName="glance-log" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.172959 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcfb77bd-5556-45cf-a559-90cd9467d18d" containerName="glance-log" Feb 26 22:16:54 crc kubenswrapper[4910]: E0226 22:16:54.172979 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eda553d-35a9-4df7-a9f0-984c213a2263" containerName="glance-log" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.172989 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eda553d-35a9-4df7-a9f0-984c213a2263" containerName="glance-log" Feb 26 22:16:54 crc kubenswrapper[4910]: E0226 22:16:54.173002 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcfb77bd-5556-45cf-a559-90cd9467d18d" containerName="glance-httpd" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.173009 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcfb77bd-5556-45cf-a559-90cd9467d18d" 
containerName="glance-httpd" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.173805 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcfb77bd-5556-45cf-a559-90cd9467d18d" containerName="glance-log" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.173830 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcfb77bd-5556-45cf-a559-90cd9467d18d" containerName="glance-httpd" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.173842 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eda553d-35a9-4df7-a9f0-984c213a2263" containerName="glance-log" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.173873 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eda553d-35a9-4df7-a9f0-984c213a2263" containerName="glance-httpd" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.174981 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.179217 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.184509 4910 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.184640 4910 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-07357e7e-76cc-49df-b0f0-87819efba45e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e") on node "crc" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.190432 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.198148 4910 scope.go:117] "RemoveContainer" containerID="d16ad1a1179a31c744e1cd95acf28d87bce77cf36a0f9b96aba01768db6a42e2" Feb 26 22:16:54 crc kubenswrapper[4910]: E0226 22:16:54.213548 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d16ad1a1179a31c744e1cd95acf28d87bce77cf36a0f9b96aba01768db6a42e2\": container with ID starting with d16ad1a1179a31c744e1cd95acf28d87bce77cf36a0f9b96aba01768db6a42e2 not found: ID does not exist" containerID="d16ad1a1179a31c744e1cd95acf28d87bce77cf36a0f9b96aba01768db6a42e2" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.213592 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d16ad1a1179a31c744e1cd95acf28d87bce77cf36a0f9b96aba01768db6a42e2"} err="failed to get container status \"d16ad1a1179a31c744e1cd95acf28d87bce77cf36a0f9b96aba01768db6a42e2\": rpc error: code = NotFound desc = could not find container \"d16ad1a1179a31c744e1cd95acf28d87bce77cf36a0f9b96aba01768db6a42e2\": container with ID starting with d16ad1a1179a31c744e1cd95acf28d87bce77cf36a0f9b96aba01768db6a42e2 not found: ID does not exist" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.213618 4910 scope.go:117] "RemoveContainer" containerID="68620488bcbdac49f250cb3622234ea718a02a33a24a5f60adfa64f62bffe887" Feb 26 22:16:54 crc kubenswrapper[4910]: E0226 22:16:54.215109 4910 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"68620488bcbdac49f250cb3622234ea718a02a33a24a5f60adfa64f62bffe887\": container with ID starting with 68620488bcbdac49f250cb3622234ea718a02a33a24a5f60adfa64f62bffe887 not found: ID does not exist" containerID="68620488bcbdac49f250cb3622234ea718a02a33a24a5f60adfa64f62bffe887" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.215139 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68620488bcbdac49f250cb3622234ea718a02a33a24a5f60adfa64f62bffe887"} err="failed to get container status \"68620488bcbdac49f250cb3622234ea718a02a33a24a5f60adfa64f62bffe887\": rpc error: code = NotFound desc = could not find container \"68620488bcbdac49f250cb3622234ea718a02a33a24a5f60adfa64f62bffe887\": container with ID starting with 68620488bcbdac49f250cb3622234ea718a02a33a24a5f60adfa64f62bffe887 not found: ID does not exist" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.215203 4910 scope.go:117] "RemoveContainer" containerID="d16ad1a1179a31c744e1cd95acf28d87bce77cf36a0f9b96aba01768db6a42e2" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.216449 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d16ad1a1179a31c744e1cd95acf28d87bce77cf36a0f9b96aba01768db6a42e2"} err="failed to get container status \"d16ad1a1179a31c744e1cd95acf28d87bce77cf36a0f9b96aba01768db6a42e2\": rpc error: code = NotFound desc = could not find container \"d16ad1a1179a31c744e1cd95acf28d87bce77cf36a0f9b96aba01768db6a42e2\": container with ID starting with d16ad1a1179a31c744e1cd95acf28d87bce77cf36a0f9b96aba01768db6a42e2 not found: ID does not exist" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.216470 4910 scope.go:117] "RemoveContainer" containerID="68620488bcbdac49f250cb3622234ea718a02a33a24a5f60adfa64f62bffe887" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.216757 4910 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68620488bcbdac49f250cb3622234ea718a02a33a24a5f60adfa64f62bffe887"} err="failed to get container status \"68620488bcbdac49f250cb3622234ea718a02a33a24a5f60adfa64f62bffe887\": rpc error: code = NotFound desc = could not find container \"68620488bcbdac49f250cb3622234ea718a02a33a24a5f60adfa64f62bffe887\": container with ID starting with 68620488bcbdac49f250cb3622234ea718a02a33a24a5f60adfa64f62bffe887 not found: ID does not exist" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.216770 4910 scope.go:117] "RemoveContainer" containerID="974b70dcec426d3c956dbc51a4732a65351ea0d36ea18c730568840c9e7fbd80" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.234880 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5wn7\" (UniqueName: \"kubernetes.io/projected/71912b57-eb09-4973-aca5-4aec7d7d8fb5-kube-api-access-w5wn7\") pod \"glance-default-internal-api-0\" (UID: \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.235206 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\") pod \"glance-default-internal-api-0\" (UID: \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.235295 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71912b57-eb09-4973-aca5-4aec7d7d8fb5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 
22:16:54.235393 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71912b57-eb09-4973-aca5-4aec7d7d8fb5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.235595 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71912b57-eb09-4973-aca5-4aec7d7d8fb5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.235691 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71912b57-eb09-4973-aca5-4aec7d7d8fb5-logs\") pod \"glance-default-internal-api-0\" (UID: \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.235795 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/71912b57-eb09-4973-aca5-4aec7d7d8fb5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.235904 4910 reconciler_common.go:293] "Volume detached for volume \"pvc-07357e7e-76cc-49df-b0f0-87819efba45e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.257540 4910 scope.go:117] "RemoveContainer" 
containerID="45d53272197f3ee08c1edc053cfbb4f4e18ddb04b7c9db067c0d6acd0006e981" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.285864 4910 scope.go:117] "RemoveContainer" containerID="974b70dcec426d3c956dbc51a4732a65351ea0d36ea18c730568840c9e7fbd80" Feb 26 22:16:54 crc kubenswrapper[4910]: E0226 22:16:54.290334 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"974b70dcec426d3c956dbc51a4732a65351ea0d36ea18c730568840c9e7fbd80\": container with ID starting with 974b70dcec426d3c956dbc51a4732a65351ea0d36ea18c730568840c9e7fbd80 not found: ID does not exist" containerID="974b70dcec426d3c956dbc51a4732a65351ea0d36ea18c730568840c9e7fbd80" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.290374 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"974b70dcec426d3c956dbc51a4732a65351ea0d36ea18c730568840c9e7fbd80"} err="failed to get container status \"974b70dcec426d3c956dbc51a4732a65351ea0d36ea18c730568840c9e7fbd80\": rpc error: code = NotFound desc = could not find container \"974b70dcec426d3c956dbc51a4732a65351ea0d36ea18c730568840c9e7fbd80\": container with ID starting with 974b70dcec426d3c956dbc51a4732a65351ea0d36ea18c730568840c9e7fbd80 not found: ID does not exist" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.290402 4910 scope.go:117] "RemoveContainer" containerID="45d53272197f3ee08c1edc053cfbb4f4e18ddb04b7c9db067c0d6acd0006e981" Feb 26 22:16:54 crc kubenswrapper[4910]: E0226 22:16:54.291190 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45d53272197f3ee08c1edc053cfbb4f4e18ddb04b7c9db067c0d6acd0006e981\": container with ID starting with 45d53272197f3ee08c1edc053cfbb4f4e18ddb04b7c9db067c0d6acd0006e981 not found: ID does not exist" containerID="45d53272197f3ee08c1edc053cfbb4f4e18ddb04b7c9db067c0d6acd0006e981" Feb 26 22:16:54 crc 
kubenswrapper[4910]: I0226 22:16:54.291211 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45d53272197f3ee08c1edc053cfbb4f4e18ddb04b7c9db067c0d6acd0006e981"} err="failed to get container status \"45d53272197f3ee08c1edc053cfbb4f4e18ddb04b7c9db067c0d6acd0006e981\": rpc error: code = NotFound desc = could not find container \"45d53272197f3ee08c1edc053cfbb4f4e18ddb04b7c9db067c0d6acd0006e981\": container with ID starting with 45d53272197f3ee08c1edc053cfbb4f4e18ddb04b7c9db067c0d6acd0006e981 not found: ID does not exist" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.291224 4910 scope.go:117] "RemoveContainer" containerID="974b70dcec426d3c956dbc51a4732a65351ea0d36ea18c730568840c9e7fbd80" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.293892 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"974b70dcec426d3c956dbc51a4732a65351ea0d36ea18c730568840c9e7fbd80"} err="failed to get container status \"974b70dcec426d3c956dbc51a4732a65351ea0d36ea18c730568840c9e7fbd80\": rpc error: code = NotFound desc = could not find container \"974b70dcec426d3c956dbc51a4732a65351ea0d36ea18c730568840c9e7fbd80\": container with ID starting with 974b70dcec426d3c956dbc51a4732a65351ea0d36ea18c730568840c9e7fbd80 not found: ID does not exist" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.293912 4910 scope.go:117] "RemoveContainer" containerID="45d53272197f3ee08c1edc053cfbb4f4e18ddb04b7c9db067c0d6acd0006e981" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.296224 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45d53272197f3ee08c1edc053cfbb4f4e18ddb04b7c9db067c0d6acd0006e981"} err="failed to get container status \"45d53272197f3ee08c1edc053cfbb4f4e18ddb04b7c9db067c0d6acd0006e981\": rpc error: code = NotFound desc = could not find container \"45d53272197f3ee08c1edc053cfbb4f4e18ddb04b7c9db067c0d6acd0006e981\": container 
with ID starting with 45d53272197f3ee08c1edc053cfbb4f4e18ddb04b7c9db067c0d6acd0006e981 not found: ID does not exist" Feb 26 22:16:54 crc kubenswrapper[4910]: E0226 22:16:54.330414 4910 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9559b53_2f73_4123_8bd9_0848adca6e34.slice/crio-conmon-c5c8fc95e5d919a36830541cfe3dbe1a28cd76416cc1c384d7175ddc6a0c0653.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcfb77bd_5556_45cf_a559_90cd9467d18d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcfb77bd_5556_45cf_a559_90cd9467d18d.slice/crio-234a6943aaa35b51fd3dbff4c41fd1fdba5cc64857c1753271769be479b28055\": RecentStats: unable to find data in memory cache]" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.340240 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71912b57-eb09-4973-aca5-4aec7d7d8fb5-logs\") pod \"glance-default-internal-api-0\" (UID: \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.340371 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/71912b57-eb09-4973-aca5-4aec7d7d8fb5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.340478 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5wn7\" (UniqueName: \"kubernetes.io/projected/71912b57-eb09-4973-aca5-4aec7d7d8fb5-kube-api-access-w5wn7\") pod \"glance-default-internal-api-0\" 
(UID: \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.340663 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\") pod \"glance-default-internal-api-0\" (UID: \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.340720 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71912b57-eb09-4973-aca5-4aec7d7d8fb5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.340763 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71912b57-eb09-4973-aca5-4aec7d7d8fb5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.340800 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71912b57-eb09-4973-aca5-4aec7d7d8fb5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.341032 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/71912b57-eb09-4973-aca5-4aec7d7d8fb5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"71912b57-eb09-4973-aca5-4aec7d7d8fb5\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.343015 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71912b57-eb09-4973-aca5-4aec7d7d8fb5-logs\") pod \"glance-default-internal-api-0\" (UID: \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.343461 4910 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.349521 4910 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\") pod \"glance-default-internal-api-0\" (UID: \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a1fe3b589a0972626c208f995164ae337879055052c9895e085608499baca4b3/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.346570 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71912b57-eb09-4973-aca5-4aec7d7d8fb5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.347181 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71912b57-eb09-4973-aca5-4aec7d7d8fb5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\") " pod="openstack/glance-default-internal-api-0" Feb 26 
22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.346786 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71912b57-eb09-4973-aca5-4aec7d7d8fb5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.360269 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5wn7\" (UniqueName: \"kubernetes.io/projected/71912b57-eb09-4973-aca5-4aec7d7d8fb5-kube-api-access-w5wn7\") pod \"glance-default-internal-api-0\" (UID: \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.395463 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\") pod \"glance-default-internal-api-0\" (UID: \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.406172 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.423527 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.433278 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.435633 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.439488 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.440599 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.500413 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-lmc2p"] Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.514321 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.548822 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxm8p\" (UniqueName: \"kubernetes.io/projected/ed81c10e-0c5e-4483-b210-2d6df693f4f8-kube-api-access-kxm8p\") pod \"glance-default-external-api-0\" (UID: \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.549151 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed81c10e-0c5e-4483-b210-2d6df693f4f8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.550502 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-07357e7e-76cc-49df-b0f0-87819efba45e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e\") pod \"glance-default-external-api-0\" (UID: 
\"ed81c10e-0c5e-4483-b210-2d6df693f4f8\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.550606 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed81c10e-0c5e-4483-b210-2d6df693f4f8-config-data\") pod \"glance-default-external-api-0\" (UID: \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.550685 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed81c10e-0c5e-4483-b210-2d6df693f4f8-scripts\") pod \"glance-default-external-api-0\" (UID: \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.550744 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ed81c10e-0c5e-4483-b210-2d6df693f4f8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.550941 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed81c10e-0c5e-4483-b210-2d6df693f4f8-logs\") pod \"glance-default-external-api-0\" (UID: \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.653246 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxm8p\" (UniqueName: \"kubernetes.io/projected/ed81c10e-0c5e-4483-b210-2d6df693f4f8-kube-api-access-kxm8p\") pod \"glance-default-external-api-0\" (UID: 
\"ed81c10e-0c5e-4483-b210-2d6df693f4f8\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.653307 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed81c10e-0c5e-4483-b210-2d6df693f4f8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.653340 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-07357e7e-76cc-49df-b0f0-87819efba45e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e\") pod \"glance-default-external-api-0\" (UID: \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.653368 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed81c10e-0c5e-4483-b210-2d6df693f4f8-config-data\") pod \"glance-default-external-api-0\" (UID: \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.653403 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed81c10e-0c5e-4483-b210-2d6df693f4f8-scripts\") pod \"glance-default-external-api-0\" (UID: \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.653425 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ed81c10e-0c5e-4483-b210-2d6df693f4f8-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"ed81c10e-0c5e-4483-b210-2d6df693f4f8\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.653462 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed81c10e-0c5e-4483-b210-2d6df693f4f8-logs\") pod \"glance-default-external-api-0\" (UID: \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.653957 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed81c10e-0c5e-4483-b210-2d6df693f4f8-logs\") pod \"glance-default-external-api-0\" (UID: \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.655464 4910 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.655502 4910 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-07357e7e-76cc-49df-b0f0-87819efba45e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e\") pod \"glance-default-external-api-0\" (UID: \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/94049027790e22a18bf8e430a446734369cbf41eb4d29ad3f70f496aca7abf57/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.656075 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ed81c10e-0c5e-4483-b210-2d6df693f4f8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.658102 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed81c10e-0c5e-4483-b210-2d6df693f4f8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.659020 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed81c10e-0c5e-4483-b210-2d6df693f4f8-config-data\") pod \"glance-default-external-api-0\" (UID: \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.659899 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed81c10e-0c5e-4483-b210-2d6df693f4f8-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.674306 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxm8p\" (UniqueName: \"kubernetes.io/projected/ed81c10e-0c5e-4483-b210-2d6df693f4f8-kube-api-access-kxm8p\") pod \"glance-default-external-api-0\" (UID: \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.694614 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-07357e7e-76cc-49df-b0f0-87819efba45e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e\") pod \"glance-default-external-api-0\" (UID: \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\") " pod="openstack/glance-default-external-api-0" Feb 26 22:16:54 crc kubenswrapper[4910]: I0226 22:16:54.837656 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 22:16:55 crc kubenswrapper[4910]: I0226 22:16:55.069564 4910 generic.go:334] "Generic (PLEG): container finished" podID="d9559b53-2f73-4123-8bd9-0848adca6e34" containerID="c5c8fc95e5d919a36830541cfe3dbe1a28cd76416cc1c384d7175ddc6a0c0653" exitCode=0 Feb 26 22:16:55 crc kubenswrapper[4910]: I0226 22:16:55.069648 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xkqnq" event={"ID":"d9559b53-2f73-4123-8bd9-0848adca6e34","Type":"ContainerDied","Data":"c5c8fc95e5d919a36830541cfe3dbe1a28cd76416cc1c384d7175ddc6a0c0653"} Feb 26 22:16:55 crc kubenswrapper[4910]: I0226 22:16:55.084906 4910 generic.go:334] "Generic (PLEG): container finished" podID="5ac37f44-e173-4927-b8ea-44741aa983c0" containerID="345f90ddbaa85b12804464a0a4d6fa5140937eeccda0d328ea3f03fc725133dc" exitCode=0 Feb 26 22:16:55 crc kubenswrapper[4910]: I0226 22:16:55.085027 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5ac37f44-e173-4927-b8ea-44741aa983c0","Type":"ContainerDied","Data":"345f90ddbaa85b12804464a0a4d6fa5140937eeccda0d328ea3f03fc725133dc"} Feb 26 22:16:55 crc kubenswrapper[4910]: I0226 22:16:55.085202 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56798b757f-2gk95" podUID="86997c3a-d290-479d-8a29-aeb4fd60568d" containerName="dnsmasq-dns" containerID="cri-o://549457cc5e302a6426937e9b7616fa882281cd65e6883f98865915f81aae152c" gracePeriod=10 Feb 26 22:16:55 crc kubenswrapper[4910]: I0226 22:16:55.487335 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 22:16:55 crc kubenswrapper[4910]: I0226 22:16:55.540955 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 22:16:55 crc kubenswrapper[4910]: I0226 22:16:55.727650 4910 patch_prober.go:28] interesting 
pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:16:55 crc kubenswrapper[4910]: I0226 22:16:55.727720 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:16:55 crc kubenswrapper[4910]: I0226 22:16:55.925900 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4eda553d-35a9-4df7-a9f0-984c213a2263" path="/var/lib/kubelet/pods/4eda553d-35a9-4df7-a9f0-984c213a2263/volumes" Feb 26 22:16:55 crc kubenswrapper[4910]: I0226 22:16:55.926767 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcfb77bd-5556-45cf-a559-90cd9467d18d" path="/var/lib/kubelet/pods/bcfb77bd-5556-45cf-a559-90cd9467d18d/volumes" Feb 26 22:16:56 crc kubenswrapper[4910]: I0226 22:16:56.103760 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-2gk95" event={"ID":"86997c3a-d290-479d-8a29-aeb4fd60568d","Type":"ContainerDied","Data":"549457cc5e302a6426937e9b7616fa882281cd65e6883f98865915f81aae152c"} Feb 26 22:16:56 crc kubenswrapper[4910]: I0226 22:16:56.103588 4910 generic.go:334] "Generic (PLEG): container finished" podID="86997c3a-d290-479d-8a29-aeb4fd60568d" containerID="549457cc5e302a6426937e9b7616fa882281cd65e6883f98865915f81aae152c" exitCode=0 Feb 26 22:16:57 crc kubenswrapper[4910]: I0226 22:16:57.286651 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xkqnq" Feb 26 22:16:57 crc kubenswrapper[4910]: I0226 22:16:57.332415 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-config-data\") pod \"d9559b53-2f73-4123-8bd9-0848adca6e34\" (UID: \"d9559b53-2f73-4123-8bd9-0848adca6e34\") " Feb 26 22:16:57 crc kubenswrapper[4910]: I0226 22:16:57.332475 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-combined-ca-bundle\") pod \"d9559b53-2f73-4123-8bd9-0848adca6e34\" (UID: \"d9559b53-2f73-4123-8bd9-0848adca6e34\") " Feb 26 22:16:57 crc kubenswrapper[4910]: I0226 22:16:57.332755 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-fernet-keys\") pod \"d9559b53-2f73-4123-8bd9-0848adca6e34\" (UID: \"d9559b53-2f73-4123-8bd9-0848adca6e34\") " Feb 26 22:16:57 crc kubenswrapper[4910]: I0226 22:16:57.332799 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-scripts\") pod \"d9559b53-2f73-4123-8bd9-0848adca6e34\" (UID: \"d9559b53-2f73-4123-8bd9-0848adca6e34\") " Feb 26 22:16:57 crc kubenswrapper[4910]: I0226 22:16:57.333571 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nchq4\" (UniqueName: \"kubernetes.io/projected/d9559b53-2f73-4123-8bd9-0848adca6e34-kube-api-access-nchq4\") pod \"d9559b53-2f73-4123-8bd9-0848adca6e34\" (UID: \"d9559b53-2f73-4123-8bd9-0848adca6e34\") " Feb 26 22:16:57 crc kubenswrapper[4910]: I0226 22:16:57.333641 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-credential-keys\") pod \"d9559b53-2f73-4123-8bd9-0848adca6e34\" (UID: \"d9559b53-2f73-4123-8bd9-0848adca6e34\") " Feb 26 22:16:57 crc kubenswrapper[4910]: I0226 22:16:57.339301 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d9559b53-2f73-4123-8bd9-0848adca6e34" (UID: "d9559b53-2f73-4123-8bd9-0848adca6e34"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:16:57 crc kubenswrapper[4910]: I0226 22:16:57.339334 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-scripts" (OuterVolumeSpecName: "scripts") pod "d9559b53-2f73-4123-8bd9-0848adca6e34" (UID: "d9559b53-2f73-4123-8bd9-0848adca6e34"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:16:57 crc kubenswrapper[4910]: I0226 22:16:57.349995 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9559b53-2f73-4123-8bd9-0848adca6e34-kube-api-access-nchq4" (OuterVolumeSpecName: "kube-api-access-nchq4") pod "d9559b53-2f73-4123-8bd9-0848adca6e34" (UID: "d9559b53-2f73-4123-8bd9-0848adca6e34"). InnerVolumeSpecName "kube-api-access-nchq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:16:57 crc kubenswrapper[4910]: I0226 22:16:57.354281 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d9559b53-2f73-4123-8bd9-0848adca6e34" (UID: "d9559b53-2f73-4123-8bd9-0848adca6e34"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:16:57 crc kubenswrapper[4910]: I0226 22:16:57.378327 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-config-data" (OuterVolumeSpecName: "config-data") pod "d9559b53-2f73-4123-8bd9-0848adca6e34" (UID: "d9559b53-2f73-4123-8bd9-0848adca6e34"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:16:57 crc kubenswrapper[4910]: I0226 22:16:57.411243 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9559b53-2f73-4123-8bd9-0848adca6e34" (UID: "d9559b53-2f73-4123-8bd9-0848adca6e34"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:16:57 crc kubenswrapper[4910]: I0226 22:16:57.436999 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nchq4\" (UniqueName: \"kubernetes.io/projected/d9559b53-2f73-4123-8bd9-0848adca6e34-kube-api-access-nchq4\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:57 crc kubenswrapper[4910]: I0226 22:16:57.437039 4910 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:57 crc kubenswrapper[4910]: I0226 22:16:57.437050 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:57 crc kubenswrapper[4910]: I0226 22:16:57.437065 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 26 22:16:57 crc kubenswrapper[4910]: I0226 22:16:57.437076 4910 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:57 crc kubenswrapper[4910]: I0226 22:16:57.437103 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9559b53-2f73-4123-8bd9-0848adca6e34-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.122417 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-lmc2p" event={"ID":"bc719d44-8187-45f8-80e0-b4e3daa9b1eb","Type":"ContainerStarted","Data":"d9b1274da351bd5265413adbb4bba5d6caeb6155e6d6bf215bd892d274ebc320"} Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.124581 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xkqnq" event={"ID":"d9559b53-2f73-4123-8bd9-0848adca6e34","Type":"ContainerDied","Data":"2f7da60c5c65b5fc65ff6f486756c8fccc0a679eef4d452e76ce6c6ec2178e98"} Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.124617 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f7da60c5c65b5fc65ff6f486756c8fccc0a679eef4d452e76ce6c6ec2178e98" Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.124654 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xkqnq" Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.490178 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xkqnq"] Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.506708 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xkqnq"] Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.572490 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-cwz4m"] Feb 26 22:16:58 crc kubenswrapper[4910]: E0226 22:16:58.573635 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9559b53-2f73-4123-8bd9-0848adca6e34" containerName="keystone-bootstrap" Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.573749 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9559b53-2f73-4123-8bd9-0848adca6e34" containerName="keystone-bootstrap" Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.574052 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9559b53-2f73-4123-8bd9-0848adca6e34" containerName="keystone-bootstrap" Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.575144 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cwz4m" Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.579987 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.581188 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.581591 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-r29m2" Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.582562 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.583710 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cwz4m"] Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.773578 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-credential-keys\") pod \"keystone-bootstrap-cwz4m\" (UID: \"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\") " pod="openstack/keystone-bootstrap-cwz4m" Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.773693 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-fernet-keys\") pod \"keystone-bootstrap-cwz4m\" (UID: \"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\") " pod="openstack/keystone-bootstrap-cwz4m" Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.773740 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8tkp\" (UniqueName: \"kubernetes.io/projected/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-kube-api-access-m8tkp\") pod \"keystone-bootstrap-cwz4m\" 
(UID: \"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\") " pod="openstack/keystone-bootstrap-cwz4m" Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.774100 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-config-data\") pod \"keystone-bootstrap-cwz4m\" (UID: \"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\") " pod="openstack/keystone-bootstrap-cwz4m" Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.774272 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-scripts\") pod \"keystone-bootstrap-cwz4m\" (UID: \"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\") " pod="openstack/keystone-bootstrap-cwz4m" Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.774427 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-combined-ca-bundle\") pod \"keystone-bootstrap-cwz4m\" (UID: \"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\") " pod="openstack/keystone-bootstrap-cwz4m" Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.876455 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-credential-keys\") pod \"keystone-bootstrap-cwz4m\" (UID: \"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\") " pod="openstack/keystone-bootstrap-cwz4m" Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.876516 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8tkp\" (UniqueName: \"kubernetes.io/projected/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-kube-api-access-m8tkp\") pod \"keystone-bootstrap-cwz4m\" (UID: 
\"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\") " pod="openstack/keystone-bootstrap-cwz4m" Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.876534 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-fernet-keys\") pod \"keystone-bootstrap-cwz4m\" (UID: \"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\") " pod="openstack/keystone-bootstrap-cwz4m" Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.876617 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-config-data\") pod \"keystone-bootstrap-cwz4m\" (UID: \"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\") " pod="openstack/keystone-bootstrap-cwz4m" Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.876661 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-scripts\") pod \"keystone-bootstrap-cwz4m\" (UID: \"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\") " pod="openstack/keystone-bootstrap-cwz4m" Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.876712 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-combined-ca-bundle\") pod \"keystone-bootstrap-cwz4m\" (UID: \"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\") " pod="openstack/keystone-bootstrap-cwz4m" Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.881981 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-credential-keys\") pod \"keystone-bootstrap-cwz4m\" (UID: \"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\") " pod="openstack/keystone-bootstrap-cwz4m" Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 
22:16:58.883187 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-fernet-keys\") pod \"keystone-bootstrap-cwz4m\" (UID: \"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\") " pod="openstack/keystone-bootstrap-cwz4m" Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.884458 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-scripts\") pod \"keystone-bootstrap-cwz4m\" (UID: \"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\") " pod="openstack/keystone-bootstrap-cwz4m" Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.884807 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-config-data\") pod \"keystone-bootstrap-cwz4m\" (UID: \"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\") " pod="openstack/keystone-bootstrap-cwz4m" Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.903053 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8tkp\" (UniqueName: \"kubernetes.io/projected/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-kube-api-access-m8tkp\") pod \"keystone-bootstrap-cwz4m\" (UID: \"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\") " pod="openstack/keystone-bootstrap-cwz4m" Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.903260 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-combined-ca-bundle\") pod \"keystone-bootstrap-cwz4m\" (UID: \"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\") " pod="openstack/keystone-bootstrap-cwz4m" Feb 26 22:16:58 crc kubenswrapper[4910]: I0226 22:16:58.925200 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cwz4m" Feb 26 22:16:59 crc kubenswrapper[4910]: I0226 22:16:59.912695 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9559b53-2f73-4123-8bd9-0848adca6e34" path="/var/lib/kubelet/pods/d9559b53-2f73-4123-8bd9-0848adca6e34/volumes" Feb 26 22:17:02 crc kubenswrapper[4910]: I0226 22:17:02.403652 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-2gk95" Feb 26 22:17:02 crc kubenswrapper[4910]: I0226 22:17:02.552294 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86997c3a-d290-479d-8a29-aeb4fd60568d-config\") pod \"86997c3a-d290-479d-8a29-aeb4fd60568d\" (UID: \"86997c3a-d290-479d-8a29-aeb4fd60568d\") " Feb 26 22:17:02 crc kubenswrapper[4910]: I0226 22:17:02.552406 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gz6g\" (UniqueName: \"kubernetes.io/projected/86997c3a-d290-479d-8a29-aeb4fd60568d-kube-api-access-9gz6g\") pod \"86997c3a-d290-479d-8a29-aeb4fd60568d\" (UID: \"86997c3a-d290-479d-8a29-aeb4fd60568d\") " Feb 26 22:17:02 crc kubenswrapper[4910]: I0226 22:17:02.552558 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86997c3a-d290-479d-8a29-aeb4fd60568d-ovsdbserver-sb\") pod \"86997c3a-d290-479d-8a29-aeb4fd60568d\" (UID: \"86997c3a-d290-479d-8a29-aeb4fd60568d\") " Feb 26 22:17:02 crc kubenswrapper[4910]: I0226 22:17:02.552782 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86997c3a-d290-479d-8a29-aeb4fd60568d-dns-svc\") pod \"86997c3a-d290-479d-8a29-aeb4fd60568d\" (UID: \"86997c3a-d290-479d-8a29-aeb4fd60568d\") " Feb 26 22:17:02 crc kubenswrapper[4910]: I0226 22:17:02.552865 4910 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86997c3a-d290-479d-8a29-aeb4fd60568d-ovsdbserver-nb\") pod \"86997c3a-d290-479d-8a29-aeb4fd60568d\" (UID: \"86997c3a-d290-479d-8a29-aeb4fd60568d\") " Feb 26 22:17:02 crc kubenswrapper[4910]: I0226 22:17:02.558689 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86997c3a-d290-479d-8a29-aeb4fd60568d-kube-api-access-9gz6g" (OuterVolumeSpecName: "kube-api-access-9gz6g") pod "86997c3a-d290-479d-8a29-aeb4fd60568d" (UID: "86997c3a-d290-479d-8a29-aeb4fd60568d"). InnerVolumeSpecName "kube-api-access-9gz6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:17:02 crc kubenswrapper[4910]: I0226 22:17:02.604110 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86997c3a-d290-479d-8a29-aeb4fd60568d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "86997c3a-d290-479d-8a29-aeb4fd60568d" (UID: "86997c3a-d290-479d-8a29-aeb4fd60568d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:17:02 crc kubenswrapper[4910]: I0226 22:17:02.604470 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86997c3a-d290-479d-8a29-aeb4fd60568d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "86997c3a-d290-479d-8a29-aeb4fd60568d" (UID: "86997c3a-d290-479d-8a29-aeb4fd60568d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:17:02 crc kubenswrapper[4910]: I0226 22:17:02.605466 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86997c3a-d290-479d-8a29-aeb4fd60568d-config" (OuterVolumeSpecName: "config") pod "86997c3a-d290-479d-8a29-aeb4fd60568d" (UID: "86997c3a-d290-479d-8a29-aeb4fd60568d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:17:02 crc kubenswrapper[4910]: I0226 22:17:02.611851 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86997c3a-d290-479d-8a29-aeb4fd60568d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "86997c3a-d290-479d-8a29-aeb4fd60568d" (UID: "86997c3a-d290-479d-8a29-aeb4fd60568d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:17:02 crc kubenswrapper[4910]: I0226 22:17:02.656305 4910 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86997c3a-d290-479d-8a29-aeb4fd60568d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:02 crc kubenswrapper[4910]: I0226 22:17:02.656348 4910 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86997c3a-d290-479d-8a29-aeb4fd60568d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:02 crc kubenswrapper[4910]: I0226 22:17:02.656362 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86997c3a-d290-479d-8a29-aeb4fd60568d-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:02 crc kubenswrapper[4910]: I0226 22:17:02.656375 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gz6g\" (UniqueName: \"kubernetes.io/projected/86997c3a-d290-479d-8a29-aeb4fd60568d-kube-api-access-9gz6g\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:02 crc kubenswrapper[4910]: I0226 22:17:02.656386 4910 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86997c3a-d290-479d-8a29-aeb4fd60568d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:02 crc kubenswrapper[4910]: I0226 22:17:02.838679 4910 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56798b757f-2gk95" 
podUID="86997c3a-d290-479d-8a29-aeb4fd60568d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: i/o timeout" Feb 26 22:17:03 crc kubenswrapper[4910]: I0226 22:17:03.181031 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-2gk95" event={"ID":"86997c3a-d290-479d-8a29-aeb4fd60568d","Type":"ContainerDied","Data":"cdc6d6d7283eb7718a7990de08d617579b375f214fecff6aeb3a166c81dff8b3"} Feb 26 22:17:03 crc kubenswrapper[4910]: I0226 22:17:03.181084 4910 scope.go:117] "RemoveContainer" containerID="549457cc5e302a6426937e9b7616fa882281cd65e6883f98865915f81aae152c" Feb 26 22:17:03 crc kubenswrapper[4910]: I0226 22:17:03.181093 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-2gk95" Feb 26 22:17:03 crc kubenswrapper[4910]: I0226 22:17:03.217564 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-2gk95"] Feb 26 22:17:03 crc kubenswrapper[4910]: I0226 22:17:03.227608 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-2gk95"] Feb 26 22:17:03 crc kubenswrapper[4910]: I0226 22:17:03.919006 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86997c3a-d290-479d-8a29-aeb4fd60568d" path="/var/lib/kubelet/pods/86997c3a-d290-479d-8a29-aeb4fd60568d/volumes" Feb 26 22:17:09 crc kubenswrapper[4910]: I0226 22:17:09.299572 4910 generic.go:334] "Generic (PLEG): container finished" podID="0474fe2b-3094-4fd5-8f3a-1e9124acb82a" containerID="13844c402da3269ec4a9b09ca8a1b0440abcd52e0735173e624113f4963b8379" exitCode=0 Feb 26 22:17:09 crc kubenswrapper[4910]: I0226 22:17:09.299765 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6xmvf" event={"ID":"0474fe2b-3094-4fd5-8f3a-1e9124acb82a","Type":"ContainerDied","Data":"13844c402da3269ec4a9b09ca8a1b0440abcd52e0735173e624113f4963b8379"} Feb 26 22:17:13 crc 
kubenswrapper[4910]: E0226 22:17:13.272435 4910 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 26 22:17:13 crc kubenswrapper[4910]: E0226 22:17:13.273080 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7dccm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartP
olicy:nil,} start failed in pod barbican-db-sync-lxj26_openstack(eeb12d5b-0ec7-48d5-b1ef-9e378c030b75): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 22:17:13 crc kubenswrapper[4910]: E0226 22:17:13.274250 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-lxj26" podUID="eeb12d5b-0ec7-48d5-b1ef-9e378c030b75" Feb 26 22:17:13 crc kubenswrapper[4910]: E0226 22:17:13.340832 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-lxj26" podUID="eeb12d5b-0ec7-48d5-b1ef-9e378c030b75" Feb 26 22:17:13 crc kubenswrapper[4910]: I0226 22:17:13.806660 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-6xmvf" Feb 26 22:17:13 crc kubenswrapper[4910]: I0226 22:17:13.901866 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjgvm\" (UniqueName: \"kubernetes.io/projected/0474fe2b-3094-4fd5-8f3a-1e9124acb82a-kube-api-access-zjgvm\") pod \"0474fe2b-3094-4fd5-8f3a-1e9124acb82a\" (UID: \"0474fe2b-3094-4fd5-8f3a-1e9124acb82a\") " Feb 26 22:17:13 crc kubenswrapper[4910]: I0226 22:17:13.901986 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0474fe2b-3094-4fd5-8f3a-1e9124acb82a-combined-ca-bundle\") pod \"0474fe2b-3094-4fd5-8f3a-1e9124acb82a\" (UID: \"0474fe2b-3094-4fd5-8f3a-1e9124acb82a\") " Feb 26 22:17:13 crc kubenswrapper[4910]: I0226 22:17:13.902022 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0474fe2b-3094-4fd5-8f3a-1e9124acb82a-config\") pod \"0474fe2b-3094-4fd5-8f3a-1e9124acb82a\" (UID: \"0474fe2b-3094-4fd5-8f3a-1e9124acb82a\") " Feb 26 22:17:13 crc kubenswrapper[4910]: I0226 22:17:13.908034 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0474fe2b-3094-4fd5-8f3a-1e9124acb82a-kube-api-access-zjgvm" (OuterVolumeSpecName: "kube-api-access-zjgvm") pod "0474fe2b-3094-4fd5-8f3a-1e9124acb82a" (UID: "0474fe2b-3094-4fd5-8f3a-1e9124acb82a"). InnerVolumeSpecName "kube-api-access-zjgvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:17:13 crc kubenswrapper[4910]: I0226 22:17:13.929702 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0474fe2b-3094-4fd5-8f3a-1e9124acb82a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0474fe2b-3094-4fd5-8f3a-1e9124acb82a" (UID: "0474fe2b-3094-4fd5-8f3a-1e9124acb82a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:13 crc kubenswrapper[4910]: I0226 22:17:13.930591 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0474fe2b-3094-4fd5-8f3a-1e9124acb82a-config" (OuterVolumeSpecName: "config") pod "0474fe2b-3094-4fd5-8f3a-1e9124acb82a" (UID: "0474fe2b-3094-4fd5-8f3a-1e9124acb82a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:14 crc kubenswrapper[4910]: I0226 22:17:14.004443 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjgvm\" (UniqueName: \"kubernetes.io/projected/0474fe2b-3094-4fd5-8f3a-1e9124acb82a-kube-api-access-zjgvm\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:14 crc kubenswrapper[4910]: I0226 22:17:14.004477 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0474fe2b-3094-4fd5-8f3a-1e9124acb82a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:14 crc kubenswrapper[4910]: I0226 22:17:14.004491 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0474fe2b-3094-4fd5-8f3a-1e9124acb82a-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:14 crc kubenswrapper[4910]: I0226 22:17:14.252800 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 22:17:14 crc kubenswrapper[4910]: I0226 22:17:14.352329 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6xmvf" event={"ID":"0474fe2b-3094-4fd5-8f3a-1e9124acb82a","Type":"ContainerDied","Data":"c7a9e41eeddc6ead906750b1d93dffab8eec0636b3ff6fef26234ad2833f95e5"} Feb 26 22:17:14 crc kubenswrapper[4910]: I0226 22:17:14.352370 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-6xmvf" Feb 26 22:17:14 crc kubenswrapper[4910]: I0226 22:17:14.352381 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7a9e41eeddc6ead906750b1d93dffab8eec0636b3ff6fef26234ad2833f95e5" Feb 26 22:17:14 crc kubenswrapper[4910]: I0226 22:17:14.956132 4910 scope.go:117] "RemoveContainer" containerID="4989b94f9e0425fe4eb8d729805fcf4e90b5507203da60a5fda5c897ec293055" Feb 26 22:17:14 crc kubenswrapper[4910]: E0226 22:17:14.986442 4910 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 26 22:17:14 crc kubenswrapper[4910]: E0226 22:17:14.986863 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ct4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-6xcs7_openstack(d861622f-ed9a-4709-824c-bb291c4639a5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 22:17:14 crc kubenswrapper[4910]: E0226 22:17:14.988223 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-6xcs7" podUID="d861622f-ed9a-4709-824c-bb291c4639a5" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.147263 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-lmc2p"] Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.165219 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-799bb5d856-9h5p7"] Feb 26 22:17:15 crc kubenswrapper[4910]: E0226 22:17:15.165711 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0474fe2b-3094-4fd5-8f3a-1e9124acb82a" containerName="neutron-db-sync" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.165729 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="0474fe2b-3094-4fd5-8f3a-1e9124acb82a" containerName="neutron-db-sync" Feb 26 22:17:15 crc kubenswrapper[4910]: E0226 22:17:15.165744 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86997c3a-d290-479d-8a29-aeb4fd60568d" containerName="init" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.165751 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="86997c3a-d290-479d-8a29-aeb4fd60568d" containerName="init" Feb 26 22:17:15 crc kubenswrapper[4910]: E0226 22:17:15.165780 4910 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="86997c3a-d290-479d-8a29-aeb4fd60568d" containerName="dnsmasq-dns" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.165787 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="86997c3a-d290-479d-8a29-aeb4fd60568d" containerName="dnsmasq-dns" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.166024 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="86997c3a-d290-479d-8a29-aeb4fd60568d" containerName="dnsmasq-dns" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.166039 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="0474fe2b-3094-4fd5-8f3a-1e9124acb82a" containerName="neutron-db-sync" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.167329 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-799bb5d856-9h5p7" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.170971 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.171019 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.171115 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.171225 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-7x9kp" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.197786 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-799bb5d856-9h5p7"] Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.213324 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-pjt4v"] Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.214986 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.223214 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-pjt4v"] Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.332299 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/88d3a1d1-ec96-476e-80bc-ad3784a06411-httpd-config\") pod \"neutron-799bb5d856-9h5p7\" (UID: \"88d3a1d1-ec96-476e-80bc-ad3784a06411\") " pod="openstack/neutron-799bb5d856-9h5p7" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.332355 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-pjt4v\" (UID: \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\") " pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.332384 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhbd9\" (UniqueName: \"kubernetes.io/projected/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-kube-api-access-qhbd9\") pod \"dnsmasq-dns-6b7b667979-pjt4v\" (UID: \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\") " pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.332444 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm7lh\" (UniqueName: \"kubernetes.io/projected/88d3a1d1-ec96-476e-80bc-ad3784a06411-kube-api-access-nm7lh\") pod \"neutron-799bb5d856-9h5p7\" (UID: \"88d3a1d1-ec96-476e-80bc-ad3784a06411\") " pod="openstack/neutron-799bb5d856-9h5p7" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.332469 4910 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/88d3a1d1-ec96-476e-80bc-ad3784a06411-config\") pod \"neutron-799bb5d856-9h5p7\" (UID: \"88d3a1d1-ec96-476e-80bc-ad3784a06411\") " pod="openstack/neutron-799bb5d856-9h5p7" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.332491 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-pjt4v\" (UID: \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\") " pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.332527 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88d3a1d1-ec96-476e-80bc-ad3784a06411-combined-ca-bundle\") pod \"neutron-799bb5d856-9h5p7\" (UID: \"88d3a1d1-ec96-476e-80bc-ad3784a06411\") " pod="openstack/neutron-799bb5d856-9h5p7" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.332548 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-pjt4v\" (UID: \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\") " pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.332581 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-dns-svc\") pod \"dnsmasq-dns-6b7b667979-pjt4v\" (UID: \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\") " pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.332602 4910 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/88d3a1d1-ec96-476e-80bc-ad3784a06411-ovndb-tls-certs\") pod \"neutron-799bb5d856-9h5p7\" (UID: \"88d3a1d1-ec96-476e-80bc-ad3784a06411\") " pod="openstack/neutron-799bb5d856-9h5p7" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.332620 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-config\") pod \"dnsmasq-dns-6b7b667979-pjt4v\" (UID: \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\") " pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" Feb 26 22:17:15 crc kubenswrapper[4910]: E0226 22:17:15.387022 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-6xcs7" podUID="d861622f-ed9a-4709-824c-bb291c4639a5" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.433872 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-pjt4v\" (UID: \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\") " pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.433946 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88d3a1d1-ec96-476e-80bc-ad3784a06411-combined-ca-bundle\") pod \"neutron-799bb5d856-9h5p7\" (UID: \"88d3a1d1-ec96-476e-80bc-ad3784a06411\") " pod="openstack/neutron-799bb5d856-9h5p7" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.433973 4910 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-pjt4v\" (UID: \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\") " pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.434008 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-dns-svc\") pod \"dnsmasq-dns-6b7b667979-pjt4v\" (UID: \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\") " pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.434031 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/88d3a1d1-ec96-476e-80bc-ad3784a06411-ovndb-tls-certs\") pod \"neutron-799bb5d856-9h5p7\" (UID: \"88d3a1d1-ec96-476e-80bc-ad3784a06411\") " pod="openstack/neutron-799bb5d856-9h5p7" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.434048 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-config\") pod \"dnsmasq-dns-6b7b667979-pjt4v\" (UID: \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\") " pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.434093 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/88d3a1d1-ec96-476e-80bc-ad3784a06411-httpd-config\") pod \"neutron-799bb5d856-9h5p7\" (UID: \"88d3a1d1-ec96-476e-80bc-ad3784a06411\") " pod="openstack/neutron-799bb5d856-9h5p7" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.434119 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-pjt4v\" (UID: \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\") " pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.434136 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhbd9\" (UniqueName: \"kubernetes.io/projected/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-kube-api-access-qhbd9\") pod \"dnsmasq-dns-6b7b667979-pjt4v\" (UID: \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\") " pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.434200 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm7lh\" (UniqueName: \"kubernetes.io/projected/88d3a1d1-ec96-476e-80bc-ad3784a06411-kube-api-access-nm7lh\") pod \"neutron-799bb5d856-9h5p7\" (UID: \"88d3a1d1-ec96-476e-80bc-ad3784a06411\") " pod="openstack/neutron-799bb5d856-9h5p7" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.434224 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/88d3a1d1-ec96-476e-80bc-ad3784a06411-config\") pod \"neutron-799bb5d856-9h5p7\" (UID: \"88d3a1d1-ec96-476e-80bc-ad3784a06411\") " pod="openstack/neutron-799bb5d856-9h5p7" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.434844 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-pjt4v\" (UID: \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\") " pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.434887 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-pjt4v\" (UID: \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\") " pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.435972 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-dns-svc\") pod \"dnsmasq-dns-6b7b667979-pjt4v\" (UID: \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\") " pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.436585 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-config\") pod \"dnsmasq-dns-6b7b667979-pjt4v\" (UID: \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\") " pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.437688 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-pjt4v\" (UID: \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\") " pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.439742 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88d3a1d1-ec96-476e-80bc-ad3784a06411-combined-ca-bundle\") pod \"neutron-799bb5d856-9h5p7\" (UID: \"88d3a1d1-ec96-476e-80bc-ad3784a06411\") " pod="openstack/neutron-799bb5d856-9h5p7" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.439985 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/88d3a1d1-ec96-476e-80bc-ad3784a06411-config\") pod \"neutron-799bb5d856-9h5p7\" (UID: 
\"88d3a1d1-ec96-476e-80bc-ad3784a06411\") " pod="openstack/neutron-799bb5d856-9h5p7" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.440100 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/88d3a1d1-ec96-476e-80bc-ad3784a06411-ovndb-tls-certs\") pod \"neutron-799bb5d856-9h5p7\" (UID: \"88d3a1d1-ec96-476e-80bc-ad3784a06411\") " pod="openstack/neutron-799bb5d856-9h5p7" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.440184 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/88d3a1d1-ec96-476e-80bc-ad3784a06411-httpd-config\") pod \"neutron-799bb5d856-9h5p7\" (UID: \"88d3a1d1-ec96-476e-80bc-ad3784a06411\") " pod="openstack/neutron-799bb5d856-9h5p7" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.459820 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm7lh\" (UniqueName: \"kubernetes.io/projected/88d3a1d1-ec96-476e-80bc-ad3784a06411-kube-api-access-nm7lh\") pod \"neutron-799bb5d856-9h5p7\" (UID: \"88d3a1d1-ec96-476e-80bc-ad3784a06411\") " pod="openstack/neutron-799bb5d856-9h5p7" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.462882 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhbd9\" (UniqueName: \"kubernetes.io/projected/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-kube-api-access-qhbd9\") pod \"dnsmasq-dns-6b7b667979-pjt4v\" (UID: \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\") " pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.496942 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-799bb5d856-9h5p7" Feb 26 22:17:15 crc kubenswrapper[4910]: I0226 22:17:15.543410 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" Feb 26 22:17:17 crc kubenswrapper[4910]: I0226 22:17:17.180050 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-56765bc48f-nmqd6"] Feb 26 22:17:17 crc kubenswrapper[4910]: I0226 22:17:17.182211 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56765bc48f-nmqd6" Feb 26 22:17:17 crc kubenswrapper[4910]: I0226 22:17:17.184324 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 26 22:17:17 crc kubenswrapper[4910]: I0226 22:17:17.185360 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 26 22:17:17 crc kubenswrapper[4910]: I0226 22:17:17.208325 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-56765bc48f-nmqd6"] Feb 26 22:17:17 crc kubenswrapper[4910]: I0226 22:17:17.269354 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-internal-tls-certs\") pod \"neutron-56765bc48f-nmqd6\" (UID: \"7088af71-8215-43a1-b8e9-a23d8ff28d96\") " pod="openstack/neutron-56765bc48f-nmqd6" Feb 26 22:17:17 crc kubenswrapper[4910]: I0226 22:17:17.269425 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-httpd-config\") pod \"neutron-56765bc48f-nmqd6\" (UID: \"7088af71-8215-43a1-b8e9-a23d8ff28d96\") " pod="openstack/neutron-56765bc48f-nmqd6" Feb 26 22:17:17 crc kubenswrapper[4910]: I0226 22:17:17.269616 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zfrp\" (UniqueName: \"kubernetes.io/projected/7088af71-8215-43a1-b8e9-a23d8ff28d96-kube-api-access-8zfrp\") pod 
\"neutron-56765bc48f-nmqd6\" (UID: \"7088af71-8215-43a1-b8e9-a23d8ff28d96\") " pod="openstack/neutron-56765bc48f-nmqd6" Feb 26 22:17:17 crc kubenswrapper[4910]: I0226 22:17:17.269769 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-public-tls-certs\") pod \"neutron-56765bc48f-nmqd6\" (UID: \"7088af71-8215-43a1-b8e9-a23d8ff28d96\") " pod="openstack/neutron-56765bc48f-nmqd6" Feb 26 22:17:17 crc kubenswrapper[4910]: I0226 22:17:17.269935 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-combined-ca-bundle\") pod \"neutron-56765bc48f-nmqd6\" (UID: \"7088af71-8215-43a1-b8e9-a23d8ff28d96\") " pod="openstack/neutron-56765bc48f-nmqd6" Feb 26 22:17:17 crc kubenswrapper[4910]: I0226 22:17:17.269973 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-ovndb-tls-certs\") pod \"neutron-56765bc48f-nmqd6\" (UID: \"7088af71-8215-43a1-b8e9-a23d8ff28d96\") " pod="openstack/neutron-56765bc48f-nmqd6" Feb 26 22:17:17 crc kubenswrapper[4910]: I0226 22:17:17.270354 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-config\") pod \"neutron-56765bc48f-nmqd6\" (UID: \"7088af71-8215-43a1-b8e9-a23d8ff28d96\") " pod="openstack/neutron-56765bc48f-nmqd6" Feb 26 22:17:17 crc kubenswrapper[4910]: I0226 22:17:17.372673 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-config\") pod \"neutron-56765bc48f-nmqd6\" (UID: 
\"7088af71-8215-43a1-b8e9-a23d8ff28d96\") " pod="openstack/neutron-56765bc48f-nmqd6" Feb 26 22:17:17 crc kubenswrapper[4910]: I0226 22:17:17.373434 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-internal-tls-certs\") pod \"neutron-56765bc48f-nmqd6\" (UID: \"7088af71-8215-43a1-b8e9-a23d8ff28d96\") " pod="openstack/neutron-56765bc48f-nmqd6" Feb 26 22:17:17 crc kubenswrapper[4910]: I0226 22:17:17.373458 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-httpd-config\") pod \"neutron-56765bc48f-nmqd6\" (UID: \"7088af71-8215-43a1-b8e9-a23d8ff28d96\") " pod="openstack/neutron-56765bc48f-nmqd6" Feb 26 22:17:17 crc kubenswrapper[4910]: I0226 22:17:17.373502 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zfrp\" (UniqueName: \"kubernetes.io/projected/7088af71-8215-43a1-b8e9-a23d8ff28d96-kube-api-access-8zfrp\") pod \"neutron-56765bc48f-nmqd6\" (UID: \"7088af71-8215-43a1-b8e9-a23d8ff28d96\") " pod="openstack/neutron-56765bc48f-nmqd6" Feb 26 22:17:17 crc kubenswrapper[4910]: I0226 22:17:17.373539 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-public-tls-certs\") pod \"neutron-56765bc48f-nmqd6\" (UID: \"7088af71-8215-43a1-b8e9-a23d8ff28d96\") " pod="openstack/neutron-56765bc48f-nmqd6" Feb 26 22:17:17 crc kubenswrapper[4910]: I0226 22:17:17.373587 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-combined-ca-bundle\") pod \"neutron-56765bc48f-nmqd6\" (UID: \"7088af71-8215-43a1-b8e9-a23d8ff28d96\") " 
pod="openstack/neutron-56765bc48f-nmqd6" Feb 26 22:17:17 crc kubenswrapper[4910]: I0226 22:17:17.373607 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-ovndb-tls-certs\") pod \"neutron-56765bc48f-nmqd6\" (UID: \"7088af71-8215-43a1-b8e9-a23d8ff28d96\") " pod="openstack/neutron-56765bc48f-nmqd6" Feb 26 22:17:17 crc kubenswrapper[4910]: I0226 22:17:17.379551 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-httpd-config\") pod \"neutron-56765bc48f-nmqd6\" (UID: \"7088af71-8215-43a1-b8e9-a23d8ff28d96\") " pod="openstack/neutron-56765bc48f-nmqd6" Feb 26 22:17:17 crc kubenswrapper[4910]: I0226 22:17:17.379796 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-public-tls-certs\") pod \"neutron-56765bc48f-nmqd6\" (UID: \"7088af71-8215-43a1-b8e9-a23d8ff28d96\") " pod="openstack/neutron-56765bc48f-nmqd6" Feb 26 22:17:17 crc kubenswrapper[4910]: I0226 22:17:17.380564 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-config\") pod \"neutron-56765bc48f-nmqd6\" (UID: \"7088af71-8215-43a1-b8e9-a23d8ff28d96\") " pod="openstack/neutron-56765bc48f-nmqd6" Feb 26 22:17:17 crc kubenswrapper[4910]: I0226 22:17:17.381644 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-internal-tls-certs\") pod \"neutron-56765bc48f-nmqd6\" (UID: \"7088af71-8215-43a1-b8e9-a23d8ff28d96\") " pod="openstack/neutron-56765bc48f-nmqd6" Feb 26 22:17:17 crc kubenswrapper[4910]: I0226 22:17:17.382953 4910 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-ovndb-tls-certs\") pod \"neutron-56765bc48f-nmqd6\" (UID: \"7088af71-8215-43a1-b8e9-a23d8ff28d96\") " pod="openstack/neutron-56765bc48f-nmqd6" Feb 26 22:17:17 crc kubenswrapper[4910]: I0226 22:17:17.389539 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-combined-ca-bundle\") pod \"neutron-56765bc48f-nmqd6\" (UID: \"7088af71-8215-43a1-b8e9-a23d8ff28d96\") " pod="openstack/neutron-56765bc48f-nmqd6" Feb 26 22:17:17 crc kubenswrapper[4910]: I0226 22:17:17.392637 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zfrp\" (UniqueName: \"kubernetes.io/projected/7088af71-8215-43a1-b8e9-a23d8ff28d96-kube-api-access-8zfrp\") pod \"neutron-56765bc48f-nmqd6\" (UID: \"7088af71-8215-43a1-b8e9-a23d8ff28d96\") " pod="openstack/neutron-56765bc48f-nmqd6" Feb 26 22:17:17 crc kubenswrapper[4910]: I0226 22:17:17.508658 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-56765bc48f-nmqd6" Feb 26 22:17:18 crc kubenswrapper[4910]: I0226 22:17:18.417064 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ed81c10e-0c5e-4483-b210-2d6df693f4f8","Type":"ContainerStarted","Data":"9d5dac6e0bfa4065095432b20e81131e2d87bbf597ab1563391f5a03b59dabec"} Feb 26 22:17:18 crc kubenswrapper[4910]: I0226 22:17:18.431501 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5ac37f44-e173-4927-b8ea-44741aa983c0","Type":"ContainerStarted","Data":"9bb3e98b863d5170ec89e4f9320c3b43f886de5623dcbe4e8c96e128520ee15d"} Feb 26 22:17:18 crc kubenswrapper[4910]: I0226 22:17:18.582846 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cwz4m"] Feb 26 22:17:18 crc kubenswrapper[4910]: I0226 22:17:18.671391 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 22:17:21 crc kubenswrapper[4910]: I0226 22:17:21.491735 4910 generic.go:334] "Generic (PLEG): container finished" podID="bc719d44-8187-45f8-80e0-b4e3daa9b1eb" containerID="029c527d6b360940d71863eebe0a0053823ef5a084d4e5f74698eb488f9e349c" exitCode=0 Feb 26 22:17:21 crc kubenswrapper[4910]: I0226 22:17:21.492524 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-lmc2p" event={"ID":"bc719d44-8187-45f8-80e0-b4e3daa9b1eb","Type":"ContainerDied","Data":"029c527d6b360940d71863eebe0a0053823ef5a084d4e5f74698eb488f9e349c"} Feb 26 22:17:21 crc kubenswrapper[4910]: I0226 22:17:21.495826 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"71912b57-eb09-4973-aca5-4aec7d7d8fb5","Type":"ContainerStarted","Data":"745a48b3325db1fdc1c832b7dfbbef7388a09167a9b593c3a6448d05407d6e7d"} Feb 26 22:17:21 crc kubenswrapper[4910]: I0226 22:17:21.499059 4910 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/keystone-bootstrap-cwz4m" event={"ID":"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f","Type":"ContainerStarted","Data":"c720ce29f22c9a0865b71409a79ebcfc8c676b2d84dbb8704fb78f2d16891bf4"} Feb 26 22:17:25 crc kubenswrapper[4910]: I0226 22:17:25.727298 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:17:25 crc kubenswrapper[4910]: I0226 22:17:25.728094 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:17:29 crc kubenswrapper[4910]: I0226 22:17:29.314641 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-lmc2p" Feb 26 22:17:29 crc kubenswrapper[4910]: E0226 22:17:29.348073 4910 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 26 22:17:29 crc kubenswrapper[4910]: E0226 22:17:29.348117 4910 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 26 22:17:29 crc kubenswrapper[4910]: E0226 22:17:29.348250 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPa
th:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tqn9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-68pwg_openstack(865f4842-373e-4bc9-98cd-4ceabb03b9f9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 22:17:29 crc kubenswrapper[4910]: E0226 22:17:29.349581 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cloudkitty-db-sync-68pwg" podUID="865f4842-373e-4bc9-98cd-4ceabb03b9f9" Feb 26 22:17:29 crc kubenswrapper[4910]: I0226 22:17:29.453707 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-ovsdbserver-nb\") pod \"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\" (UID: 
\"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\") " Feb 26 22:17:29 crc kubenswrapper[4910]: I0226 22:17:29.453866 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-config\") pod \"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\" (UID: \"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\") " Feb 26 22:17:29 crc kubenswrapper[4910]: I0226 22:17:29.453980 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps6g8\" (UniqueName: \"kubernetes.io/projected/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-kube-api-access-ps6g8\") pod \"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\" (UID: \"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\") " Feb 26 22:17:29 crc kubenswrapper[4910]: I0226 22:17:29.454025 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-dns-swift-storage-0\") pod \"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\" (UID: \"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\") " Feb 26 22:17:29 crc kubenswrapper[4910]: I0226 22:17:29.454117 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-ovsdbserver-sb\") pod \"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\" (UID: \"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\") " Feb 26 22:17:29 crc kubenswrapper[4910]: I0226 22:17:29.454232 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-dns-svc\") pod \"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\" (UID: \"bc719d44-8187-45f8-80e0-b4e3daa9b1eb\") " Feb 26 22:17:29 crc kubenswrapper[4910]: I0226 22:17:29.480529 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-kube-api-access-ps6g8" (OuterVolumeSpecName: "kube-api-access-ps6g8") pod "bc719d44-8187-45f8-80e0-b4e3daa9b1eb" (UID: "bc719d44-8187-45f8-80e0-b4e3daa9b1eb"). InnerVolumeSpecName "kube-api-access-ps6g8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:17:29 crc kubenswrapper[4910]: I0226 22:17:29.560360 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps6g8\" (UniqueName: \"kubernetes.io/projected/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-kube-api-access-ps6g8\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:29 crc kubenswrapper[4910]: I0226 22:17:29.622543 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-lmc2p" Feb 26 22:17:29 crc kubenswrapper[4910]: I0226 22:17:29.622537 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-lmc2p" event={"ID":"bc719d44-8187-45f8-80e0-b4e3daa9b1eb","Type":"ContainerDied","Data":"d9b1274da351bd5265413adbb4bba5d6caeb6155e6d6bf215bd892d274ebc320"} Feb 26 22:17:29 crc kubenswrapper[4910]: I0226 22:17:29.622674 4910 scope.go:117] "RemoveContainer" containerID="029c527d6b360940d71863eebe0a0053823ef5a084d4e5f74698eb488f9e349c" Feb 26 22:17:29 crc kubenswrapper[4910]: E0226 22:17:29.623920 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-68pwg" podUID="865f4842-373e-4bc9-98cd-4ceabb03b9f9" Feb 26 22:17:29 crc kubenswrapper[4910]: I0226 22:17:29.682978 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-799bb5d856-9h5p7"] Feb 26 22:17:29 crc kubenswrapper[4910]: I0226 22:17:29.731628 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bc719d44-8187-45f8-80e0-b4e3daa9b1eb" (UID: "bc719d44-8187-45f8-80e0-b4e3daa9b1eb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:17:29 crc kubenswrapper[4910]: I0226 22:17:29.763609 4910 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:29 crc kubenswrapper[4910]: I0226 22:17:29.768309 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-pjt4v"] Feb 26 22:17:29 crc kubenswrapper[4910]: I0226 22:17:29.777960 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-56765bc48f-nmqd6"] Feb 26 22:17:29 crc kubenswrapper[4910]: W0226 22:17:29.794308 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c7092c9_5c64_48ce_ac3b_a9dd5c4bd3ea.slice/crio-de8ef617c8567ae0229c6b0376c86c06cc0be1366a2094f637eb89b121c2721e WatchSource:0}: Error finding container de8ef617c8567ae0229c6b0376c86c06cc0be1366a2094f637eb89b121c2721e: Status 404 returned error can't find the container with id de8ef617c8567ae0229c6b0376c86c06cc0be1366a2094f637eb89b121c2721e Feb 26 22:17:29 crc kubenswrapper[4910]: W0226 22:17:29.795566 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7088af71_8215_43a1_b8e9_a23d8ff28d96.slice/crio-7b199dc23c816a733c2dcad64faa696f87daf4453161b57a6cace6bce6d5f1d0 WatchSource:0}: Error finding container 7b199dc23c816a733c2dcad64faa696f87daf4453161b57a6cace6bce6d5f1d0: Status 404 returned error can't find the container with id 7b199dc23c816a733c2dcad64faa696f87daf4453161b57a6cace6bce6d5f1d0 Feb 26 22:17:29 crc kubenswrapper[4910]: I0226 
22:17:29.812773 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bc719d44-8187-45f8-80e0-b4e3daa9b1eb" (UID: "bc719d44-8187-45f8-80e0-b4e3daa9b1eb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:17:29 crc kubenswrapper[4910]: I0226 22:17:29.822548 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bc719d44-8187-45f8-80e0-b4e3daa9b1eb" (UID: "bc719d44-8187-45f8-80e0-b4e3daa9b1eb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:17:29 crc kubenswrapper[4910]: I0226 22:17:29.824681 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bc719d44-8187-45f8-80e0-b4e3daa9b1eb" (UID: "bc719d44-8187-45f8-80e0-b4e3daa9b1eb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:17:29 crc kubenswrapper[4910]: I0226 22:17:29.826901 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-config" (OuterVolumeSpecName: "config") pod "bc719d44-8187-45f8-80e0-b4e3daa9b1eb" (UID: "bc719d44-8187-45f8-80e0-b4e3daa9b1eb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:17:29 crc kubenswrapper[4910]: I0226 22:17:29.865855 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:29 crc kubenswrapper[4910]: I0226 22:17:29.865898 4910 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:29 crc kubenswrapper[4910]: I0226 22:17:29.865913 4910 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:29 crc kubenswrapper[4910]: I0226 22:17:29.865928 4910 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc719d44-8187-45f8-80e0-b4e3daa9b1eb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:30 crc kubenswrapper[4910]: I0226 22:17:30.027246 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-lmc2p"] Feb 26 22:17:30 crc kubenswrapper[4910]: I0226 22:17:30.037946 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-lmc2p"] Feb 26 22:17:30 crc kubenswrapper[4910]: I0226 22:17:30.649544 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" event={"ID":"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea","Type":"ContainerStarted","Data":"de8ef617c8567ae0229c6b0376c86c06cc0be1366a2094f637eb89b121c2721e"} Feb 26 22:17:30 crc kubenswrapper[4910]: I0226 22:17:30.654915 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"ed81c10e-0c5e-4483-b210-2d6df693f4f8","Type":"ContainerStarted","Data":"0dc841b89572fa89a60120988416a0a35c49ae24ae720ac567cd7ce60927bb22"} Feb 26 22:17:30 crc kubenswrapper[4910]: I0226 22:17:30.660189 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"71912b57-eb09-4973-aca5-4aec7d7d8fb5","Type":"ContainerStarted","Data":"8b7c93133a0ed3b1075f20e68dd6691cc58f4da7fcd9feb7f772031caeb92746"} Feb 26 22:17:30 crc kubenswrapper[4910]: I0226 22:17:30.669484 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lxj26" event={"ID":"eeb12d5b-0ec7-48d5-b1ef-9e378c030b75","Type":"ContainerStarted","Data":"da0a17571397fe5aa598a3a2cbd6c2b4bf2e699e5748ea9ba6c6e2e48c434358"} Feb 26 22:17:30 crc kubenswrapper[4910]: I0226 22:17:30.676393 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-799bb5d856-9h5p7" event={"ID":"88d3a1d1-ec96-476e-80bc-ad3784a06411","Type":"ContainerStarted","Data":"5bea8fd8f5c64b550c53ab54de57b8f9670434af453503f202f06a0bd833920c"} Feb 26 22:17:30 crc kubenswrapper[4910]: I0226 22:17:30.676438 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-799bb5d856-9h5p7" event={"ID":"88d3a1d1-ec96-476e-80bc-ad3784a06411","Type":"ContainerStarted","Data":"2706d79aad95948e5be912ed574dedeedb568dec049d9babb7583a38a001858b"} Feb 26 22:17:30 crc kubenswrapper[4910]: I0226 22:17:30.683070 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f","Type":"ContainerStarted","Data":"03e2c6f282e58c8aadcfe18c580736f090b601212f9feb71ba11569170b03144"} Feb 26 22:17:30 crc kubenswrapper[4910]: I0226 22:17:30.701847 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2hp89" 
event={"ID":"58f067fa-7653-4dd7-93ee-bef006c01109","Type":"ContainerStarted","Data":"abfa55ec3c7063c27c5b19141a258688bc3368d72fae40a169336276f35f6e50"} Feb 26 22:17:30 crc kubenswrapper[4910]: I0226 22:17:30.708006 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-lxj26" podStartSLOduration=3.80412597 podStartE2EDuration="44.707985252s" podCreationTimestamp="2026-02-26 22:16:46 +0000 UTC" firstStartedPulling="2026-02-26 22:16:48.615081399 +0000 UTC m=+1293.694571940" lastFinishedPulling="2026-02-26 22:17:29.518940681 +0000 UTC m=+1334.598431222" observedRunningTime="2026-02-26 22:17:30.691937796 +0000 UTC m=+1335.771428347" watchObservedRunningTime="2026-02-26 22:17:30.707985252 +0000 UTC m=+1335.787475793" Feb 26 22:17:30 crc kubenswrapper[4910]: I0226 22:17:30.723027 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-2hp89" podStartSLOduration=15.299853219 podStartE2EDuration="44.723006401s" podCreationTimestamp="2026-02-26 22:16:46 +0000 UTC" firstStartedPulling="2026-02-26 22:16:48.655095844 +0000 UTC m=+1293.734586395" lastFinishedPulling="2026-02-26 22:17:18.078248996 +0000 UTC m=+1323.157739577" observedRunningTime="2026-02-26 22:17:30.720391288 +0000 UTC m=+1335.799881829" watchObservedRunningTime="2026-02-26 22:17:30.723006401 +0000 UTC m=+1335.802496942" Feb 26 22:17:30 crc kubenswrapper[4910]: I0226 22:17:30.729037 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56765bc48f-nmqd6" event={"ID":"7088af71-8215-43a1-b8e9-a23d8ff28d96","Type":"ContainerStarted","Data":"7b199dc23c816a733c2dcad64faa696f87daf4453161b57a6cace6bce6d5f1d0"} Feb 26 22:17:30 crc kubenswrapper[4910]: I0226 22:17:30.743574 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cwz4m" 
event={"ID":"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f","Type":"ContainerStarted","Data":"84226330c05b1bc831c7c83ef434984cca7b626b02a8b79c806c9c5dc586299c"} Feb 26 22:17:30 crc kubenswrapper[4910]: I0226 22:17:30.772150 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-cwz4m" podStartSLOduration=32.772130559 podStartE2EDuration="32.772130559s" podCreationTimestamp="2026-02-26 22:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:17:30.758012476 +0000 UTC m=+1335.837503027" watchObservedRunningTime="2026-02-26 22:17:30.772130559 +0000 UTC m=+1335.851621100" Feb 26 22:17:31 crc kubenswrapper[4910]: I0226 22:17:31.754814 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-799bb5d856-9h5p7" event={"ID":"88d3a1d1-ec96-476e-80bc-ad3784a06411","Type":"ContainerStarted","Data":"5cecc1f4311b11b9736f89a01c748a8125b934e1487389492770c33be995c697"} Feb 26 22:17:31 crc kubenswrapper[4910]: I0226 22:17:31.755336 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-799bb5d856-9h5p7" Feb 26 22:17:31 crc kubenswrapper[4910]: I0226 22:17:31.758203 4910 generic.go:334] "Generic (PLEG): container finished" podID="1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea" containerID="c415c6938b9542143a51f84d77ed95f82c6d8701f96f5d7df538215ac095f3f6" exitCode=0 Feb 26 22:17:31 crc kubenswrapper[4910]: I0226 22:17:31.758259 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" event={"ID":"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea","Type":"ContainerDied","Data":"c415c6938b9542143a51f84d77ed95f82c6d8701f96f5d7df538215ac095f3f6"} Feb 26 22:17:31 crc kubenswrapper[4910]: I0226 22:17:31.761394 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6xcs7" 
event={"ID":"d861622f-ed9a-4709-824c-bb291c4639a5","Type":"ContainerStarted","Data":"e5084e2799e64984714eebda9fdfb632754a254d21eca853ae64bf19c293fa06"} Feb 26 22:17:31 crc kubenswrapper[4910]: I0226 22:17:31.772466 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ed81c10e-0c5e-4483-b210-2d6df693f4f8","Type":"ContainerStarted","Data":"9d8d97c57656bc6ed1e8a1eb6a2c4bb2a8e2e0cd360f8f3f6151c5ab1feead9d"} Feb 26 22:17:31 crc kubenswrapper[4910]: I0226 22:17:31.772587 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ed81c10e-0c5e-4483-b210-2d6df693f4f8" containerName="glance-log" containerID="cri-o://0dc841b89572fa89a60120988416a0a35c49ae24ae720ac567cd7ce60927bb22" gracePeriod=30 Feb 26 22:17:31 crc kubenswrapper[4910]: I0226 22:17:31.772734 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ed81c10e-0c5e-4483-b210-2d6df693f4f8" containerName="glance-httpd" containerID="cri-o://9d8d97c57656bc6ed1e8a1eb6a2c4bb2a8e2e0cd360f8f3f6151c5ab1feead9d" gracePeriod=30 Feb 26 22:17:31 crc kubenswrapper[4910]: I0226 22:17:31.783820 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-799bb5d856-9h5p7" podStartSLOduration=16.783794422 podStartE2EDuration="16.783794422s" podCreationTimestamp="2026-02-26 22:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:17:31.781835166 +0000 UTC m=+1336.861325707" watchObservedRunningTime="2026-02-26 22:17:31.783794422 +0000 UTC m=+1336.863284983" Feb 26 22:17:31 crc kubenswrapper[4910]: I0226 22:17:31.797453 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56765bc48f-nmqd6" 
event={"ID":"7088af71-8215-43a1-b8e9-a23d8ff28d96","Type":"ContainerStarted","Data":"78596e1da9f542447f4d12ee7de6a7fe017b62c4f7fd1f6668608f62c21bf454"} Feb 26 22:17:31 crc kubenswrapper[4910]: I0226 22:17:31.797499 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56765bc48f-nmqd6" event={"ID":"7088af71-8215-43a1-b8e9-a23d8ff28d96","Type":"ContainerStarted","Data":"cf8fa30fbe6b583625ac35bf385fcef0597b8eb3ae6cbfaed4287804007a4019"} Feb 26 22:17:31 crc kubenswrapper[4910]: I0226 22:17:31.797739 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-56765bc48f-nmqd6" Feb 26 22:17:31 crc kubenswrapper[4910]: I0226 22:17:31.813292 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"71912b57-eb09-4973-aca5-4aec7d7d8fb5","Type":"ContainerStarted","Data":"ddb2d802658d8b46a6bbe98d75235adcd5c06c12a84e7bc528871e70d23ed589"} Feb 26 22:17:31 crc kubenswrapper[4910]: I0226 22:17:31.813645 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="71912b57-eb09-4973-aca5-4aec7d7d8fb5" containerName="glance-log" containerID="cri-o://8b7c93133a0ed3b1075f20e68dd6691cc58f4da7fcd9feb7f772031caeb92746" gracePeriod=30 Feb 26 22:17:31 crc kubenswrapper[4910]: I0226 22:17:31.813675 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="71912b57-eb09-4973-aca5-4aec7d7d8fb5" containerName="glance-httpd" containerID="cri-o://ddb2d802658d8b46a6bbe98d75235adcd5c06c12a84e7bc528871e70d23ed589" gracePeriod=30 Feb 26 22:17:31 crc kubenswrapper[4910]: I0226 22:17:31.821752 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-6xcs7" podStartSLOduration=4.943035207 podStartE2EDuration="45.821728348s" podCreationTimestamp="2026-02-26 22:16:46 +0000 UTC" firstStartedPulling="2026-02-26 
22:16:48.65569801 +0000 UTC m=+1293.735188571" lastFinishedPulling="2026-02-26 22:17:29.534391161 +0000 UTC m=+1334.613881712" observedRunningTime="2026-02-26 22:17:31.807603505 +0000 UTC m=+1336.887094066" watchObservedRunningTime="2026-02-26 22:17:31.821728348 +0000 UTC m=+1336.901218899" Feb 26 22:17:31 crc kubenswrapper[4910]: I0226 22:17:31.847494 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=37.847471594 podStartE2EDuration="37.847471594s" podCreationTimestamp="2026-02-26 22:16:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:17:31.837658921 +0000 UTC m=+1336.917149472" watchObservedRunningTime="2026-02-26 22:17:31.847471594 +0000 UTC m=+1336.926962135" Feb 26 22:17:31 crc kubenswrapper[4910]: I0226 22:17:31.943790 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=37.943767506 podStartE2EDuration="37.943767506s" podCreationTimestamp="2026-02-26 22:16:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:17:31.940569067 +0000 UTC m=+1337.020059608" watchObservedRunningTime="2026-02-26 22:17:31.943767506 +0000 UTC m=+1337.023258047" Feb 26 22:17:31 crc kubenswrapper[4910]: I0226 22:17:31.949905 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc719d44-8187-45f8-80e0-b4e3daa9b1eb" path="/var/lib/kubelet/pods/bc719d44-8187-45f8-80e0-b4e3daa9b1eb/volumes" Feb 26 22:17:31 crc kubenswrapper[4910]: I0226 22:17:31.970629 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-56765bc48f-nmqd6" podStartSLOduration=14.970612374 podStartE2EDuration="14.970612374s" podCreationTimestamp="2026-02-26 22:17:17 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:17:31.969564464 +0000 UTC m=+1337.049055005" watchObservedRunningTime="2026-02-26 22:17:31.970612374 +0000 UTC m=+1337.050102915" Feb 26 22:17:32 crc kubenswrapper[4910]: I0226 22:17:32.831397 4910 generic.go:334] "Generic (PLEG): container finished" podID="71912b57-eb09-4973-aca5-4aec7d7d8fb5" containerID="ddb2d802658d8b46a6bbe98d75235adcd5c06c12a84e7bc528871e70d23ed589" exitCode=0 Feb 26 22:17:32 crc kubenswrapper[4910]: I0226 22:17:32.831672 4910 generic.go:334] "Generic (PLEG): container finished" podID="71912b57-eb09-4973-aca5-4aec7d7d8fb5" containerID="8b7c93133a0ed3b1075f20e68dd6691cc58f4da7fcd9feb7f772031caeb92746" exitCode=143 Feb 26 22:17:32 crc kubenswrapper[4910]: I0226 22:17:32.831626 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"71912b57-eb09-4973-aca5-4aec7d7d8fb5","Type":"ContainerDied","Data":"ddb2d802658d8b46a6bbe98d75235adcd5c06c12a84e7bc528871e70d23ed589"} Feb 26 22:17:32 crc kubenswrapper[4910]: I0226 22:17:32.831750 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"71912b57-eb09-4973-aca5-4aec7d7d8fb5","Type":"ContainerDied","Data":"8b7c93133a0ed3b1075f20e68dd6691cc58f4da7fcd9feb7f772031caeb92746"} Feb 26 22:17:32 crc kubenswrapper[4910]: I0226 22:17:32.835179 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" event={"ID":"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea","Type":"ContainerStarted","Data":"2c2826db128ffc0cf57feffdc85c783433554cbb7bb78d5c57ca1e4af3e7d85c"} Feb 26 22:17:32 crc kubenswrapper[4910]: I0226 22:17:32.835369 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" Feb 26 22:17:32 crc kubenswrapper[4910]: I0226 22:17:32.841659 4910 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f","Type":"ContainerStarted","Data":"ee50a00d864892d417a029125914b9b85eb57efea05f62fda9f836ad89dd1c9a"} Feb 26 22:17:32 crc kubenswrapper[4910]: I0226 22:17:32.848380 4910 generic.go:334] "Generic (PLEG): container finished" podID="ed81c10e-0c5e-4483-b210-2d6df693f4f8" containerID="9d8d97c57656bc6ed1e8a1eb6a2c4bb2a8e2e0cd360f8f3f6151c5ab1feead9d" exitCode=0 Feb 26 22:17:32 crc kubenswrapper[4910]: I0226 22:17:32.848421 4910 generic.go:334] "Generic (PLEG): container finished" podID="ed81c10e-0c5e-4483-b210-2d6df693f4f8" containerID="0dc841b89572fa89a60120988416a0a35c49ae24ae720ac567cd7ce60927bb22" exitCode=143 Feb 26 22:17:32 crc kubenswrapper[4910]: I0226 22:17:32.848472 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ed81c10e-0c5e-4483-b210-2d6df693f4f8","Type":"ContainerDied","Data":"9d8d97c57656bc6ed1e8a1eb6a2c4bb2a8e2e0cd360f8f3f6151c5ab1feead9d"} Feb 26 22:17:32 crc kubenswrapper[4910]: I0226 22:17:32.848505 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ed81c10e-0c5e-4483-b210-2d6df693f4f8","Type":"ContainerDied","Data":"0dc841b89572fa89a60120988416a0a35c49ae24ae720ac567cd7ce60927bb22"} Feb 26 22:17:32 crc kubenswrapper[4910]: I0226 22:17:32.853578 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5ac37f44-e173-4927-b8ea-44741aa983c0","Type":"ContainerStarted","Data":"d5e87eff17f5b2645684a812f9e42a5b8e870630b2164547a57a3d4e12d19d07"} Feb 26 22:17:32 crc kubenswrapper[4910]: I0226 22:17:32.853625 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5ac37f44-e173-4927-b8ea-44741aa983c0","Type":"ContainerStarted","Data":"317f84a9bfa1fc1fd854bc3544b8d3344a9edc678cd12e48d0c9a24ead1613a5"} Feb 26 22:17:32 crc 
kubenswrapper[4910]: I0226 22:17:32.872407 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" podStartSLOduration=17.872155759 podStartE2EDuration="17.872155759s" podCreationTimestamp="2026-02-26 22:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:17:32.865505884 +0000 UTC m=+1337.944996445" watchObservedRunningTime="2026-02-26 22:17:32.872155759 +0000 UTC m=+1337.951646300" Feb 26 22:17:32 crc kubenswrapper[4910]: I0226 22:17:32.895146 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=47.895126199 podStartE2EDuration="47.895126199s" podCreationTimestamp="2026-02-26 22:16:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:17:32.892799535 +0000 UTC m=+1337.972290086" watchObservedRunningTime="2026-02-26 22:17:32.895126199 +0000 UTC m=+1337.974616740" Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.367786 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.440633 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed81c10e-0c5e-4483-b210-2d6df693f4f8-config-data\") pod \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\" (UID: \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\") " Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.440706 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed81c10e-0c5e-4483-b210-2d6df693f4f8-logs\") pod \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\" (UID: \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\") " Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.440742 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxm8p\" (UniqueName: \"kubernetes.io/projected/ed81c10e-0c5e-4483-b210-2d6df693f4f8-kube-api-access-kxm8p\") pod \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\" (UID: \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\") " Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.440785 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ed81c10e-0c5e-4483-b210-2d6df693f4f8-httpd-run\") pod \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\" (UID: \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\") " Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.440994 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e\") pod \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\" (UID: \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\") " Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.441103 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/ed81c10e-0c5e-4483-b210-2d6df693f4f8-combined-ca-bundle\") pod \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\" (UID: \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\") " Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.441271 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed81c10e-0c5e-4483-b210-2d6df693f4f8-scripts\") pod \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\" (UID: \"ed81c10e-0c5e-4483-b210-2d6df693f4f8\") " Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.442243 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed81c10e-0c5e-4483-b210-2d6df693f4f8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ed81c10e-0c5e-4483-b210-2d6df693f4f8" (UID: "ed81c10e-0c5e-4483-b210-2d6df693f4f8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.442743 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed81c10e-0c5e-4483-b210-2d6df693f4f8-logs" (OuterVolumeSpecName: "logs") pod "ed81c10e-0c5e-4483-b210-2d6df693f4f8" (UID: "ed81c10e-0c5e-4483-b210-2d6df693f4f8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.449370 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed81c10e-0c5e-4483-b210-2d6df693f4f8-kube-api-access-kxm8p" (OuterVolumeSpecName: "kube-api-access-kxm8p") pod "ed81c10e-0c5e-4483-b210-2d6df693f4f8" (UID: "ed81c10e-0c5e-4483-b210-2d6df693f4f8"). InnerVolumeSpecName "kube-api-access-kxm8p". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.457352 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed81c10e-0c5e-4483-b210-2d6df693f4f8-scripts" (OuterVolumeSpecName: "scripts") pod "ed81c10e-0c5e-4483-b210-2d6df693f4f8" (UID: "ed81c10e-0c5e-4483-b210-2d6df693f4f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.465787 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e" (OuterVolumeSpecName: "glance") pod "ed81c10e-0c5e-4483-b210-2d6df693f4f8" (UID: "ed81c10e-0c5e-4483-b210-2d6df693f4f8"). InnerVolumeSpecName "pvc-07357e7e-76cc-49df-b0f0-87819efba45e". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.479925 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed81c10e-0c5e-4483-b210-2d6df693f4f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed81c10e-0c5e-4483-b210-2d6df693f4f8" (UID: "ed81c10e-0c5e-4483-b210-2d6df693f4f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.502015 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed81c10e-0c5e-4483-b210-2d6df693f4f8-config-data" (OuterVolumeSpecName: "config-data") pod "ed81c10e-0c5e-4483-b210-2d6df693f4f8" (UID: "ed81c10e-0c5e-4483-b210-2d6df693f4f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.519331 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.543340 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/71912b57-eb09-4973-aca5-4aec7d7d8fb5-httpd-run\") pod \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\" (UID: \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\") "
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.543391 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71912b57-eb09-4973-aca5-4aec7d7d8fb5-scripts\") pod \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\" (UID: \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\") "
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.543476 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71912b57-eb09-4973-aca5-4aec7d7d8fb5-config-data\") pod \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\" (UID: \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\") "
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.543599 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\") pod \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\" (UID: \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\") "
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.543635 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5wn7\" (UniqueName: \"kubernetes.io/projected/71912b57-eb09-4973-aca5-4aec7d7d8fb5-kube-api-access-w5wn7\") pod \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\" (UID: \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\") "
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.543679 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71912b57-eb09-4973-aca5-4aec7d7d8fb5-logs\") pod \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\" (UID: \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\") "
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.543866 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71912b57-eb09-4973-aca5-4aec7d7d8fb5-combined-ca-bundle\") pod \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\" (UID: \"71912b57-eb09-4973-aca5-4aec7d7d8fb5\") "
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.544236 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71912b57-eb09-4973-aca5-4aec7d7d8fb5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "71912b57-eb09-4973-aca5-4aec7d7d8fb5" (UID: "71912b57-eb09-4973-aca5-4aec7d7d8fb5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.544698 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed81c10e-0c5e-4483-b210-2d6df693f4f8-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.544713 4910 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/71912b57-eb09-4973-aca5-4aec7d7d8fb5-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.544724 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed81c10e-0c5e-4483-b210-2d6df693f4f8-config-data\") on node \"crc\" DevicePath \"\""
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.544732 4910 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed81c10e-0c5e-4483-b210-2d6df693f4f8-logs\") on node \"crc\" DevicePath \"\""
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.544741 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxm8p\" (UniqueName: \"kubernetes.io/projected/ed81c10e-0c5e-4483-b210-2d6df693f4f8-kube-api-access-kxm8p\") on node \"crc\" DevicePath \"\""
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.544750 4910 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ed81c10e-0c5e-4483-b210-2d6df693f4f8-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.544774 4910 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-07357e7e-76cc-49df-b0f0-87819efba45e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e\") on node \"crc\" "
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.544784 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed81c10e-0c5e-4483-b210-2d6df693f4f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.551121 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71912b57-eb09-4973-aca5-4aec7d7d8fb5-scripts" (OuterVolumeSpecName: "scripts") pod "71912b57-eb09-4973-aca5-4aec7d7d8fb5" (UID: "71912b57-eb09-4973-aca5-4aec7d7d8fb5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.551787 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71912b57-eb09-4973-aca5-4aec7d7d8fb5-logs" (OuterVolumeSpecName: "logs") pod "71912b57-eb09-4973-aca5-4aec7d7d8fb5" (UID: "71912b57-eb09-4973-aca5-4aec7d7d8fb5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.561710 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71912b57-eb09-4973-aca5-4aec7d7d8fb5-kube-api-access-w5wn7" (OuterVolumeSpecName: "kube-api-access-w5wn7") pod "71912b57-eb09-4973-aca5-4aec7d7d8fb5" (UID: "71912b57-eb09-4973-aca5-4aec7d7d8fb5"). InnerVolumeSpecName "kube-api-access-w5wn7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.564868 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0" (OuterVolumeSpecName: "glance") pod "71912b57-eb09-4973-aca5-4aec7d7d8fb5" (UID: "71912b57-eb09-4973-aca5-4aec7d7d8fb5"). InnerVolumeSpecName "pvc-2ef62248-3f7a-4c99-851b-abb253e36db0". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.583008 4910 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.583244 4910 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-07357e7e-76cc-49df-b0f0-87819efba45e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e") on node "crc"
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.588238 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71912b57-eb09-4973-aca5-4aec7d7d8fb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71912b57-eb09-4973-aca5-4aec7d7d8fb5" (UID: "71912b57-eb09-4973-aca5-4aec7d7d8fb5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.633728 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71912b57-eb09-4973-aca5-4aec7d7d8fb5-config-data" (OuterVolumeSpecName: "config-data") pod "71912b57-eb09-4973-aca5-4aec7d7d8fb5" (UID: "71912b57-eb09-4973-aca5-4aec7d7d8fb5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.646434 4910 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\") on node \"crc\" "
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.647249 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5wn7\" (UniqueName: \"kubernetes.io/projected/71912b57-eb09-4973-aca5-4aec7d7d8fb5-kube-api-access-w5wn7\") on node \"crc\" DevicePath \"\""
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.647292 4910 reconciler_common.go:293] "Volume detached for volume \"pvc-07357e7e-76cc-49df-b0f0-87819efba45e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e\") on node \"crc\" DevicePath \"\""
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.647304 4910 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71912b57-eb09-4973-aca5-4aec7d7d8fb5-logs\") on node \"crc\" DevicePath \"\""
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.647317 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71912b57-eb09-4973-aca5-4aec7d7d8fb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.647327 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71912b57-eb09-4973-aca5-4aec7d7d8fb5-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.647337 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71912b57-eb09-4973-aca5-4aec7d7d8fb5-config-data\") on node \"crc\" DevicePath \"\""
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.670579 4910 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.670724 4910 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2ef62248-3f7a-4c99-851b-abb253e36db0" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0") on node "crc"
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.748997 4910 reconciler_common.go:293] "Volume detached for volume \"pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\") on node \"crc\" DevicePath \"\""
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.868409 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ed81c10e-0c5e-4483-b210-2d6df693f4f8","Type":"ContainerDied","Data":"9d5dac6e0bfa4065095432b20e81131e2d87bbf597ab1563391f5a03b59dabec"}
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.868461 4910 scope.go:117] "RemoveContainer" containerID="9d8d97c57656bc6ed1e8a1eb6a2c4bb2a8e2e0cd360f8f3f6151c5ab1feead9d"
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.868467 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.878790 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"71912b57-eb09-4973-aca5-4aec7d7d8fb5","Type":"ContainerDied","Data":"745a48b3325db1fdc1c832b7dfbbef7388a09167a9b593c3a6448d05407d6e7d"}
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.878803 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.882279 4910 generic.go:334] "Generic (PLEG): container finished" podID="5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f" containerID="84226330c05b1bc831c7c83ef434984cca7b626b02a8b79c806c9c5dc586299c" exitCode=0
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.882345 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cwz4m" event={"ID":"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f","Type":"ContainerDied","Data":"84226330c05b1bc831c7c83ef434984cca7b626b02a8b79c806c9c5dc586299c"}
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.907329 4910 scope.go:117] "RemoveContainer" containerID="0dc841b89572fa89a60120988416a0a35c49ae24ae720ac567cd7ce60927bb22"
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.964616 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.964652 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 26 22:17:33 crc kubenswrapper[4910]: I0226 22:17:33.980879 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.001971 4910 scope.go:117] "RemoveContainer" containerID="ddb2d802658d8b46a6bbe98d75235adcd5c06c12a84e7bc528871e70d23ed589"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.002595 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.015834 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 26 22:17:34 crc kubenswrapper[4910]: E0226 22:17:34.016327 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71912b57-eb09-4973-aca5-4aec7d7d8fb5" containerName="glance-log"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.016345 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="71912b57-eb09-4973-aca5-4aec7d7d8fb5" containerName="glance-log"
Feb 26 22:17:34 crc kubenswrapper[4910]: E0226 22:17:34.016357 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc719d44-8187-45f8-80e0-b4e3daa9b1eb" containerName="init"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.016364 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc719d44-8187-45f8-80e0-b4e3daa9b1eb" containerName="init"
Feb 26 22:17:34 crc kubenswrapper[4910]: E0226 22:17:34.016379 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed81c10e-0c5e-4483-b210-2d6df693f4f8" containerName="glance-log"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.016385 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed81c10e-0c5e-4483-b210-2d6df693f4f8" containerName="glance-log"
Feb 26 22:17:34 crc kubenswrapper[4910]: E0226 22:17:34.016397 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71912b57-eb09-4973-aca5-4aec7d7d8fb5" containerName="glance-httpd"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.016403 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="71912b57-eb09-4973-aca5-4aec7d7d8fb5" containerName="glance-httpd"
Feb 26 22:17:34 crc kubenswrapper[4910]: E0226 22:17:34.016439 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed81c10e-0c5e-4483-b210-2d6df693f4f8" containerName="glance-httpd"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.016445 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed81c10e-0c5e-4483-b210-2d6df693f4f8" containerName="glance-httpd"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.016615 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="71912b57-eb09-4973-aca5-4aec7d7d8fb5" containerName="glance-httpd"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.016628 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed81c10e-0c5e-4483-b210-2d6df693f4f8" containerName="glance-httpd"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.016637 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc719d44-8187-45f8-80e0-b4e3daa9b1eb" containerName="init"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.016651 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed81c10e-0c5e-4483-b210-2d6df693f4f8" containerName="glance-log"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.016663 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="71912b57-eb09-4973-aca5-4aec7d7d8fb5" containerName="glance-log"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.017709 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.019398 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wmnc7"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.019963 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.022875 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.023690 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.033275 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.041294 4910 scope.go:117] "RemoveContainer" containerID="8b7c93133a0ed3b1075f20e68dd6691cc58f4da7fcd9feb7f772031caeb92746"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.053402 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.089548 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.093819 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.094605 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.115346 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.166985 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ca6ff2-72d9-4372-93e5-148de7e30e3c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " pod="openstack/glance-default-internal-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.167350 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-07357e7e-76cc-49df-b0f0-87819efba45e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e\") pod \"glance-default-external-api-0\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " pod="openstack/glance-default-external-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.167740 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98d72f18-06a6-49d0-a63b-343d3fea1bb2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " pod="openstack/glance-default-external-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.168006 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4295t\" (UniqueName: \"kubernetes.io/projected/81ca6ff2-72d9-4372-93e5-148de7e30e3c-kube-api-access-4295t\") pod \"glance-default-internal-api-0\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " pod="openstack/glance-default-internal-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.168133 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/98d72f18-06a6-49d0-a63b-343d3fea1bb2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " pod="openstack/glance-default-external-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.168234 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ca6ff2-72d9-4372-93e5-148de7e30e3c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " pod="openstack/glance-default-internal-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.168321 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/81ca6ff2-72d9-4372-93e5-148de7e30e3c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " pod="openstack/glance-default-internal-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.168503 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shtjk\" (UniqueName: \"kubernetes.io/projected/98d72f18-06a6-49d0-a63b-343d3fea1bb2-kube-api-access-shtjk\") pod \"glance-default-external-api-0\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " pod="openstack/glance-default-external-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.168653 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/81ca6ff2-72d9-4372-93e5-148de7e30e3c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " pod="openstack/glance-default-internal-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.168758 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98d72f18-06a6-49d0-a63b-343d3fea1bb2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " pod="openstack/glance-default-external-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.168960 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ca6ff2-72d9-4372-93e5-148de7e30e3c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " pod="openstack/glance-default-internal-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.169062 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\") pod \"glance-default-internal-api-0\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " pod="openstack/glance-default-internal-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.169181 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98d72f18-06a6-49d0-a63b-343d3fea1bb2-config-data\") pod \"glance-default-external-api-0\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " pod="openstack/glance-default-external-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.169333 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ca6ff2-72d9-4372-93e5-148de7e30e3c-logs\") pod \"glance-default-internal-api-0\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " pod="openstack/glance-default-internal-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.169442 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98d72f18-06a6-49d0-a63b-343d3fea1bb2-scripts\") pod \"glance-default-external-api-0\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " pod="openstack/glance-default-external-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.169574 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98d72f18-06a6-49d0-a63b-343d3fea1bb2-logs\") pod \"glance-default-external-api-0\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " pod="openstack/glance-default-external-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.272361 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shtjk\" (UniqueName: \"kubernetes.io/projected/98d72f18-06a6-49d0-a63b-343d3fea1bb2-kube-api-access-shtjk\") pod \"glance-default-external-api-0\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " pod="openstack/glance-default-external-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.272439 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/81ca6ff2-72d9-4372-93e5-148de7e30e3c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " pod="openstack/glance-default-internal-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.272463 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98d72f18-06a6-49d0-a63b-343d3fea1bb2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " pod="openstack/glance-default-external-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.272560 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ca6ff2-72d9-4372-93e5-148de7e30e3c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " pod="openstack/glance-default-internal-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.272584 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\") pod \"glance-default-internal-api-0\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " pod="openstack/glance-default-internal-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.272611 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98d72f18-06a6-49d0-a63b-343d3fea1bb2-config-data\") pod \"glance-default-external-api-0\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " pod="openstack/glance-default-external-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.272655 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ca6ff2-72d9-4372-93e5-148de7e30e3c-logs\") pod \"glance-default-internal-api-0\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " pod="openstack/glance-default-internal-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.272678 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98d72f18-06a6-49d0-a63b-343d3fea1bb2-scripts\") pod \"glance-default-external-api-0\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " pod="openstack/glance-default-external-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.272709 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98d72f18-06a6-49d0-a63b-343d3fea1bb2-logs\") pod \"glance-default-external-api-0\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " pod="openstack/glance-default-external-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.272731 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ca6ff2-72d9-4372-93e5-148de7e30e3c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " pod="openstack/glance-default-internal-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.272765 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-07357e7e-76cc-49df-b0f0-87819efba45e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e\") pod \"glance-default-external-api-0\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " pod="openstack/glance-default-external-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.272795 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98d72f18-06a6-49d0-a63b-343d3fea1bb2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " pod="openstack/glance-default-external-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.272845 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4295t\" (UniqueName: \"kubernetes.io/projected/81ca6ff2-72d9-4372-93e5-148de7e30e3c-kube-api-access-4295t\") pod \"glance-default-internal-api-0\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " pod="openstack/glance-default-internal-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.272870 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/98d72f18-06a6-49d0-a63b-343d3fea1bb2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " pod="openstack/glance-default-external-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.272891 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ca6ff2-72d9-4372-93e5-148de7e30e3c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " pod="openstack/glance-default-internal-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.272918 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/81ca6ff2-72d9-4372-93e5-148de7e30e3c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " pod="openstack/glance-default-internal-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.274325 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/81ca6ff2-72d9-4372-93e5-148de7e30e3c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " pod="openstack/glance-default-internal-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.274781 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/98d72f18-06a6-49d0-a63b-343d3fea1bb2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " pod="openstack/glance-default-external-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.275270 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ca6ff2-72d9-4372-93e5-148de7e30e3c-logs\") pod \"glance-default-internal-api-0\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " pod="openstack/glance-default-internal-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.275586 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98d72f18-06a6-49d0-a63b-343d3fea1bb2-logs\") pod \"glance-default-external-api-0\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " pod="openstack/glance-default-external-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.277949 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98d72f18-06a6-49d0-a63b-343d3fea1bb2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " pod="openstack/glance-default-external-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.278388 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ca6ff2-72d9-4372-93e5-148de7e30e3c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " pod="openstack/glance-default-internal-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.278579 4910 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.278608 4910 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\") pod \"glance-default-internal-api-0\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a1fe3b589a0972626c208f995164ae337879055052c9895e085608499baca4b3/globalmount\"" pod="openstack/glance-default-internal-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.278654 4910 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.278681 4910 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-07357e7e-76cc-49df-b0f0-87819efba45e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e\") pod \"glance-default-external-api-0\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/94049027790e22a18bf8e430a446734369cbf41eb4d29ad3f70f496aca7abf57/globalmount\"" pod="openstack/glance-default-external-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.279306 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ca6ff2-72d9-4372-93e5-148de7e30e3c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " pod="openstack/glance-default-internal-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.280017 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/81ca6ff2-72d9-4372-93e5-148de7e30e3c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " pod="openstack/glance-default-internal-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.284900 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98d72f18-06a6-49d0-a63b-343d3fea1bb2-scripts\") pod \"glance-default-external-api-0\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " pod="openstack/glance-default-external-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.285153 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98d72f18-06a6-49d0-a63b-343d3fea1bb2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " pod="openstack/glance-default-external-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.292972 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ca6ff2-72d9-4372-93e5-148de7e30e3c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " pod="openstack/glance-default-internal-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.293091 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98d72f18-06a6-49d0-a63b-343d3fea1bb2-config-data\") pod \"glance-default-external-api-0\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " pod="openstack/glance-default-external-api-0"
Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.293148 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shtjk\" (UniqueName: \"kubernetes.io/projected/98d72f18-06a6-49d0-a63b-343d3fea1bb2-kube-api-access-shtjk\") pod
\"glance-default-external-api-0\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " pod="openstack/glance-default-external-api-0" Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.294913 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4295t\" (UniqueName: \"kubernetes.io/projected/81ca6ff2-72d9-4372-93e5-148de7e30e3c-kube-api-access-4295t\") pod \"glance-default-internal-api-0\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.326498 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-07357e7e-76cc-49df-b0f0-87819efba45e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e\") pod \"glance-default-external-api-0\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " pod="openstack/glance-default-external-api-0" Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.330036 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\") pod \"glance-default-internal-api-0\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.360643 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.438602 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.898651 4910 generic.go:334] "Generic (PLEG): container finished" podID="58f067fa-7653-4dd7-93ee-bef006c01109" containerID="abfa55ec3c7063c27c5b19141a258688bc3368d72fae40a169336276f35f6e50" exitCode=0 Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.899082 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2hp89" event={"ID":"58f067fa-7653-4dd7-93ee-bef006c01109","Type":"ContainerDied","Data":"abfa55ec3c7063c27c5b19141a258688bc3368d72fae40a169336276f35f6e50"} Feb 26 22:17:34 crc kubenswrapper[4910]: I0226 22:17:34.963934 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 22:17:35 crc kubenswrapper[4910]: I0226 22:17:35.053780 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 22:17:35 crc kubenswrapper[4910]: I0226 22:17:35.737411 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 26 22:17:35 crc kubenswrapper[4910]: I0226 22:17:35.923990 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71912b57-eb09-4973-aca5-4aec7d7d8fb5" path="/var/lib/kubelet/pods/71912b57-eb09-4973-aca5-4aec7d7d8fb5/volumes" Feb 26 22:17:35 crc kubenswrapper[4910]: I0226 22:17:35.933488 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed81c10e-0c5e-4483-b210-2d6df693f4f8" path="/var/lib/kubelet/pods/ed81c10e-0c5e-4483-b210-2d6df693f4f8/volumes" Feb 26 22:17:35 crc kubenswrapper[4910]: I0226 22:17:35.934774 4910 generic.go:334] "Generic (PLEG): container finished" podID="eeb12d5b-0ec7-48d5-b1ef-9e378c030b75" containerID="da0a17571397fe5aa598a3a2cbd6c2b4bf2e699e5748ea9ba6c6e2e48c434358" exitCode=0 Feb 26 22:17:35 crc kubenswrapper[4910]: I0226 22:17:35.934986 4910 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/barbican-db-sync-lxj26" event={"ID":"eeb12d5b-0ec7-48d5-b1ef-9e378c030b75","Type":"ContainerDied","Data":"da0a17571397fe5aa598a3a2cbd6c2b4bf2e699e5748ea9ba6c6e2e48c434358"} Feb 26 22:17:37 crc kubenswrapper[4910]: I0226 22:17:37.960012 4910 generic.go:334] "Generic (PLEG): container finished" podID="d861622f-ed9a-4709-824c-bb291c4639a5" containerID="e5084e2799e64984714eebda9fdfb632754a254d21eca853ae64bf19c293fa06" exitCode=0 Feb 26 22:17:37 crc kubenswrapper[4910]: I0226 22:17:37.960089 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6xcs7" event={"ID":"d861622f-ed9a-4709-824c-bb291c4639a5","Type":"ContainerDied","Data":"e5084e2799e64984714eebda9fdfb632754a254d21eca853ae64bf19c293fa06"} Feb 26 22:17:38 crc kubenswrapper[4910]: W0226 22:17:38.563806 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81ca6ff2_72d9_4372_93e5_148de7e30e3c.slice/crio-15c6b461083d7b4f2b57886c4f4cdc7a2767e38c8bb43ab98d456df920b230c5 WatchSource:0}: Error finding container 15c6b461083d7b4f2b57886c4f4cdc7a2767e38c8bb43ab98d456df920b230c5: Status 404 returned error can't find the container with id 15c6b461083d7b4f2b57886c4f4cdc7a2767e38c8bb43ab98d456df920b230c5 Feb 26 22:17:38 crc kubenswrapper[4910]: W0226 22:17:38.570675 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98d72f18_06a6_49d0_a63b_343d3fea1bb2.slice/crio-7635cff7ba2b951aa46a2902ecad3133e11deed4d6aecd0aa18fe72aee9d9a0a WatchSource:0}: Error finding container 7635cff7ba2b951aa46a2902ecad3133e11deed4d6aecd0aa18fe72aee9d9a0a: Status 404 returned error can't find the container with id 7635cff7ba2b951aa46a2902ecad3133e11deed4d6aecd0aa18fe72aee9d9a0a Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.870576 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cwz4m" Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.877130 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-lxj26" Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.922458 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2hp89" Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.967201 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-config-data\") pod \"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\" (UID: \"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\") " Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.967274 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dccm\" (UniqueName: \"kubernetes.io/projected/eeb12d5b-0ec7-48d5-b1ef-9e378c030b75-kube-api-access-7dccm\") pod \"eeb12d5b-0ec7-48d5-b1ef-9e378c030b75\" (UID: \"eeb12d5b-0ec7-48d5-b1ef-9e378c030b75\") " Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.967295 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58f067fa-7653-4dd7-93ee-bef006c01109-scripts\") pod \"58f067fa-7653-4dd7-93ee-bef006c01109\" (UID: \"58f067fa-7653-4dd7-93ee-bef006c01109\") " Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.967348 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58f067fa-7653-4dd7-93ee-bef006c01109-combined-ca-bundle\") pod \"58f067fa-7653-4dd7-93ee-bef006c01109\" (UID: \"58f067fa-7653-4dd7-93ee-bef006c01109\") " Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.967405 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-scripts\") pod \"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\" (UID: \"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\") " Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.967437 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb12d5b-0ec7-48d5-b1ef-9e378c030b75-combined-ca-bundle\") pod \"eeb12d5b-0ec7-48d5-b1ef-9e378c030b75\" (UID: \"eeb12d5b-0ec7-48d5-b1ef-9e378c030b75\") " Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.967483 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8tkp\" (UniqueName: \"kubernetes.io/projected/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-kube-api-access-m8tkp\") pod \"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\" (UID: \"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\") " Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.967530 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-combined-ca-bundle\") pod \"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\" (UID: \"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\") " Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.967557 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eeb12d5b-0ec7-48d5-b1ef-9e378c030b75-db-sync-config-data\") pod \"eeb12d5b-0ec7-48d5-b1ef-9e378c030b75\" (UID: \"eeb12d5b-0ec7-48d5-b1ef-9e378c030b75\") " Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.967579 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-fernet-keys\") pod \"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\" (UID: \"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\") " Feb 26 22:17:38 
crc kubenswrapper[4910]: I0226 22:17:38.967623 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58f067fa-7653-4dd7-93ee-bef006c01109-config-data\") pod \"58f067fa-7653-4dd7-93ee-bef006c01109\" (UID: \"58f067fa-7653-4dd7-93ee-bef006c01109\") " Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.967667 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf7tx\" (UniqueName: \"kubernetes.io/projected/58f067fa-7653-4dd7-93ee-bef006c01109-kube-api-access-tf7tx\") pod \"58f067fa-7653-4dd7-93ee-bef006c01109\" (UID: \"58f067fa-7653-4dd7-93ee-bef006c01109\") " Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.967712 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-credential-keys\") pod \"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\" (UID: \"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f\") " Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.967744 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58f067fa-7653-4dd7-93ee-bef006c01109-logs\") pod \"58f067fa-7653-4dd7-93ee-bef006c01109\" (UID: \"58f067fa-7653-4dd7-93ee-bef006c01109\") " Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.968799 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58f067fa-7653-4dd7-93ee-bef006c01109-logs" (OuterVolumeSpecName: "logs") pod "58f067fa-7653-4dd7-93ee-bef006c01109" (UID: "58f067fa-7653-4dd7-93ee-bef006c01109"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.973469 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-scripts" (OuterVolumeSpecName: "scripts") pod "5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f" (UID: "5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.978320 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeb12d5b-0ec7-48d5-b1ef-9e378c030b75-kube-api-access-7dccm" (OuterVolumeSpecName: "kube-api-access-7dccm") pod "eeb12d5b-0ec7-48d5-b1ef-9e378c030b75" (UID: "eeb12d5b-0ec7-48d5-b1ef-9e378c030b75"). InnerVolumeSpecName "kube-api-access-7dccm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.978414 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58f067fa-7653-4dd7-93ee-bef006c01109-scripts" (OuterVolumeSpecName: "scripts") pod "58f067fa-7653-4dd7-93ee-bef006c01109" (UID: "58f067fa-7653-4dd7-93ee-bef006c01109"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.978637 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f" (UID: "5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.979010 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-lxj26" Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.979008 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lxj26" event={"ID":"eeb12d5b-0ec7-48d5-b1ef-9e378c030b75","Type":"ContainerDied","Data":"ccf59614b993c1f801c733ebb7e16e88b98d719f55aec99a3ffbb826eea396f2"} Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.979142 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccf59614b993c1f801c733ebb7e16e88b98d719f55aec99a3ffbb826eea396f2" Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.982537 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cwz4m" Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.982552 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cwz4m" event={"ID":"5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f","Type":"ContainerDied","Data":"c720ce29f22c9a0865b71409a79ebcfc8c676b2d84dbb8704fb78f2d16891bf4"} Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.982673 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c720ce29f22c9a0865b71409a79ebcfc8c676b2d84dbb8704fb78f2d16891bf4" Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.984254 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"81ca6ff2-72d9-4372-93e5-148de7e30e3c","Type":"ContainerStarted","Data":"15c6b461083d7b4f2b57886c4f4cdc7a2767e38c8bb43ab98d456df920b230c5"} Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.985253 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-kube-api-access-m8tkp" (OuterVolumeSpecName: "kube-api-access-m8tkp") pod "5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f" (UID: "5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f"). 
InnerVolumeSpecName "kube-api-access-m8tkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.987401 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58f067fa-7653-4dd7-93ee-bef006c01109-kube-api-access-tf7tx" (OuterVolumeSpecName: "kube-api-access-tf7tx") pod "58f067fa-7653-4dd7-93ee-bef006c01109" (UID: "58f067fa-7653-4dd7-93ee-bef006c01109"). InnerVolumeSpecName "kube-api-access-tf7tx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.992943 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeb12d5b-0ec7-48d5-b1ef-9e378c030b75-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "eeb12d5b-0ec7-48d5-b1ef-9e378c030b75" (UID: "eeb12d5b-0ec7-48d5-b1ef-9e378c030b75"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.993273 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f" (UID: "5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:38 crc kubenswrapper[4910]: I0226 22:17:38.995517 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"98d72f18-06a6-49d0-a63b-343d3fea1bb2","Type":"ContainerStarted","Data":"7635cff7ba2b951aa46a2902ecad3133e11deed4d6aecd0aa18fe72aee9d9a0a"} Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.026646 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-2hp89" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.027053 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2hp89" event={"ID":"58f067fa-7653-4dd7-93ee-bef006c01109","Type":"ContainerDied","Data":"b9bfe0da523e64440bf1ff147295a224e8dbd4a9415da4a4b06564542057b79b"} Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.027082 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9bfe0da523e64440bf1ff147295a224e8dbd4a9415da4a4b06564542057b79b" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.030448 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58f067fa-7653-4dd7-93ee-bef006c01109-config-data" (OuterVolumeSpecName: "config-data") pod "58f067fa-7653-4dd7-93ee-bef006c01109" (UID: "58f067fa-7653-4dd7-93ee-bef006c01109"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.032329 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeb12d5b-0ec7-48d5-b1ef-9e378c030b75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eeb12d5b-0ec7-48d5-b1ef-9e378c030b75" (UID: "eeb12d5b-0ec7-48d5-b1ef-9e378c030b75"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.039553 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-config-data" (OuterVolumeSpecName: "config-data") pod "5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f" (UID: "5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.047775 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58f067fa-7653-4dd7-93ee-bef006c01109-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58f067fa-7653-4dd7-93ee-bef006c01109" (UID: "58f067fa-7653-4dd7-93ee-bef006c01109"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.053513 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f" (UID: "5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.069475 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58f067fa-7653-4dd7-93ee-bef006c01109-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.069517 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf7tx\" (UniqueName: \"kubernetes.io/projected/58f067fa-7653-4dd7-93ee-bef006c01109-kube-api-access-tf7tx\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.069530 4910 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.069540 4910 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58f067fa-7653-4dd7-93ee-bef006c01109-logs\") on node \"crc\" DevicePath 
\"\"" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.069550 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.069560 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dccm\" (UniqueName: \"kubernetes.io/projected/eeb12d5b-0ec7-48d5-b1ef-9e378c030b75-kube-api-access-7dccm\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.069567 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58f067fa-7653-4dd7-93ee-bef006c01109-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.069596 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58f067fa-7653-4dd7-93ee-bef006c01109-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.069606 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.069615 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb12d5b-0ec7-48d5-b1ef-9e378c030b75-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.069626 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8tkp\" (UniqueName: \"kubernetes.io/projected/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-kube-api-access-m8tkp\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.069635 4910 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.069645 4910 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eeb12d5b-0ec7-48d5-b1ef-9e378c030b75-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.069654 4910 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.382731 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6xcs7" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.476046 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ct4z\" (UniqueName: \"kubernetes.io/projected/d861622f-ed9a-4709-824c-bb291c4639a5-kube-api-access-6ct4z\") pod \"d861622f-ed9a-4709-824c-bb291c4639a5\" (UID: \"d861622f-ed9a-4709-824c-bb291c4639a5\") " Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.476469 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d861622f-ed9a-4709-824c-bb291c4639a5-etc-machine-id\") pod \"d861622f-ed9a-4709-824c-bb291c4639a5\" (UID: \"d861622f-ed9a-4709-824c-bb291c4639a5\") " Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.476569 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d861622f-ed9a-4709-824c-bb291c4639a5-db-sync-config-data\") pod \"d861622f-ed9a-4709-824c-bb291c4639a5\" (UID: \"d861622f-ed9a-4709-824c-bb291c4639a5\") " Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 
22:17:39.476669 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d861622f-ed9a-4709-824c-bb291c4639a5-scripts\") pod \"d861622f-ed9a-4709-824c-bb291c4639a5\" (UID: \"d861622f-ed9a-4709-824c-bb291c4639a5\") " Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.476640 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d861622f-ed9a-4709-824c-bb291c4639a5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d861622f-ed9a-4709-824c-bb291c4639a5" (UID: "d861622f-ed9a-4709-824c-bb291c4639a5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.476730 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d861622f-ed9a-4709-824c-bb291c4639a5-combined-ca-bundle\") pod \"d861622f-ed9a-4709-824c-bb291c4639a5\" (UID: \"d861622f-ed9a-4709-824c-bb291c4639a5\") " Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.476877 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d861622f-ed9a-4709-824c-bb291c4639a5-config-data\") pod \"d861622f-ed9a-4709-824c-bb291c4639a5\" (UID: \"d861622f-ed9a-4709-824c-bb291c4639a5\") " Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.477314 4910 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d861622f-ed9a-4709-824c-bb291c4639a5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.480262 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d861622f-ed9a-4709-824c-bb291c4639a5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod 
"d861622f-ed9a-4709-824c-bb291c4639a5" (UID: "d861622f-ed9a-4709-824c-bb291c4639a5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.480668 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d861622f-ed9a-4709-824c-bb291c4639a5-kube-api-access-6ct4z" (OuterVolumeSpecName: "kube-api-access-6ct4z") pod "d861622f-ed9a-4709-824c-bb291c4639a5" (UID: "d861622f-ed9a-4709-824c-bb291c4639a5"). InnerVolumeSpecName "kube-api-access-6ct4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.481104 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d861622f-ed9a-4709-824c-bb291c4639a5-scripts" (OuterVolumeSpecName: "scripts") pod "d861622f-ed9a-4709-824c-bb291c4639a5" (UID: "d861622f-ed9a-4709-824c-bb291c4639a5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.505294 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d861622f-ed9a-4709-824c-bb291c4639a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d861622f-ed9a-4709-824c-bb291c4639a5" (UID: "d861622f-ed9a-4709-824c-bb291c4639a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.539733 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d861622f-ed9a-4709-824c-bb291c4639a5-config-data" (OuterVolumeSpecName: "config-data") pod "d861622f-ed9a-4709-824c-bb291c4639a5" (UID: "d861622f-ed9a-4709-824c-bb291c4639a5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.579858 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d861622f-ed9a-4709-824c-bb291c4639a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.579906 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d861622f-ed9a-4709-824c-bb291c4639a5-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.579919 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ct4z\" (UniqueName: \"kubernetes.io/projected/d861622f-ed9a-4709-824c-bb291c4639a5-kube-api-access-6ct4z\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.579932 4910 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d861622f-ed9a-4709-824c-bb291c4639a5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:39 crc kubenswrapper[4910]: I0226 22:17:39.579944 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d861622f-ed9a-4709-824c-bb291c4639a5-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.081061 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"81ca6ff2-72d9-4372-93e5-148de7e30e3c","Type":"ContainerStarted","Data":"939f95704fd10da50c6094029ae883c6cf9d0e7232a0425edcb183c06c179e5b"} Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.081349 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"81ca6ff2-72d9-4372-93e5-148de7e30e3c","Type":"ContainerStarted","Data":"5cf6e219d1b3420da9d0a257741e8ce6156dc794b80fabb8f1857d13f5884fb2"} Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.099505 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f","Type":"ContainerStarted","Data":"86ba7cfa670f936b2c22be3638236e7de871d9a239017e74755304996f2a06cd"} Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.101373 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7b97844b46-5cn8n"] Feb 26 22:17:40 crc kubenswrapper[4910]: E0226 22:17:40.101830 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb12d5b-0ec7-48d5-b1ef-9e378c030b75" containerName="barbican-db-sync" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.101848 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb12d5b-0ec7-48d5-b1ef-9e378c030b75" containerName="barbican-db-sync" Feb 26 22:17:40 crc kubenswrapper[4910]: E0226 22:17:40.101866 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d861622f-ed9a-4709-824c-bb291c4639a5" containerName="cinder-db-sync" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.101872 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="d861622f-ed9a-4709-824c-bb291c4639a5" containerName="cinder-db-sync" Feb 26 22:17:40 crc kubenswrapper[4910]: E0226 22:17:40.101883 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f" containerName="keystone-bootstrap" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.101889 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f" containerName="keystone-bootstrap" Feb 26 22:17:40 crc kubenswrapper[4910]: E0226 22:17:40.101902 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58f067fa-7653-4dd7-93ee-bef006c01109" 
containerName="placement-db-sync" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.101908 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="58f067fa-7653-4dd7-93ee-bef006c01109" containerName="placement-db-sync" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.102231 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="d861622f-ed9a-4709-824c-bb291c4639a5" containerName="cinder-db-sync" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.102247 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeb12d5b-0ec7-48d5-b1ef-9e378c030b75" containerName="barbican-db-sync" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.102255 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f" containerName="keystone-bootstrap" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.102269 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="58f067fa-7653-4dd7-93ee-bef006c01109" containerName="placement-db-sync" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.105544 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"98d72f18-06a6-49d0-a63b-343d3fea1bb2","Type":"ContainerStarted","Data":"f6d077132a0e46fb64a9ec3b4e3d969c97022d5ab74f6cc64d48f0817755c4b1"} Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.105677 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"98d72f18-06a6-49d0-a63b-343d3fea1bb2","Type":"ContainerStarted","Data":"a9faa3beeb89e54f6198f575b430ba30d8029de2e9e2367d3dfc1f745b400cfc"} Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.105649 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7b97844b46-5cn8n" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.111153 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6xcs7" event={"ID":"d861622f-ed9a-4709-824c-bb291c4639a5","Type":"ContainerDied","Data":"f03fe6e62667a56206e77cab0cf82510f09834dda185233da06368b1cfb97279"} Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.111199 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f03fe6e62667a56206e77cab0cf82510f09834dda185233da06368b1cfb97279" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.111278 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6xcs7" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.129091 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.131127 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.131526 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2mnfl" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.131740 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.133497 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.133673 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-79f8b87c99-7mvnv"] Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.134886 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-79f8b87c99-7mvnv" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.142704 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-r29m2" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.143137 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.143342 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.143440 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.143566 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.143599 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.151243 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.151223324 podStartE2EDuration="7.151223324s" podCreationTimestamp="2026-02-26 22:17:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:17:40.115974763 +0000 UTC m=+1345.195465324" watchObservedRunningTime="2026-02-26 22:17:40.151223324 +0000 UTC m=+1345.230713865" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.220378 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-79f8b87c99-7mvnv"] Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.251930 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7b97844b46-5cn8n"] Feb 26 22:17:40 crc kubenswrapper[4910]: 
I0226 22:17:40.282128 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.282104598 podStartE2EDuration="7.282104598s" podCreationTimestamp="2026-02-26 22:17:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:17:40.208278603 +0000 UTC m=+1345.287769144" watchObservedRunningTime="2026-02-26 22:17:40.282104598 +0000 UTC m=+1345.361595139" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.301499 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9e24dbd-bccd-4c17-b640-b183d1f296e7-config-data\") pod \"keystone-79f8b87c99-7mvnv\" (UID: \"c9e24dbd-bccd-4c17-b640-b183d1f296e7\") " pod="openstack/keystone-79f8b87c99-7mvnv" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.301554 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-combined-ca-bundle\") pod \"placement-7b97844b46-5cn8n\" (UID: \"79453bea-3afe-4822-a09c-734dba08b9ef\") " pod="openstack/placement-7b97844b46-5cn8n" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.301580 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-config-data\") pod \"placement-7b97844b46-5cn8n\" (UID: \"79453bea-3afe-4822-a09c-734dba08b9ef\") " pod="openstack/placement-7b97844b46-5cn8n" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.301604 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9e24dbd-bccd-4c17-b640-b183d1f296e7-scripts\") pod 
\"keystone-79f8b87c99-7mvnv\" (UID: \"c9e24dbd-bccd-4c17-b640-b183d1f296e7\") " pod="openstack/keystone-79f8b87c99-7mvnv" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.301622 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c9e24dbd-bccd-4c17-b640-b183d1f296e7-credential-keys\") pod \"keystone-79f8b87c99-7mvnv\" (UID: \"c9e24dbd-bccd-4c17-b640-b183d1f296e7\") " pod="openstack/keystone-79f8b87c99-7mvnv" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.301645 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-public-tls-certs\") pod \"placement-7b97844b46-5cn8n\" (UID: \"79453bea-3afe-4822-a09c-734dba08b9ef\") " pod="openstack/placement-7b97844b46-5cn8n" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.301707 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9e24dbd-bccd-4c17-b640-b183d1f296e7-combined-ca-bundle\") pod \"keystone-79f8b87c99-7mvnv\" (UID: \"c9e24dbd-bccd-4c17-b640-b183d1f296e7\") " pod="openstack/keystone-79f8b87c99-7mvnv" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.301734 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79453bea-3afe-4822-a09c-734dba08b9ef-logs\") pod \"placement-7b97844b46-5cn8n\" (UID: \"79453bea-3afe-4822-a09c-734dba08b9ef\") " pod="openstack/placement-7b97844b46-5cn8n" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.301751 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-internal-tls-certs\") 
pod \"placement-7b97844b46-5cn8n\" (UID: \"79453bea-3afe-4822-a09c-734dba08b9ef\") " pod="openstack/placement-7b97844b46-5cn8n" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.301791 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlbxk\" (UniqueName: \"kubernetes.io/projected/c9e24dbd-bccd-4c17-b640-b183d1f296e7-kube-api-access-qlbxk\") pod \"keystone-79f8b87c99-7mvnv\" (UID: \"c9e24dbd-bccd-4c17-b640-b183d1f296e7\") " pod="openstack/keystone-79f8b87c99-7mvnv" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.301828 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncd76\" (UniqueName: \"kubernetes.io/projected/79453bea-3afe-4822-a09c-734dba08b9ef-kube-api-access-ncd76\") pod \"placement-7b97844b46-5cn8n\" (UID: \"79453bea-3afe-4822-a09c-734dba08b9ef\") " pod="openstack/placement-7b97844b46-5cn8n" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.301843 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c9e24dbd-bccd-4c17-b640-b183d1f296e7-fernet-keys\") pod \"keystone-79f8b87c99-7mvnv\" (UID: \"c9e24dbd-bccd-4c17-b640-b183d1f296e7\") " pod="openstack/keystone-79f8b87c99-7mvnv" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.301883 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9e24dbd-bccd-4c17-b640-b183d1f296e7-internal-tls-certs\") pod \"keystone-79f8b87c99-7mvnv\" (UID: \"c9e24dbd-bccd-4c17-b640-b183d1f296e7\") " pod="openstack/keystone-79f8b87c99-7mvnv" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.301918 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-scripts\") pod \"placement-7b97844b46-5cn8n\" (UID: \"79453bea-3afe-4822-a09c-734dba08b9ef\") " pod="openstack/placement-7b97844b46-5cn8n" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.301938 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9e24dbd-bccd-4c17-b640-b183d1f296e7-public-tls-certs\") pod \"keystone-79f8b87c99-7mvnv\" (UID: \"c9e24dbd-bccd-4c17-b640-b183d1f296e7\") " pod="openstack/keystone-79f8b87c99-7mvnv" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.340212 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7c8489996f-ljk4s"] Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.341989 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7c8489996f-ljk4s" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.355962 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.357715 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7c8489996f-ljk4s"] Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.373410 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-wh59c" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.373503 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.403453 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlbxk\" (UniqueName: \"kubernetes.io/projected/c9e24dbd-bccd-4c17-b640-b183d1f296e7-kube-api-access-qlbxk\") pod \"keystone-79f8b87c99-7mvnv\" (UID: \"c9e24dbd-bccd-4c17-b640-b183d1f296e7\") " 
pod="openstack/keystone-79f8b87c99-7mvnv" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.403514 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncd76\" (UniqueName: \"kubernetes.io/projected/79453bea-3afe-4822-a09c-734dba08b9ef-kube-api-access-ncd76\") pod \"placement-7b97844b46-5cn8n\" (UID: \"79453bea-3afe-4822-a09c-734dba08b9ef\") " pod="openstack/placement-7b97844b46-5cn8n" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.403533 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c9e24dbd-bccd-4c17-b640-b183d1f296e7-fernet-keys\") pod \"keystone-79f8b87c99-7mvnv\" (UID: \"c9e24dbd-bccd-4c17-b640-b183d1f296e7\") " pod="openstack/keystone-79f8b87c99-7mvnv" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.403561 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9e24dbd-bccd-4c17-b640-b183d1f296e7-internal-tls-certs\") pod \"keystone-79f8b87c99-7mvnv\" (UID: \"c9e24dbd-bccd-4c17-b640-b183d1f296e7\") " pod="openstack/keystone-79f8b87c99-7mvnv" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.403589 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-scripts\") pod \"placement-7b97844b46-5cn8n\" (UID: \"79453bea-3afe-4822-a09c-734dba08b9ef\") " pod="openstack/placement-7b97844b46-5cn8n" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.403615 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9e24dbd-bccd-4c17-b640-b183d1f296e7-public-tls-certs\") pod \"keystone-79f8b87c99-7mvnv\" (UID: \"c9e24dbd-bccd-4c17-b640-b183d1f296e7\") " pod="openstack/keystone-79f8b87c99-7mvnv" Feb 26 22:17:40 crc kubenswrapper[4910]: 
I0226 22:17:40.403647 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9e24dbd-bccd-4c17-b640-b183d1f296e7-config-data\") pod \"keystone-79f8b87c99-7mvnv\" (UID: \"c9e24dbd-bccd-4c17-b640-b183d1f296e7\") " pod="openstack/keystone-79f8b87c99-7mvnv" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.403673 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-combined-ca-bundle\") pod \"placement-7b97844b46-5cn8n\" (UID: \"79453bea-3afe-4822-a09c-734dba08b9ef\") " pod="openstack/placement-7b97844b46-5cn8n" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.403696 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-config-data\") pod \"placement-7b97844b46-5cn8n\" (UID: \"79453bea-3afe-4822-a09c-734dba08b9ef\") " pod="openstack/placement-7b97844b46-5cn8n" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.403716 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9e24dbd-bccd-4c17-b640-b183d1f296e7-scripts\") pod \"keystone-79f8b87c99-7mvnv\" (UID: \"c9e24dbd-bccd-4c17-b640-b183d1f296e7\") " pod="openstack/keystone-79f8b87c99-7mvnv" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.403732 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c9e24dbd-bccd-4c17-b640-b183d1f296e7-credential-keys\") pod \"keystone-79f8b87c99-7mvnv\" (UID: \"c9e24dbd-bccd-4c17-b640-b183d1f296e7\") " pod="openstack/keystone-79f8b87c99-7mvnv" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.403756 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-public-tls-certs\") pod \"placement-7b97844b46-5cn8n\" (UID: \"79453bea-3afe-4822-a09c-734dba08b9ef\") " pod="openstack/placement-7b97844b46-5cn8n" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.403803 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9e24dbd-bccd-4c17-b640-b183d1f296e7-combined-ca-bundle\") pod \"keystone-79f8b87c99-7mvnv\" (UID: \"c9e24dbd-bccd-4c17-b640-b183d1f296e7\") " pod="openstack/keystone-79f8b87c99-7mvnv" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.403831 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79453bea-3afe-4822-a09c-734dba08b9ef-logs\") pod \"placement-7b97844b46-5cn8n\" (UID: \"79453bea-3afe-4822-a09c-734dba08b9ef\") " pod="openstack/placement-7b97844b46-5cn8n" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.403847 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-internal-tls-certs\") pod \"placement-7b97844b46-5cn8n\" (UID: \"79453bea-3afe-4822-a09c-734dba08b9ef\") " pod="openstack/placement-7b97844b46-5cn8n" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.410866 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5dd849cb94-qw888"] Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.412533 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5dd849cb94-qw888" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.419691 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.420507 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79453bea-3afe-4822-a09c-734dba08b9ef-logs\") pod \"placement-7b97844b46-5cn8n\" (UID: \"79453bea-3afe-4822-a09c-734dba08b9ef\") " pod="openstack/placement-7b97844b46-5cn8n" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.446458 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-combined-ca-bundle\") pod \"placement-7b97844b46-5cn8n\" (UID: \"79453bea-3afe-4822-a09c-734dba08b9ef\") " pod="openstack/placement-7b97844b46-5cn8n" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.447446 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c9e24dbd-bccd-4c17-b640-b183d1f296e7-credential-keys\") pod \"keystone-79f8b87c99-7mvnv\" (UID: \"c9e24dbd-bccd-4c17-b640-b183d1f296e7\") " pod="openstack/keystone-79f8b87c99-7mvnv" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.458043 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5dd849cb94-qw888"] Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.458921 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-public-tls-certs\") pod \"placement-7b97844b46-5cn8n\" (UID: \"79453bea-3afe-4822-a09c-734dba08b9ef\") " pod="openstack/placement-7b97844b46-5cn8n" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 
22:17:40.459537 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-internal-tls-certs\") pod \"placement-7b97844b46-5cn8n\" (UID: \"79453bea-3afe-4822-a09c-734dba08b9ef\") " pod="openstack/placement-7b97844b46-5cn8n" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.459543 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-scripts\") pod \"placement-7b97844b46-5cn8n\" (UID: \"79453bea-3afe-4822-a09c-734dba08b9ef\") " pod="openstack/placement-7b97844b46-5cn8n" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.460015 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9e24dbd-bccd-4c17-b640-b183d1f296e7-config-data\") pod \"keystone-79f8b87c99-7mvnv\" (UID: \"c9e24dbd-bccd-4c17-b640-b183d1f296e7\") " pod="openstack/keystone-79f8b87c99-7mvnv" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.460186 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9e24dbd-bccd-4c17-b640-b183d1f296e7-public-tls-certs\") pod \"keystone-79f8b87c99-7mvnv\" (UID: \"c9e24dbd-bccd-4c17-b640-b183d1f296e7\") " pod="openstack/keystone-79f8b87c99-7mvnv" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.464738 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlbxk\" (UniqueName: \"kubernetes.io/projected/c9e24dbd-bccd-4c17-b640-b183d1f296e7-kube-api-access-qlbxk\") pod \"keystone-79f8b87c99-7mvnv\" (UID: \"c9e24dbd-bccd-4c17-b640-b183d1f296e7\") " pod="openstack/keystone-79f8b87c99-7mvnv" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.465087 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c9e24dbd-bccd-4c17-b640-b183d1f296e7-scripts\") pod \"keystone-79f8b87c99-7mvnv\" (UID: \"c9e24dbd-bccd-4c17-b640-b183d1f296e7\") " pod="openstack/keystone-79f8b87c99-7mvnv" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.468760 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9e24dbd-bccd-4c17-b640-b183d1f296e7-combined-ca-bundle\") pod \"keystone-79f8b87c99-7mvnv\" (UID: \"c9e24dbd-bccd-4c17-b640-b183d1f296e7\") " pod="openstack/keystone-79f8b87c99-7mvnv" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.473812 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9e24dbd-bccd-4c17-b640-b183d1f296e7-internal-tls-certs\") pod \"keystone-79f8b87c99-7mvnv\" (UID: \"c9e24dbd-bccd-4c17-b640-b183d1f296e7\") " pod="openstack/keystone-79f8b87c99-7mvnv" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.474913 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c9e24dbd-bccd-4c17-b640-b183d1f296e7-fernet-keys\") pod \"keystone-79f8b87c99-7mvnv\" (UID: \"c9e24dbd-bccd-4c17-b640-b183d1f296e7\") " pod="openstack/keystone-79f8b87c99-7mvnv" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.487299 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-config-data\") pod \"placement-7b97844b46-5cn8n\" (UID: \"79453bea-3afe-4822-a09c-734dba08b9ef\") " pod="openstack/placement-7b97844b46-5cn8n" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.501095 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncd76\" (UniqueName: \"kubernetes.io/projected/79453bea-3afe-4822-a09c-734dba08b9ef-kube-api-access-ncd76\") pod \"placement-7b97844b46-5cn8n\" (UID: 
\"79453bea-3afe-4822-a09c-734dba08b9ef\") " pod="openstack/placement-7b97844b46-5cn8n" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.505670 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42767c30-5b6b-4df6-9237-962c97165901-logs\") pod \"barbican-worker-7c8489996f-ljk4s\" (UID: \"42767c30-5b6b-4df6-9237-962c97165901\") " pod="openstack/barbican-worker-7c8489996f-ljk4s" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.505728 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42767c30-5b6b-4df6-9237-962c97165901-config-data\") pod \"barbican-worker-7c8489996f-ljk4s\" (UID: \"42767c30-5b6b-4df6-9237-962c97165901\") " pod="openstack/barbican-worker-7c8489996f-ljk4s" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.505755 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m49gn\" (UniqueName: \"kubernetes.io/projected/42767c30-5b6b-4df6-9237-962c97165901-kube-api-access-m49gn\") pod \"barbican-worker-7c8489996f-ljk4s\" (UID: \"42767c30-5b6b-4df6-9237-962c97165901\") " pod="openstack/barbican-worker-7c8489996f-ljk4s" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.505775 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64d0c766-199e-40bf-b21c-ed64d433a17d-config-data-custom\") pod \"barbican-keystone-listener-5dd849cb94-qw888\" (UID: \"64d0c766-199e-40bf-b21c-ed64d433a17d\") " pod="openstack/barbican-keystone-listener-5dd849cb94-qw888" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.505799 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/64d0c766-199e-40bf-b21c-ed64d433a17d-combined-ca-bundle\") pod \"barbican-keystone-listener-5dd849cb94-qw888\" (UID: \"64d0c766-199e-40bf-b21c-ed64d433a17d\") " pod="openstack/barbican-keystone-listener-5dd849cb94-qw888" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.505835 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42767c30-5b6b-4df6-9237-962c97165901-config-data-custom\") pod \"barbican-worker-7c8489996f-ljk4s\" (UID: \"42767c30-5b6b-4df6-9237-962c97165901\") " pod="openstack/barbican-worker-7c8489996f-ljk4s" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.505857 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42767c30-5b6b-4df6-9237-962c97165901-combined-ca-bundle\") pod \"barbican-worker-7c8489996f-ljk4s\" (UID: \"42767c30-5b6b-4df6-9237-962c97165901\") " pod="openstack/barbican-worker-7c8489996f-ljk4s" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.505884 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcsjt\" (UniqueName: \"kubernetes.io/projected/64d0c766-199e-40bf-b21c-ed64d433a17d-kube-api-access-fcsjt\") pod \"barbican-keystone-listener-5dd849cb94-qw888\" (UID: \"64d0c766-199e-40bf-b21c-ed64d433a17d\") " pod="openstack/barbican-keystone-listener-5dd849cb94-qw888" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.505900 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64d0c766-199e-40bf-b21c-ed64d433a17d-config-data\") pod \"barbican-keystone-listener-5dd849cb94-qw888\" (UID: \"64d0c766-199e-40bf-b21c-ed64d433a17d\") " pod="openstack/barbican-keystone-listener-5dd849cb94-qw888" Feb 26 22:17:40 crc 
kubenswrapper[4910]: I0226 22:17:40.505928 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64d0c766-199e-40bf-b21c-ed64d433a17d-logs\") pod \"barbican-keystone-listener-5dd849cb94-qw888\" (UID: \"64d0c766-199e-40bf-b21c-ed64d433a17d\") " pod="openstack/barbican-keystone-listener-5dd849cb94-qw888" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.526277 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.527891 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.539988 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.540261 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-ll6bd" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.540479 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.540636 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.545748 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.563144 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-pjt4v"] Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.619530 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/ae349f4d-1586-4ba1-9b81-e84503327e71-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ae349f4d-1586-4ba1-9b81-e84503327e71\") " pod="openstack/cinder-scheduler-0" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.619611 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42767c30-5b6b-4df6-9237-962c97165901-config-data\") pod \"barbican-worker-7c8489996f-ljk4s\" (UID: \"42767c30-5b6b-4df6-9237-962c97165901\") " pod="openstack/barbican-worker-7c8489996f-ljk4s" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.619755 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m49gn\" (UniqueName: \"kubernetes.io/projected/42767c30-5b6b-4df6-9237-962c97165901-kube-api-access-m49gn\") pod \"barbican-worker-7c8489996f-ljk4s\" (UID: \"42767c30-5b6b-4df6-9237-962c97165901\") " pod="openstack/barbican-worker-7c8489996f-ljk4s" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.619782 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64d0c766-199e-40bf-b21c-ed64d433a17d-config-data-custom\") pod \"barbican-keystone-listener-5dd849cb94-qw888\" (UID: \"64d0c766-199e-40bf-b21c-ed64d433a17d\") " pod="openstack/barbican-keystone-listener-5dd849cb94-qw888" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.619931 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64d0c766-199e-40bf-b21c-ed64d433a17d-combined-ca-bundle\") pod \"barbican-keystone-listener-5dd849cb94-qw888\" (UID: \"64d0c766-199e-40bf-b21c-ed64d433a17d\") " pod="openstack/barbican-keystone-listener-5dd849cb94-qw888" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.620091 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42767c30-5b6b-4df6-9237-962c97165901-config-data-custom\") pod \"barbican-worker-7c8489996f-ljk4s\" (UID: \"42767c30-5b6b-4df6-9237-962c97165901\") " pod="openstack/barbican-worker-7c8489996f-ljk4s" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.620113 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22clw\" (UniqueName: \"kubernetes.io/projected/ae349f4d-1586-4ba1-9b81-e84503327e71-kube-api-access-22clw\") pod \"cinder-scheduler-0\" (UID: \"ae349f4d-1586-4ba1-9b81-e84503327e71\") " pod="openstack/cinder-scheduler-0" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.620247 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42767c30-5b6b-4df6-9237-962c97165901-combined-ca-bundle\") pod \"barbican-worker-7c8489996f-ljk4s\" (UID: \"42767c30-5b6b-4df6-9237-962c97165901\") " pod="openstack/barbican-worker-7c8489996f-ljk4s" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.620558 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcsjt\" (UniqueName: \"kubernetes.io/projected/64d0c766-199e-40bf-b21c-ed64d433a17d-kube-api-access-fcsjt\") pod \"barbican-keystone-listener-5dd849cb94-qw888\" (UID: \"64d0c766-199e-40bf-b21c-ed64d433a17d\") " pod="openstack/barbican-keystone-listener-5dd849cb94-qw888" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.620585 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64d0c766-199e-40bf-b21c-ed64d433a17d-config-data\") pod \"barbican-keystone-listener-5dd849cb94-qw888\" (UID: \"64d0c766-199e-40bf-b21c-ed64d433a17d\") " pod="openstack/barbican-keystone-listener-5dd849cb94-qw888" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.620731 4910 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64d0c766-199e-40bf-b21c-ed64d433a17d-logs\") pod \"barbican-keystone-listener-5dd849cb94-qw888\" (UID: \"64d0c766-199e-40bf-b21c-ed64d433a17d\") " pod="openstack/barbican-keystone-listener-5dd849cb94-qw888" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.620883 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae349f4d-1586-4ba1-9b81-e84503327e71-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ae349f4d-1586-4ba1-9b81-e84503327e71\") " pod="openstack/cinder-scheduler-0" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.620918 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae349f4d-1586-4ba1-9b81-e84503327e71-scripts\") pod \"cinder-scheduler-0\" (UID: \"ae349f4d-1586-4ba1-9b81-e84503327e71\") " pod="openstack/cinder-scheduler-0" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.621188 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae349f4d-1586-4ba1-9b81-e84503327e71-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ae349f4d-1586-4ba1-9b81-e84503327e71\") " pod="openstack/cinder-scheduler-0" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.621217 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae349f4d-1586-4ba1-9b81-e84503327e71-config-data\") pod \"cinder-scheduler-0\" (UID: \"ae349f4d-1586-4ba1-9b81-e84503327e71\") " pod="openstack/cinder-scheduler-0" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.621291 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/42767c30-5b6b-4df6-9237-962c97165901-logs\") pod \"barbican-worker-7c8489996f-ljk4s\" (UID: \"42767c30-5b6b-4df6-9237-962c97165901\") " pod="openstack/barbican-worker-7c8489996f-ljk4s" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.622496 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42767c30-5b6b-4df6-9237-962c97165901-logs\") pod \"barbican-worker-7c8489996f-ljk4s\" (UID: \"42767c30-5b6b-4df6-9237-962c97165901\") " pod="openstack/barbican-worker-7c8489996f-ljk4s" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.629570 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64d0c766-199e-40bf-b21c-ed64d433a17d-logs\") pod \"barbican-keystone-listener-5dd849cb94-qw888\" (UID: \"64d0c766-199e-40bf-b21c-ed64d433a17d\") " pod="openstack/barbican-keystone-listener-5dd849cb94-qw888" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.636672 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64d0c766-199e-40bf-b21c-ed64d433a17d-config-data\") pod \"barbican-keystone-listener-5dd849cb94-qw888\" (UID: \"64d0c766-199e-40bf-b21c-ed64d433a17d\") " pod="openstack/barbican-keystone-listener-5dd849cb94-qw888" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.648503 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.660944 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42767c30-5b6b-4df6-9237-962c97165901-combined-ca-bundle\") pod \"barbican-worker-7c8489996f-ljk4s\" (UID: \"42767c30-5b6b-4df6-9237-962c97165901\") " pod="openstack/barbican-worker-7c8489996f-ljk4s" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.661762 
4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64d0c766-199e-40bf-b21c-ed64d433a17d-combined-ca-bundle\") pod \"barbican-keystone-listener-5dd849cb94-qw888\" (UID: \"64d0c766-199e-40bf-b21c-ed64d433a17d\") " pod="openstack/barbican-keystone-listener-5dd849cb94-qw888" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.663160 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42767c30-5b6b-4df6-9237-962c97165901-config-data-custom\") pod \"barbican-worker-7c8489996f-ljk4s\" (UID: \"42767c30-5b6b-4df6-9237-962c97165901\") " pod="openstack/barbican-worker-7c8489996f-ljk4s" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.663319 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42767c30-5b6b-4df6-9237-962c97165901-config-data\") pod \"barbican-worker-7c8489996f-ljk4s\" (UID: \"42767c30-5b6b-4df6-9237-962c97165901\") " pod="openstack/barbican-worker-7c8489996f-ljk4s" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.666048 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcsjt\" (UniqueName: \"kubernetes.io/projected/64d0c766-199e-40bf-b21c-ed64d433a17d-kube-api-access-fcsjt\") pod \"barbican-keystone-listener-5dd849cb94-qw888\" (UID: \"64d0c766-199e-40bf-b21c-ed64d433a17d\") " pod="openstack/barbican-keystone-listener-5dd849cb94-qw888" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.668140 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m49gn\" (UniqueName: \"kubernetes.io/projected/42767c30-5b6b-4df6-9237-962c97165901-kube-api-access-m49gn\") pod \"barbican-worker-7c8489996f-ljk4s\" (UID: \"42767c30-5b6b-4df6-9237-962c97165901\") " pod="openstack/barbican-worker-7c8489996f-ljk4s" Feb 26 22:17:40 crc kubenswrapper[4910]: 
I0226 22:17:40.669798 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64d0c766-199e-40bf-b21c-ed64d433a17d-config-data-custom\") pod \"barbican-keystone-listener-5dd849cb94-qw888\" (UID: \"64d0c766-199e-40bf-b21c-ed64d433a17d\") " pod="openstack/barbican-keystone-listener-5dd849cb94-qw888" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.742934 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7b97844b46-5cn8n" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.754534 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-79f8b87c99-7mvnv" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.785836 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae349f4d-1586-4ba1-9b81-e84503327e71-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ae349f4d-1586-4ba1-9b81-e84503327e71\") " pod="openstack/cinder-scheduler-0" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.785899 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae349f4d-1586-4ba1-9b81-e84503327e71-scripts\") pod \"cinder-scheduler-0\" (UID: \"ae349f4d-1586-4ba1-9b81-e84503327e71\") " pod="openstack/cinder-scheduler-0" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.785957 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae349f4d-1586-4ba1-9b81-e84503327e71-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ae349f4d-1586-4ba1-9b81-e84503327e71\") " pod="openstack/cinder-scheduler-0" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.785990 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/ae349f4d-1586-4ba1-9b81-e84503327e71-config-data\") pod \"cinder-scheduler-0\" (UID: \"ae349f4d-1586-4ba1-9b81-e84503327e71\") " pod="openstack/cinder-scheduler-0" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.786035 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae349f4d-1586-4ba1-9b81-e84503327e71-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ae349f4d-1586-4ba1-9b81-e84503327e71\") " pod="openstack/cinder-scheduler-0" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.786123 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22clw\" (UniqueName: \"kubernetes.io/projected/ae349f4d-1586-4ba1-9b81-e84503327e71-kube-api-access-22clw\") pod \"cinder-scheduler-0\" (UID: \"ae349f4d-1586-4ba1-9b81-e84503327e71\") " pod="openstack/cinder-scheduler-0" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.787931 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae349f4d-1586-4ba1-9b81-e84503327e71-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ae349f4d-1586-4ba1-9b81-e84503327e71\") " pod="openstack/cinder-scheduler-0" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.804189 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae349f4d-1586-4ba1-9b81-e84503327e71-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ae349f4d-1586-4ba1-9b81-e84503327e71\") " pod="openstack/cinder-scheduler-0" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.812921 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae349f4d-1586-4ba1-9b81-e84503327e71-config-data\") pod \"cinder-scheduler-0\" (UID: \"ae349f4d-1586-4ba1-9b81-e84503327e71\") " 
pod="openstack/cinder-scheduler-0" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.821027 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae349f4d-1586-4ba1-9b81-e84503327e71-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ae349f4d-1586-4ba1-9b81-e84503327e71\") " pod="openstack/cinder-scheduler-0" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.821437 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae349f4d-1586-4ba1-9b81-e84503327e71-scripts\") pod \"cinder-scheduler-0\" (UID: \"ae349f4d-1586-4ba1-9b81-e84503327e71\") " pod="openstack/cinder-scheduler-0" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.891068 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22clw\" (UniqueName: \"kubernetes.io/projected/ae349f4d-1586-4ba1-9b81-e84503327e71-kube-api-access-22clw\") pod \"cinder-scheduler-0\" (UID: \"ae349f4d-1586-4ba1-9b81-e84503327e71\") " pod="openstack/cinder-scheduler-0" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.903412 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5dd849cb94-qw888" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.919407 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-f7nff"] Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.921499 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-f7nff" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.951220 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-f7nff"] Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.965589 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7c8489996f-ljk4s" Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.989215 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-57dd4f98-9xlb7"] Feb 26 22:17:40 crc kubenswrapper[4910]: I0226 22:17:40.991091 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-57dd4f98-9xlb7" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.001988 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-57dd4f98-9xlb7"] Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.008916 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-f7nff"] Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.018751 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7696b9558b-vr9cd"] Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.020353 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.020442 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7696b9558b-vr9cd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.037332 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-65889bccf5-h979r"] Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.039029 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-65889bccf5-h979r" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.043432 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9b8rx"] Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.045266 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.052861 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.099728 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7696b9558b-vr9cd"] Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.100964 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-f7nff\" (UID: \"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\") " pod="openstack/dnsmasq-dns-848cf88cfc-f7nff" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.100991 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-f7nff\" (UID: \"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\") " pod="openstack/dnsmasq-dns-848cf88cfc-f7nff" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.101031 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cae6f131-7b0b-4146-a1ed-640ad6302dca-logs\") pod \"barbican-api-57dd4f98-9xlb7\" (UID: \"cae6f131-7b0b-4146-a1ed-640ad6302dca\") " pod="openstack/barbican-api-57dd4f98-9xlb7" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.101070 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cae6f131-7b0b-4146-a1ed-640ad6302dca-config-data-custom\") pod \"barbican-api-57dd4f98-9xlb7\" (UID: \"cae6f131-7b0b-4146-a1ed-640ad6302dca\") " 
pod="openstack/barbican-api-57dd4f98-9xlb7" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.101093 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae6f131-7b0b-4146-a1ed-640ad6302dca-combined-ca-bundle\") pod \"barbican-api-57dd4f98-9xlb7\" (UID: \"cae6f131-7b0b-4146-a1ed-640ad6302dca\") " pod="openstack/barbican-api-57dd4f98-9xlb7" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.101126 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-f7nff\" (UID: \"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\") " pod="openstack/dnsmasq-dns-848cf88cfc-f7nff" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.101153 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8jqw\" (UniqueName: \"kubernetes.io/projected/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-kube-api-access-x8jqw\") pod \"dnsmasq-dns-848cf88cfc-f7nff\" (UID: \"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\") " pod="openstack/dnsmasq-dns-848cf88cfc-f7nff" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.101187 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-f7nff\" (UID: \"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\") " pod="openstack/dnsmasq-dns-848cf88cfc-f7nff" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.101204 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-config\") pod \"dnsmasq-dns-848cf88cfc-f7nff\" (UID: 
\"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\") " pod="openstack/dnsmasq-dns-848cf88cfc-f7nff" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.101224 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae6f131-7b0b-4146-a1ed-640ad6302dca-config-data\") pod \"barbican-api-57dd4f98-9xlb7\" (UID: \"cae6f131-7b0b-4146-a1ed-640ad6302dca\") " pod="openstack/barbican-api-57dd4f98-9xlb7" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.101239 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rvdb\" (UniqueName: \"kubernetes.io/projected/cae6f131-7b0b-4146-a1ed-640ad6302dca-kube-api-access-2rvdb\") pod \"barbican-api-57dd4f98-9xlb7\" (UID: \"cae6f131-7b0b-4146-a1ed-640ad6302dca\") " pod="openstack/barbican-api-57dd4f98-9xlb7" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.131033 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" podUID="1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea" containerName="dnsmasq-dns" containerID="cri-o://2c2826db128ffc0cf57feffdc85c783433554cbb7bb78d5c57ca1e4af3e7d85c" gracePeriod=10 Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.174231 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9b8rx"] Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.204343 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8jqw\" (UniqueName: \"kubernetes.io/projected/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-kube-api-access-x8jqw\") pod \"dnsmasq-dns-848cf88cfc-f7nff\" (UID: \"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\") " pod="openstack/dnsmasq-dns-848cf88cfc-f7nff" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.204392 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-f7nff\" (UID: \"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\") " pod="openstack/dnsmasq-dns-848cf88cfc-f7nff" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.204413 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/752b8add-84e6-43cd-9907-6e190e335fe7-config-data\") pod \"barbican-worker-65889bccf5-h979r\" (UID: \"752b8add-84e6-43cd-9907-6e190e335fe7\") " pod="openstack/barbican-worker-65889bccf5-h979r" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.204435 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-config\") pod \"dnsmasq-dns-848cf88cfc-f7nff\" (UID: \"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\") " pod="openstack/dnsmasq-dns-848cf88cfc-f7nff" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.204454 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzxch\" (UniqueName: \"kubernetes.io/projected/f7238de5-2d97-4467-b06f-937763173cac-kube-api-access-mzxch\") pod \"barbican-keystone-listener-7696b9558b-vr9cd\" (UID: \"f7238de5-2d97-4467-b06f-937763173cac\") " pod="openstack/barbican-keystone-listener-7696b9558b-vr9cd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.204477 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae6f131-7b0b-4146-a1ed-640ad6302dca-config-data\") pod \"barbican-api-57dd4f98-9xlb7\" (UID: \"cae6f131-7b0b-4146-a1ed-640ad6302dca\") " pod="openstack/barbican-api-57dd4f98-9xlb7" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.204494 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2rvdb\" (UniqueName: \"kubernetes.io/projected/cae6f131-7b0b-4146-a1ed-640ad6302dca-kube-api-access-2rvdb\") pod \"barbican-api-57dd4f98-9xlb7\" (UID: \"cae6f131-7b0b-4146-a1ed-640ad6302dca\") " pod="openstack/barbican-api-57dd4f98-9xlb7" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.204534 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/752b8add-84e6-43cd-9907-6e190e335fe7-config-data-custom\") pod \"barbican-worker-65889bccf5-h979r\" (UID: \"752b8add-84e6-43cd-9907-6e190e335fe7\") " pod="openstack/barbican-worker-65889bccf5-h979r" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.204564 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7238de5-2d97-4467-b06f-937763173cac-config-data\") pod \"barbican-keystone-listener-7696b9558b-vr9cd\" (UID: \"f7238de5-2d97-4467-b06f-937763173cac\") " pod="openstack/barbican-keystone-listener-7696b9558b-vr9cd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.204588 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-config\") pod \"dnsmasq-dns-6578955fd5-9b8rx\" (UID: \"d8f9c22d-356d-4c49-bc7f-054f480770ec\") " pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.204611 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7238de5-2d97-4467-b06f-937763173cac-config-data-custom\") pod \"barbican-keystone-listener-7696b9558b-vr9cd\" (UID: \"f7238de5-2d97-4467-b06f-937763173cac\") " pod="openstack/barbican-keystone-listener-7696b9558b-vr9cd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 
22:17:41.204634 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/752b8add-84e6-43cd-9907-6e190e335fe7-logs\") pod \"barbican-worker-65889bccf5-h979r\" (UID: \"752b8add-84e6-43cd-9907-6e190e335fe7\") " pod="openstack/barbican-worker-65889bccf5-h979r" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.204654 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-f7nff\" (UID: \"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\") " pod="openstack/dnsmasq-dns-848cf88cfc-f7nff" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.204687 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-f7nff\" (UID: \"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\") " pod="openstack/dnsmasq-dns-848cf88cfc-f7nff" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.204706 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7238de5-2d97-4467-b06f-937763173cac-logs\") pod \"barbican-keystone-listener-7696b9558b-vr9cd\" (UID: \"f7238de5-2d97-4467-b06f-937763173cac\") " pod="openstack/barbican-keystone-listener-7696b9558b-vr9cd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.204736 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkclf\" (UniqueName: \"kubernetes.io/projected/d8f9c22d-356d-4c49-bc7f-054f480770ec-kube-api-access-gkclf\") pod \"dnsmasq-dns-6578955fd5-9b8rx\" (UID: \"d8f9c22d-356d-4c49-bc7f-054f480770ec\") " pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" Feb 26 22:17:41 crc 
kubenswrapper[4910]: I0226 22:17:41.204754 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7238de5-2d97-4467-b06f-937763173cac-combined-ca-bundle\") pod \"barbican-keystone-listener-7696b9558b-vr9cd\" (UID: \"f7238de5-2d97-4467-b06f-937763173cac\") " pod="openstack/barbican-keystone-listener-7696b9558b-vr9cd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.204769 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-9b8rx\" (UID: \"d8f9c22d-356d-4c49-bc7f-054f480770ec\") " pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.204786 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cae6f131-7b0b-4146-a1ed-640ad6302dca-logs\") pod \"barbican-api-57dd4f98-9xlb7\" (UID: \"cae6f131-7b0b-4146-a1ed-640ad6302dca\") " pod="openstack/barbican-api-57dd4f98-9xlb7" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.204821 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-9b8rx\" (UID: \"d8f9c22d-356d-4c49-bc7f-054f480770ec\") " pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.204838 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf68r\" (UniqueName: \"kubernetes.io/projected/752b8add-84e6-43cd-9907-6e190e335fe7-kube-api-access-cf68r\") pod \"barbican-worker-65889bccf5-h979r\" (UID: \"752b8add-84e6-43cd-9907-6e190e335fe7\") " 
pod="openstack/barbican-worker-65889bccf5-h979r" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.204856 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-dns-svc\") pod \"dnsmasq-dns-6578955fd5-9b8rx\" (UID: \"d8f9c22d-356d-4c49-bc7f-054f480770ec\") " pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.204876 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cae6f131-7b0b-4146-a1ed-640ad6302dca-config-data-custom\") pod \"barbican-api-57dd4f98-9xlb7\" (UID: \"cae6f131-7b0b-4146-a1ed-640ad6302dca\") " pod="openstack/barbican-api-57dd4f98-9xlb7" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.204898 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae6f131-7b0b-4146-a1ed-640ad6302dca-combined-ca-bundle\") pod \"barbican-api-57dd4f98-9xlb7\" (UID: \"cae6f131-7b0b-4146-a1ed-640ad6302dca\") " pod="openstack/barbican-api-57dd4f98-9xlb7" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.204921 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-9b8rx\" (UID: \"d8f9c22d-356d-4c49-bc7f-054f480770ec\") " pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.204946 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-f7nff\" (UID: \"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\") " 
pod="openstack/dnsmasq-dns-848cf88cfc-f7nff" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.204963 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/752b8add-84e6-43cd-9907-6e190e335fe7-combined-ca-bundle\") pod \"barbican-worker-65889bccf5-h979r\" (UID: \"752b8add-84e6-43cd-9907-6e190e335fe7\") " pod="openstack/barbican-worker-65889bccf5-h979r" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.206069 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-f7nff\" (UID: \"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\") " pod="openstack/dnsmasq-dns-848cf88cfc-f7nff" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.206277 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cae6f131-7b0b-4146-a1ed-640ad6302dca-logs\") pod \"barbican-api-57dd4f98-9xlb7\" (UID: \"cae6f131-7b0b-4146-a1ed-640ad6302dca\") " pod="openstack/barbican-api-57dd4f98-9xlb7" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.206849 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-f7nff\" (UID: \"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\") " pod="openstack/dnsmasq-dns-848cf88cfc-f7nff" Feb 26 22:17:41 crc kubenswrapper[4910]: E0226 22:17:41.206873 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-x8jqw ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-848cf88cfc-f7nff" podUID="f829131c-e9b4-46c9-ad51-3b75e6ecfdc4" Feb 26 
22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.206976 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-f7nff\" (UID: \"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\") " pod="openstack/dnsmasq-dns-848cf88cfc-f7nff" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.207798 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-config\") pod \"dnsmasq-dns-848cf88cfc-f7nff\" (UID: \"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\") " pod="openstack/dnsmasq-dns-848cf88cfc-f7nff" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.232386 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae6f131-7b0b-4146-a1ed-640ad6302dca-config-data\") pod \"barbican-api-57dd4f98-9xlb7\" (UID: \"cae6f131-7b0b-4146-a1ed-640ad6302dca\") " pod="openstack/barbican-api-57dd4f98-9xlb7" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.231973 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-f7nff\" (UID: \"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\") " pod="openstack/dnsmasq-dns-848cf88cfc-f7nff" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.243946 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cae6f131-7b0b-4146-a1ed-640ad6302dca-config-data-custom\") pod \"barbican-api-57dd4f98-9xlb7\" (UID: \"cae6f131-7b0b-4146-a1ed-640ad6302dca\") " pod="openstack/barbican-api-57dd4f98-9xlb7" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.248521 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-worker-65889bccf5-h979r"] Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.252875 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8jqw\" (UniqueName: \"kubernetes.io/projected/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-kube-api-access-x8jqw\") pod \"dnsmasq-dns-848cf88cfc-f7nff\" (UID: \"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\") " pod="openstack/dnsmasq-dns-848cf88cfc-f7nff" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.253816 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rvdb\" (UniqueName: \"kubernetes.io/projected/cae6f131-7b0b-4146-a1ed-640ad6302dca-kube-api-access-2rvdb\") pod \"barbican-api-57dd4f98-9xlb7\" (UID: \"cae6f131-7b0b-4146-a1ed-640ad6302dca\") " pod="openstack/barbican-api-57dd4f98-9xlb7" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.256157 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae6f131-7b0b-4146-a1ed-640ad6302dca-combined-ca-bundle\") pod \"barbican-api-57dd4f98-9xlb7\" (UID: \"cae6f131-7b0b-4146-a1ed-640ad6302dca\") " pod="openstack/barbican-api-57dd4f98-9xlb7" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.271394 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.273136 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.290432 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.308508 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/752b8add-84e6-43cd-9907-6e190e335fe7-combined-ca-bundle\") pod \"barbican-worker-65889bccf5-h979r\" (UID: \"752b8add-84e6-43cd-9907-6e190e335fe7\") " pod="openstack/barbican-worker-65889bccf5-h979r" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.308600 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/752b8add-84e6-43cd-9907-6e190e335fe7-config-data\") pod \"barbican-worker-65889bccf5-h979r\" (UID: \"752b8add-84e6-43cd-9907-6e190e335fe7\") " pod="openstack/barbican-worker-65889bccf5-h979r" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.308639 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzxch\" (UniqueName: \"kubernetes.io/projected/f7238de5-2d97-4467-b06f-937763173cac-kube-api-access-mzxch\") pod \"barbican-keystone-listener-7696b9558b-vr9cd\" (UID: \"f7238de5-2d97-4467-b06f-937763173cac\") " pod="openstack/barbican-keystone-listener-7696b9558b-vr9cd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.308704 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/752b8add-84e6-43cd-9907-6e190e335fe7-config-data-custom\") pod \"barbican-worker-65889bccf5-h979r\" (UID: \"752b8add-84e6-43cd-9907-6e190e335fe7\") " pod="openstack/barbican-worker-65889bccf5-h979r" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.308743 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/f7238de5-2d97-4467-b06f-937763173cac-config-data\") pod \"barbican-keystone-listener-7696b9558b-vr9cd\" (UID: \"f7238de5-2d97-4467-b06f-937763173cac\") " pod="openstack/barbican-keystone-listener-7696b9558b-vr9cd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.308769 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-config\") pod \"dnsmasq-dns-6578955fd5-9b8rx\" (UID: \"d8f9c22d-356d-4c49-bc7f-054f480770ec\") " pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.308803 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7238de5-2d97-4467-b06f-937763173cac-config-data-custom\") pod \"barbican-keystone-listener-7696b9558b-vr9cd\" (UID: \"f7238de5-2d97-4467-b06f-937763173cac\") " pod="openstack/barbican-keystone-listener-7696b9558b-vr9cd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.308835 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/752b8add-84e6-43cd-9907-6e190e335fe7-logs\") pod \"barbican-worker-65889bccf5-h979r\" (UID: \"752b8add-84e6-43cd-9907-6e190e335fe7\") " pod="openstack/barbican-worker-65889bccf5-h979r" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.308874 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7238de5-2d97-4467-b06f-937763173cac-logs\") pod \"barbican-keystone-listener-7696b9558b-vr9cd\" (UID: \"f7238de5-2d97-4467-b06f-937763173cac\") " pod="openstack/barbican-keystone-listener-7696b9558b-vr9cd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.308915 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gkclf\" (UniqueName: \"kubernetes.io/projected/d8f9c22d-356d-4c49-bc7f-054f480770ec-kube-api-access-gkclf\") pod \"dnsmasq-dns-6578955fd5-9b8rx\" (UID: \"d8f9c22d-356d-4c49-bc7f-054f480770ec\") " pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.308934 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7238de5-2d97-4467-b06f-937763173cac-combined-ca-bundle\") pod \"barbican-keystone-listener-7696b9558b-vr9cd\" (UID: \"f7238de5-2d97-4467-b06f-937763173cac\") " pod="openstack/barbican-keystone-listener-7696b9558b-vr9cd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.308949 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-9b8rx\" (UID: \"d8f9c22d-356d-4c49-bc7f-054f480770ec\") " pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.309003 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-9b8rx\" (UID: \"d8f9c22d-356d-4c49-bc7f-054f480770ec\") " pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.309018 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf68r\" (UniqueName: \"kubernetes.io/projected/752b8add-84e6-43cd-9907-6e190e335fe7-kube-api-access-cf68r\") pod \"barbican-worker-65889bccf5-h979r\" (UID: \"752b8add-84e6-43cd-9907-6e190e335fe7\") " pod="openstack/barbican-worker-65889bccf5-h979r" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.309037 4910 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-dns-svc\") pod \"dnsmasq-dns-6578955fd5-9b8rx\" (UID: \"d8f9c22d-356d-4c49-bc7f-054f480770ec\") " pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.309073 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-9b8rx\" (UID: \"d8f9c22d-356d-4c49-bc7f-054f480770ec\") " pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.315400 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7238de5-2d97-4467-b06f-937763173cac-config-data-custom\") pod \"barbican-keystone-listener-7696b9558b-vr9cd\" (UID: \"f7238de5-2d97-4467-b06f-937763173cac\") " pod="openstack/barbican-keystone-listener-7696b9558b-vr9cd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.319314 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/752b8add-84e6-43cd-9907-6e190e335fe7-combined-ca-bundle\") pod \"barbican-worker-65889bccf5-h979r\" (UID: \"752b8add-84e6-43cd-9907-6e190e335fe7\") " pod="openstack/barbican-worker-65889bccf5-h979r" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.321522 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-config\") pod \"dnsmasq-dns-6578955fd5-9b8rx\" (UID: \"d8f9c22d-356d-4c49-bc7f-054f480770ec\") " pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.322524 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/752b8add-84e6-43cd-9907-6e190e335fe7-logs\") pod \"barbican-worker-65889bccf5-h979r\" (UID: \"752b8add-84e6-43cd-9907-6e190e335fe7\") " pod="openstack/barbican-worker-65889bccf5-h979r" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.323183 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7238de5-2d97-4467-b06f-937763173cac-logs\") pod \"barbican-keystone-listener-7696b9558b-vr9cd\" (UID: \"f7238de5-2d97-4467-b06f-937763173cac\") " pod="openstack/barbican-keystone-listener-7696b9558b-vr9cd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.325018 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-9b8rx\" (UID: \"d8f9c22d-356d-4c49-bc7f-054f480770ec\") " pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.326630 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-dns-svc\") pod \"dnsmasq-dns-6578955fd5-9b8rx\" (UID: \"d8f9c22d-356d-4c49-bc7f-054f480770ec\") " pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.327017 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-9b8rx\" (UID: \"d8f9c22d-356d-4c49-bc7f-054f480770ec\") " pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.332239 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-9b8rx\" (UID: \"d8f9c22d-356d-4c49-bc7f-054f480770ec\") " pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.332560 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/752b8add-84e6-43cd-9907-6e190e335fe7-config-data-custom\") pod \"barbican-worker-65889bccf5-h979r\" (UID: \"752b8add-84e6-43cd-9907-6e190e335fe7\") " pod="openstack/barbican-worker-65889bccf5-h979r" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.333682 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7238de5-2d97-4467-b06f-937763173cac-config-data\") pod \"barbican-keystone-listener-7696b9558b-vr9cd\" (UID: \"f7238de5-2d97-4467-b06f-937763173cac\") " pod="openstack/barbican-keystone-listener-7696b9558b-vr9cd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.338643 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7238de5-2d97-4467-b06f-937763173cac-combined-ca-bundle\") pod \"barbican-keystone-listener-7696b9558b-vr9cd\" (UID: \"f7238de5-2d97-4467-b06f-937763173cac\") " pod="openstack/barbican-keystone-listener-7696b9558b-vr9cd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.340969 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.342324 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/752b8add-84e6-43cd-9907-6e190e335fe7-config-data\") pod \"barbican-worker-65889bccf5-h979r\" (UID: \"752b8add-84e6-43cd-9907-6e190e335fe7\") " pod="openstack/barbican-worker-65889bccf5-h979r" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 
22:17:41.344661 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf68r\" (UniqueName: \"kubernetes.io/projected/752b8add-84e6-43cd-9907-6e190e335fe7-kube-api-access-cf68r\") pod \"barbican-worker-65889bccf5-h979r\" (UID: \"752b8add-84e6-43cd-9907-6e190e335fe7\") " pod="openstack/barbican-worker-65889bccf5-h979r" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.345420 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzxch\" (UniqueName: \"kubernetes.io/projected/f7238de5-2d97-4467-b06f-937763173cac-kube-api-access-mzxch\") pod \"barbican-keystone-listener-7696b9558b-vr9cd\" (UID: \"f7238de5-2d97-4467-b06f-937763173cac\") " pod="openstack/barbican-keystone-listener-7696b9558b-vr9cd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.355957 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkclf\" (UniqueName: \"kubernetes.io/projected/d8f9c22d-356d-4c49-bc7f-054f480770ec-kube-api-access-gkclf\") pod \"dnsmasq-dns-6578955fd5-9b8rx\" (UID: \"d8f9c22d-356d-4c49-bc7f-054f480770ec\") " pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.375861 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6f66df7474-bqlkd"] Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.377840 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6f66df7474-bqlkd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.402609 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6f66df7474-bqlkd"] Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.413082 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-config-data\") pod \"cinder-api-0\" (UID: \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\") " pod="openstack/cinder-api-0" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.413144 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\") " pod="openstack/cinder-api-0" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.413455 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-config-data-custom\") pod \"cinder-api-0\" (UID: \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\") " pod="openstack/cinder-api-0" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.413522 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\") " pod="openstack/cinder-api-0" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.413559 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-logs\") 
pod \"cinder-api-0\" (UID: \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\") " pod="openstack/cinder-api-0" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.413589 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-scripts\") pod \"cinder-api-0\" (UID: \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\") " pod="openstack/cinder-api-0" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.413991 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdsfl\" (UniqueName: \"kubernetes.io/projected/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-kube-api-access-jdsfl\") pod \"cinder-api-0\" (UID: \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\") " pod="openstack/cinder-api-0" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.496088 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-57dd4f98-9xlb7" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.506965 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.517116 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25964a73-e44b-492b-9c2d-35c2ae2e935a-config-data\") pod \"barbican-api-6f66df7474-bqlkd\" (UID: \"25964a73-e44b-492b-9c2d-35c2ae2e935a\") " pod="openstack/barbican-api-6f66df7474-bqlkd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.517300 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd7bs\" (UniqueName: \"kubernetes.io/projected/25964a73-e44b-492b-9c2d-35c2ae2e935a-kube-api-access-gd7bs\") pod \"barbican-api-6f66df7474-bqlkd\" (UID: \"25964a73-e44b-492b-9c2d-35c2ae2e935a\") " pod="openstack/barbican-api-6f66df7474-bqlkd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.517688 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25964a73-e44b-492b-9c2d-35c2ae2e935a-combined-ca-bundle\") pod \"barbican-api-6f66df7474-bqlkd\" (UID: \"25964a73-e44b-492b-9c2d-35c2ae2e935a\") " pod="openstack/barbican-api-6f66df7474-bqlkd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.517795 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25964a73-e44b-492b-9c2d-35c2ae2e935a-config-data-custom\") pod \"barbican-api-6f66df7474-bqlkd\" (UID: \"25964a73-e44b-492b-9c2d-35c2ae2e935a\") " pod="openstack/barbican-api-6f66df7474-bqlkd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.517892 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-config-data-custom\") pod 
\"cinder-api-0\" (UID: \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\") " pod="openstack/cinder-api-0" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.518068 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\") " pod="openstack/cinder-api-0" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.518238 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-logs\") pod \"cinder-api-0\" (UID: \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\") " pod="openstack/cinder-api-0" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.518576 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-scripts\") pod \"cinder-api-0\" (UID: \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\") " pod="openstack/cinder-api-0" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.518723 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25964a73-e44b-492b-9c2d-35c2ae2e935a-logs\") pod \"barbican-api-6f66df7474-bqlkd\" (UID: \"25964a73-e44b-492b-9c2d-35c2ae2e935a\") " pod="openstack/barbican-api-6f66df7474-bqlkd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.518827 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdsfl\" (UniqueName: \"kubernetes.io/projected/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-kube-api-access-jdsfl\") pod \"cinder-api-0\" (UID: \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\") " pod="openstack/cinder-api-0" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.519086 4910 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-config-data\") pod \"cinder-api-0\" (UID: \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\") " pod="openstack/cinder-api-0" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.519144 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\") " pod="openstack/cinder-api-0" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.519341 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\") " pod="openstack/cinder-api-0" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.520360 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-logs\") pod \"cinder-api-0\" (UID: \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\") " pod="openstack/cinder-api-0" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.522590 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-scripts\") pod \"cinder-api-0\" (UID: \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\") " pod="openstack/cinder-api-0" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.526443 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-config-data-custom\") pod \"cinder-api-0\" (UID: \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\") " pod="openstack/cinder-api-0" Feb 26 22:17:41 crc 
kubenswrapper[4910]: I0226 22:17:41.527226 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\") " pod="openstack/cinder-api-0" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.534206 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-config-data\") pod \"cinder-api-0\" (UID: \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\") " pod="openstack/cinder-api-0" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.538609 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-65889bccf5-h979r" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.544245 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdsfl\" (UniqueName: \"kubernetes.io/projected/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-kube-api-access-jdsfl\") pod \"cinder-api-0\" (UID: \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\") " pod="openstack/cinder-api-0" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.549295 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7696b9558b-vr9cd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.559057 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.621746 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25964a73-e44b-492b-9c2d-35c2ae2e935a-config-data\") pod \"barbican-api-6f66df7474-bqlkd\" (UID: \"25964a73-e44b-492b-9c2d-35c2ae2e935a\") " pod="openstack/barbican-api-6f66df7474-bqlkd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.622566 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd7bs\" (UniqueName: \"kubernetes.io/projected/25964a73-e44b-492b-9c2d-35c2ae2e935a-kube-api-access-gd7bs\") pod \"barbican-api-6f66df7474-bqlkd\" (UID: \"25964a73-e44b-492b-9c2d-35c2ae2e935a\") " pod="openstack/barbican-api-6f66df7474-bqlkd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.622675 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25964a73-e44b-492b-9c2d-35c2ae2e935a-combined-ca-bundle\") pod \"barbican-api-6f66df7474-bqlkd\" (UID: \"25964a73-e44b-492b-9c2d-35c2ae2e935a\") " pod="openstack/barbican-api-6f66df7474-bqlkd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.622722 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25964a73-e44b-492b-9c2d-35c2ae2e935a-config-data-custom\") pod \"barbican-api-6f66df7474-bqlkd\" (UID: \"25964a73-e44b-492b-9c2d-35c2ae2e935a\") " pod="openstack/barbican-api-6f66df7474-bqlkd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.623020 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25964a73-e44b-492b-9c2d-35c2ae2e935a-logs\") pod \"barbican-api-6f66df7474-bqlkd\" (UID: \"25964a73-e44b-492b-9c2d-35c2ae2e935a\") " 
pod="openstack/barbican-api-6f66df7474-bqlkd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.624740 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25964a73-e44b-492b-9c2d-35c2ae2e935a-logs\") pod \"barbican-api-6f66df7474-bqlkd\" (UID: \"25964a73-e44b-492b-9c2d-35c2ae2e935a\") " pod="openstack/barbican-api-6f66df7474-bqlkd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.630126 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25964a73-e44b-492b-9c2d-35c2ae2e935a-combined-ca-bundle\") pod \"barbican-api-6f66df7474-bqlkd\" (UID: \"25964a73-e44b-492b-9c2d-35c2ae2e935a\") " pod="openstack/barbican-api-6f66df7474-bqlkd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.630246 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25964a73-e44b-492b-9c2d-35c2ae2e935a-config-data-custom\") pod \"barbican-api-6f66df7474-bqlkd\" (UID: \"25964a73-e44b-492b-9c2d-35c2ae2e935a\") " pod="openstack/barbican-api-6f66df7474-bqlkd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.632238 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25964a73-e44b-492b-9c2d-35c2ae2e935a-config-data\") pod \"barbican-api-6f66df7474-bqlkd\" (UID: \"25964a73-e44b-492b-9c2d-35c2ae2e935a\") " pod="openstack/barbican-api-6f66df7474-bqlkd" Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.648661 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd7bs\" (UniqueName: \"kubernetes.io/projected/25964a73-e44b-492b-9c2d-35c2ae2e935a-kube-api-access-gd7bs\") pod \"barbican-api-6f66df7474-bqlkd\" (UID: \"25964a73-e44b-492b-9c2d-35c2ae2e935a\") " pod="openstack/barbican-api-6f66df7474-bqlkd" Feb 26 22:17:41 crc kubenswrapper[4910]: 
I0226 22:17:41.663431 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-79f8b87c99-7mvnv"] Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.816291 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7b97844b46-5cn8n"] Feb 26 22:17:41 crc kubenswrapper[4910]: I0226 22:17:41.865832 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6f66df7474-bqlkd" Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.222016 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-79f8b87c99-7mvnv" event={"ID":"c9e24dbd-bccd-4c17-b640-b183d1f296e7","Type":"ContainerStarted","Data":"68c08ca4d10fcabe1a2ef5ecdca717c090b20a3e684a55f6fbbbe2cdbf3e1852"} Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.224958 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7b97844b46-5cn8n" event={"ID":"79453bea-3afe-4822-a09c-734dba08b9ef","Type":"ContainerStarted","Data":"9f2308679bb29e733d579f3d888ef08a0006567588b0d4cb343f3aecc7472a63"} Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.227349 4910 generic.go:334] "Generic (PLEG): container finished" podID="1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea" containerID="2c2826db128ffc0cf57feffdc85c783433554cbb7bb78d5c57ca1e4af3e7d85c" exitCode=0 Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.227420 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" event={"ID":"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea","Type":"ContainerDied","Data":"2c2826db128ffc0cf57feffdc85c783433554cbb7bb78d5c57ca1e4af3e7d85c"} Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.227495 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-f7nff" Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.248534 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-f7nff" Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.359489 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-ovsdbserver-sb\") pod \"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\" (UID: \"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\") " Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.359662 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-dns-swift-storage-0\") pod \"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\" (UID: \"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\") " Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.359686 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-dns-svc\") pod \"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\" (UID: \"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\") " Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.359739 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8jqw\" (UniqueName: \"kubernetes.io/projected/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-kube-api-access-x8jqw\") pod \"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\" (UID: \"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\") " Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.359884 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-config\") pod \"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\" (UID: \"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\") " Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.359952 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-ovsdbserver-nb\") pod \"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\" (UID: \"f829131c-e9b4-46c9-ad51-3b75e6ecfdc4\") " Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.361733 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f829131c-e9b4-46c9-ad51-3b75e6ecfdc4" (UID: "f829131c-e9b4-46c9-ad51-3b75e6ecfdc4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.361796 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f829131c-e9b4-46c9-ad51-3b75e6ecfdc4" (UID: "f829131c-e9b4-46c9-ad51-3b75e6ecfdc4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.362323 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f829131c-e9b4-46c9-ad51-3b75e6ecfdc4" (UID: "f829131c-e9b4-46c9-ad51-3b75e6ecfdc4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.362519 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-config" (OuterVolumeSpecName: "config") pod "f829131c-e9b4-46c9-ad51-3b75e6ecfdc4" (UID: "f829131c-e9b4-46c9-ad51-3b75e6ecfdc4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.362770 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f829131c-e9b4-46c9-ad51-3b75e6ecfdc4" (UID: "f829131c-e9b4-46c9-ad51-3b75e6ecfdc4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.376017 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.400835 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-kube-api-access-x8jqw" (OuterVolumeSpecName: "kube-api-access-x8jqw") pod "f829131c-e9b4-46c9-ad51-3b75e6ecfdc4" (UID: "f829131c-e9b4-46c9-ad51-3b75e6ecfdc4"). InnerVolumeSpecName "kube-api-access-x8jqw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.462393 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-config\") pod \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\" (UID: \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\") " Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.462465 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-ovsdbserver-sb\") pod \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\" (UID: \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\") " Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.462590 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-ovsdbserver-nb\") pod \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\" (UID: \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\") " Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.462623 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhbd9\" (UniqueName: \"kubernetes.io/projected/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-kube-api-access-qhbd9\") pod \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\" (UID: \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\") " Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.462857 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-dns-svc\") pod \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\" (UID: \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\") " Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.462885 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-dns-swift-storage-0\") pod \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\" (UID: \"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea\") " Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.463401 4910 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.463418 4910 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.463428 4910 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.463437 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8jqw\" (UniqueName: \"kubernetes.io/projected/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-kube-api-access-x8jqw\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.463449 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.463457 4910 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.476309 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-kube-api-access-qhbd9" (OuterVolumeSpecName: "kube-api-access-qhbd9") pod "1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea" (UID: "1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea"). InnerVolumeSpecName "kube-api-access-qhbd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.564863 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhbd9\" (UniqueName: \"kubernetes.io/projected/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-kube-api-access-qhbd9\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.566625 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea" (UID: "1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.571096 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea" (UID: "1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.588212 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea" (UID: "1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.588285 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea" (UID: "1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.644472 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-config" (OuterVolumeSpecName: "config") pod "1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea" (UID: "1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.668034 4910 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.668265 4910 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.668338 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.668394 4910 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 
22:17:42.668452 4910 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.871442 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5dd849cb94-qw888"] Feb 26 22:17:42 crc kubenswrapper[4910]: I0226 22:17:42.901778 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-57dd4f98-9xlb7"] Feb 26 22:17:43 crc kubenswrapper[4910]: I0226 22:17:43.259224 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-79f8b87c99-7mvnv" event={"ID":"c9e24dbd-bccd-4c17-b640-b183d1f296e7","Type":"ContainerStarted","Data":"d80c378abc7853177c94b605e14b8a682779c7f09a5f07209350c22ddd686fa4"} Feb 26 22:17:43 crc kubenswrapper[4910]: I0226 22:17:43.259611 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-79f8b87c99-7mvnv" Feb 26 22:17:43 crc kubenswrapper[4910]: I0226 22:17:43.295425 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 26 22:17:43 crc kubenswrapper[4910]: I0226 22:17:43.305842 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57dd4f98-9xlb7" event={"ID":"cae6f131-7b0b-4146-a1ed-640ad6302dca","Type":"ContainerStarted","Data":"998266ec0d2d40dabae2acdf29d95fabda8d4512cc367b4fdc9eee06bdbd4aee"} Feb 26 22:17:43 crc kubenswrapper[4910]: I0226 22:17:43.305881 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57dd4f98-9xlb7" event={"ID":"cae6f131-7b0b-4146-a1ed-640ad6302dca","Type":"ContainerStarted","Data":"e10db6458e7204b47894ccb1a9ac66065da79b65a56da8a36853186307b58017"} Feb 26 22:17:43 crc kubenswrapper[4910]: I0226 22:17:43.317449 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7b97844b46-5cn8n" 
event={"ID":"79453bea-3afe-4822-a09c-734dba08b9ef","Type":"ContainerStarted","Data":"22dfb926a01ac6270a40f09d134b64e4d8e94629d9a0c16d2658a85036aaf89d"} Feb 26 22:17:43 crc kubenswrapper[4910]: I0226 22:17:43.317497 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7b97844b46-5cn8n" event={"ID":"79453bea-3afe-4822-a09c-734dba08b9ef","Type":"ContainerStarted","Data":"b3d6d78e3eb4ce67cfa919ce10f4e9c17e4d9fdbf545fc2f3fff7e360f347e2f"} Feb 26 22:17:43 crc kubenswrapper[4910]: I0226 22:17:43.319397 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7b97844b46-5cn8n" Feb 26 22:17:43 crc kubenswrapper[4910]: I0226 22:17:43.319428 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7b97844b46-5cn8n" Feb 26 22:17:43 crc kubenswrapper[4910]: I0226 22:17:43.322807 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 26 22:17:43 crc kubenswrapper[4910]: I0226 22:17:43.331982 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5dd849cb94-qw888" event={"ID":"64d0c766-199e-40bf-b21c-ed64d433a17d","Type":"ContainerStarted","Data":"4519e0e6a7492c140be90549512b32a01c7e9d96451710669966cf87156cd16d"} Feb 26 22:17:43 crc kubenswrapper[4910]: I0226 22:17:43.333893 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-65889bccf5-h979r"] Feb 26 22:17:43 crc kubenswrapper[4910]: I0226 22:17:43.351319 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-79f8b87c99-7mvnv" podStartSLOduration=3.351295657 podStartE2EDuration="3.351295657s" podCreationTimestamp="2026-02-26 22:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:17:43.306119599 +0000 UTC m=+1348.385610140" watchObservedRunningTime="2026-02-26 22:17:43.351295657 
+0000 UTC m=+1348.430786198" Feb 26 22:17:43 crc kubenswrapper[4910]: I0226 22:17:43.362449 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-f7nff" Feb 26 22:17:43 crc kubenswrapper[4910]: I0226 22:17:43.368242 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" event={"ID":"1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea","Type":"ContainerDied","Data":"de8ef617c8567ae0229c6b0376c86c06cc0be1366a2094f637eb89b121c2721e"} Feb 26 22:17:43 crc kubenswrapper[4910]: I0226 22:17:43.368300 4910 scope.go:117] "RemoveContainer" containerID="2c2826db128ffc0cf57feffdc85c783433554cbb7bb78d5c57ca1e4af3e7d85c" Feb 26 22:17:43 crc kubenswrapper[4910]: I0226 22:17:43.368317 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-pjt4v" Feb 26 22:17:43 crc kubenswrapper[4910]: I0226 22:17:43.382843 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7696b9558b-vr9cd"] Feb 26 22:17:43 crc kubenswrapper[4910]: I0226 22:17:43.383688 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7b97844b46-5cn8n" podStartSLOduration=3.383666738 podStartE2EDuration="3.383666738s" podCreationTimestamp="2026-02-26 22:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:17:43.337004749 +0000 UTC m=+1348.416495290" watchObservedRunningTime="2026-02-26 22:17:43.383666738 +0000 UTC m=+1348.463157279" Feb 26 22:17:43 crc kubenswrapper[4910]: I0226 22:17:43.472477 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-f7nff"] Feb 26 22:17:43 crc kubenswrapper[4910]: I0226 22:17:43.476829 4910 scope.go:117] "RemoveContainer" containerID="c415c6938b9542143a51f84d77ed95f82c6d8701f96f5d7df538215ac095f3f6" Feb 26 22:17:43 
crc kubenswrapper[4910]: I0226 22:17:43.493281 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-f7nff"] Feb 26 22:17:43 crc kubenswrapper[4910]: I0226 22:17:43.515481 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-pjt4v"] Feb 26 22:17:43 crc kubenswrapper[4910]: I0226 22:17:43.538740 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-pjt4v"] Feb 26 22:17:43 crc kubenswrapper[4910]: W0226 22:17:43.542237 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8f9c22d_356d_4c49_bc7f_054f480770ec.slice/crio-16c3b8ea06d7b468ee46eb1e615cc3dbfda5a0b0a55491481891470555d6cfe6 WatchSource:0}: Error finding container 16c3b8ea06d7b468ee46eb1e615cc3dbfda5a0b0a55491481891470555d6cfe6: Status 404 returned error can't find the container with id 16c3b8ea06d7b468ee46eb1e615cc3dbfda5a0b0a55491481891470555d6cfe6 Feb 26 22:17:43 crc kubenswrapper[4910]: I0226 22:17:43.558891 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9b8rx"] Feb 26 22:17:43 crc kubenswrapper[4910]: I0226 22:17:43.574721 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7c8489996f-ljk4s"] Feb 26 22:17:43 crc kubenswrapper[4910]: I0226 22:17:43.593978 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6f66df7474-bqlkd"] Feb 26 22:17:43 crc kubenswrapper[4910]: W0226 22:17:43.645433 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae349f4d_1586_4ba1_9b81_e84503327e71.slice/crio-ef9521bbc499bf6d47cfa6f62ba9207ece01a8b3a300466dbe47404855f093cd WatchSource:0}: Error finding container ef9521bbc499bf6d47cfa6f62ba9207ece01a8b3a300466dbe47404855f093cd: Status 404 returned error can't find the container with id 
ef9521bbc499bf6d47cfa6f62ba9207ece01a8b3a300466dbe47404855f093cd Feb 26 22:17:43 crc kubenswrapper[4910]: I0226 22:17:43.670229 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 22:17:43 crc kubenswrapper[4910]: I0226 22:17:43.939326 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea" path="/var/lib/kubelet/pods/1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea/volumes" Feb 26 22:17:43 crc kubenswrapper[4910]: I0226 22:17:43.944526 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f829131c-e9b4-46c9-ad51-3b75e6ecfdc4" path="/var/lib/kubelet/pods/f829131c-e9b4-46c9-ad51-3b75e6ecfdc4/volumes" Feb 26 22:17:44 crc kubenswrapper[4910]: I0226 22:17:44.361638 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 26 22:17:44 crc kubenswrapper[4910]: I0226 22:17:44.361686 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 26 22:17:44 crc kubenswrapper[4910]: I0226 22:17:44.423191 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-68pwg" event={"ID":"865f4842-373e-4bc9-98cd-4ceabb03b9f9","Type":"ContainerStarted","Data":"58fb336d56abeada7aa3988e36e57a60f87372e8fc0fbefdedc16e65e5bd2be9"} Feb 26 22:17:44 crc kubenswrapper[4910]: I0226 22:17:44.433655 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7c8489996f-ljk4s" event={"ID":"42767c30-5b6b-4df6-9237-962c97165901","Type":"ContainerStarted","Data":"d848a6398501194a36a0a47400cd77546e40755551633d893b04e4e6c99d48c2"} Feb 26 22:17:44 crc kubenswrapper[4910]: I0226 22:17:44.434972 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a","Type":"ContainerStarted","Data":"191c1723a14881e5feb5e81be2b31fdd1163482e930660f7b7051c25fb03f0ec"} Feb 26 22:17:44 crc kubenswrapper[4910]: I0226 22:17:44.436229 4910 generic.go:334] "Generic (PLEG): container finished" podID="d8f9c22d-356d-4c49-bc7f-054f480770ec" containerID="d258b7cc7539c8b5434ef56335bac02a21cc0cb040e8f1a690c49f5d4b8c5d04" exitCode=0 Feb 26 22:17:44 crc kubenswrapper[4910]: I0226 22:17:44.436287 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" event={"ID":"d8f9c22d-356d-4c49-bc7f-054f480770ec","Type":"ContainerDied","Data":"d258b7cc7539c8b5434ef56335bac02a21cc0cb040e8f1a690c49f5d4b8c5d04"} Feb 26 22:17:44 crc kubenswrapper[4910]: I0226 22:17:44.436307 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" event={"ID":"d8f9c22d-356d-4c49-bc7f-054f480770ec","Type":"ContainerStarted","Data":"16c3b8ea06d7b468ee46eb1e615cc3dbfda5a0b0a55491481891470555d6cfe6"} Feb 26 22:17:44 crc kubenswrapper[4910]: I0226 22:17:44.439136 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 26 22:17:44 crc kubenswrapper[4910]: I0226 22:17:44.439194 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 26 22:17:44 crc kubenswrapper[4910]: I0226 22:17:44.445741 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-65889bccf5-h979r" event={"ID":"752b8add-84e6-43cd-9907-6e190e335fe7","Type":"ContainerStarted","Data":"c9b5fd76657b3994f94739a28721b7eb1692d939154e2a0e594d162b7b3b11c8"} Feb 26 22:17:44 crc kubenswrapper[4910]: I0226 22:17:44.448026 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7696b9558b-vr9cd" 
event={"ID":"f7238de5-2d97-4467-b06f-937763173cac","Type":"ContainerStarted","Data":"8ac91f66a76d68f33c4bd0afdb68513e58a59132919e75085442901ad771e045"} Feb 26 22:17:44 crc kubenswrapper[4910]: I0226 22:17:44.448463 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-sync-68pwg" podStartSLOduration=3.945534457 podStartE2EDuration="58.448446129s" podCreationTimestamp="2026-02-26 22:16:46 +0000 UTC" firstStartedPulling="2026-02-26 22:16:48.656001108 +0000 UTC m=+1293.735491659" lastFinishedPulling="2026-02-26 22:17:43.15891279 +0000 UTC m=+1348.238403331" observedRunningTime="2026-02-26 22:17:44.438700488 +0000 UTC m=+1349.518191029" watchObservedRunningTime="2026-02-26 22:17:44.448446129 +0000 UTC m=+1349.527936660" Feb 26 22:17:44 crc kubenswrapper[4910]: I0226 22:17:44.450777 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57dd4f98-9xlb7" event={"ID":"cae6f131-7b0b-4146-a1ed-640ad6302dca","Type":"ContainerStarted","Data":"3cacdacff8e1199ddf5f9d3d2c14aea84c9ae3197afd95e78c2492f2cc925c05"} Feb 26 22:17:44 crc kubenswrapper[4910]: I0226 22:17:44.452394 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-57dd4f98-9xlb7" Feb 26 22:17:44 crc kubenswrapper[4910]: I0226 22:17:44.452420 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-57dd4f98-9xlb7" Feb 26 22:17:44 crc kubenswrapper[4910]: I0226 22:17:44.455387 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f66df7474-bqlkd" event={"ID":"25964a73-e44b-492b-9c2d-35c2ae2e935a","Type":"ContainerStarted","Data":"ae32a9ba55280c8fb8a0963be3e71d0adedd1c612f3b9db2f7265ee90c42c365"} Feb 26 22:17:44 crc kubenswrapper[4910]: I0226 22:17:44.455418 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f66df7474-bqlkd" 
event={"ID":"25964a73-e44b-492b-9c2d-35c2ae2e935a","Type":"ContainerStarted","Data":"588e7915757fa0a7f11fefcbff6a175b3201041fb8ec3acaac11fbc2bf555c80"} Feb 26 22:17:44 crc kubenswrapper[4910]: I0226 22:17:44.455428 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f66df7474-bqlkd" event={"ID":"25964a73-e44b-492b-9c2d-35c2ae2e935a","Type":"ContainerStarted","Data":"5e3af63cae2fc82bfda54dab68ec436c35fe3fee7e0eee479db8ccbde8258e06"} Feb 26 22:17:44 crc kubenswrapper[4910]: I0226 22:17:44.455958 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6f66df7474-bqlkd" Feb 26 22:17:44 crc kubenswrapper[4910]: I0226 22:17:44.455984 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6f66df7474-bqlkd" Feb 26 22:17:44 crc kubenswrapper[4910]: I0226 22:17:44.465226 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ae349f4d-1586-4ba1-9b81-e84503327e71","Type":"ContainerStarted","Data":"ef9521bbc499bf6d47cfa6f62ba9207ece01a8b3a300466dbe47404855f093cd"} Feb 26 22:17:44 crc kubenswrapper[4910]: I0226 22:17:44.491303 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-57dd4f98-9xlb7" podStartSLOduration=4.491287942 podStartE2EDuration="4.491287942s" podCreationTimestamp="2026-02-26 22:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:17:44.487657451 +0000 UTC m=+1349.567147992" watchObservedRunningTime="2026-02-26 22:17:44.491287942 +0000 UTC m=+1349.570778483" Feb 26 22:17:44 crc kubenswrapper[4910]: I0226 22:17:44.519506 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6f66df7474-bqlkd" podStartSLOduration=3.519487808 podStartE2EDuration="3.519487808s" podCreationTimestamp="2026-02-26 22:17:41 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:17:44.518392997 +0000 UTC m=+1349.597883538" watchObservedRunningTime="2026-02-26 22:17:44.519487808 +0000 UTC m=+1349.598978349" Feb 26 22:17:44 crc kubenswrapper[4910]: I0226 22:17:44.534955 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 26 22:17:44 crc kubenswrapper[4910]: I0226 22:17:44.536531 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 26 22:17:44 crc kubenswrapper[4910]: I0226 22:17:44.575263 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 26 22:17:44 crc kubenswrapper[4910]: I0226 22:17:44.610864 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 26 22:17:44 crc kubenswrapper[4910]: I0226 22:17:44.630303 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 26 22:17:45 crc kubenswrapper[4910]: I0226 22:17:45.478044 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a","Type":"ContainerStarted","Data":"05270efdc25c9c6e59de40613af4951e65f50a4d07ad7586911f579a6b687217"} Feb 26 22:17:45 crc kubenswrapper[4910]: I0226 22:17:45.478108 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 26 22:17:45 crc kubenswrapper[4910]: I0226 22:17:45.479575 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 26 22:17:45 crc kubenswrapper[4910]: I0226 22:17:45.479978 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-internal-api-0" Feb 26 22:17:45 crc kubenswrapper[4910]: I0226 22:17:45.514975 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-799bb5d856-9h5p7" Feb 26 22:17:45 crc kubenswrapper[4910]: I0226 22:17:45.737051 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 26 22:17:45 crc kubenswrapper[4910]: I0226 22:17:45.772207 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 26 22:17:45 crc kubenswrapper[4910]: I0226 22:17:45.820411 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-56765bc48f-nmqd6"] Feb 26 22:17:45 crc kubenswrapper[4910]: I0226 22:17:45.820715 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-56765bc48f-nmqd6" podUID="7088af71-8215-43a1-b8e9-a23d8ff28d96" containerName="neutron-api" containerID="cri-o://cf8fa30fbe6b583625ac35bf385fcef0597b8eb3ae6cbfaed4287804007a4019" gracePeriod=30 Feb 26 22:17:45 crc kubenswrapper[4910]: I0226 22:17:45.821726 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-56765bc48f-nmqd6" podUID="7088af71-8215-43a1-b8e9-a23d8ff28d96" containerName="neutron-httpd" containerID="cri-o://78596e1da9f542447f4d12ee7de6a7fe017b62c4f7fd1f6668608f62c21bf454" gracePeriod=30 Feb 26 22:17:45 crc kubenswrapper[4910]: I0226 22:17:45.828884 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-74c6855597-x8m7j"] Feb 26 22:17:45 crc kubenswrapper[4910]: E0226 22:17:45.829525 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea" containerName="dnsmasq-dns" Feb 26 22:17:45 crc kubenswrapper[4910]: I0226 22:17:45.829543 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea" 
containerName="dnsmasq-dns" Feb 26 22:17:45 crc kubenswrapper[4910]: E0226 22:17:45.829591 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea" containerName="init" Feb 26 22:17:45 crc kubenswrapper[4910]: I0226 22:17:45.829601 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea" containerName="init" Feb 26 22:17:45 crc kubenswrapper[4910]: I0226 22:17:45.829868 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c7092c9-5c64-48ce-ac3b-a9dd5c4bd3ea" containerName="dnsmasq-dns" Feb 26 22:17:45 crc kubenswrapper[4910]: I0226 22:17:45.840390 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74c6855597-x8m7j" Feb 26 22:17:45 crc kubenswrapper[4910]: I0226 22:17:45.855524 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74c6855597-x8m7j"] Feb 26 22:17:45 crc kubenswrapper[4910]: I0226 22:17:45.878981 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-56765bc48f-nmqd6" Feb 26 22:17:45 crc kubenswrapper[4910]: I0226 22:17:45.967647 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3a1ade6-feae-41ab-97e4-120b9d55cfdf-ovndb-tls-certs\") pod \"neutron-74c6855597-x8m7j\" (UID: \"b3a1ade6-feae-41ab-97e4-120b9d55cfdf\") " pod="openstack/neutron-74c6855597-x8m7j" Feb 26 22:17:45 crc kubenswrapper[4910]: I0226 22:17:45.967718 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b3a1ade6-feae-41ab-97e4-120b9d55cfdf-httpd-config\") pod \"neutron-74c6855597-x8m7j\" (UID: \"b3a1ade6-feae-41ab-97e4-120b9d55cfdf\") " pod="openstack/neutron-74c6855597-x8m7j" Feb 26 22:17:45 crc kubenswrapper[4910]: I0226 22:17:45.967750 4910 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr75f\" (UniqueName: \"kubernetes.io/projected/b3a1ade6-feae-41ab-97e4-120b9d55cfdf-kube-api-access-hr75f\") pod \"neutron-74c6855597-x8m7j\" (UID: \"b3a1ade6-feae-41ab-97e4-120b9d55cfdf\") " pod="openstack/neutron-74c6855597-x8m7j" Feb 26 22:17:45 crc kubenswrapper[4910]: I0226 22:17:45.967802 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3a1ade6-feae-41ab-97e4-120b9d55cfdf-combined-ca-bundle\") pod \"neutron-74c6855597-x8m7j\" (UID: \"b3a1ade6-feae-41ab-97e4-120b9d55cfdf\") " pod="openstack/neutron-74c6855597-x8m7j" Feb 26 22:17:45 crc kubenswrapper[4910]: I0226 22:17:45.967823 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b3a1ade6-feae-41ab-97e4-120b9d55cfdf-config\") pod \"neutron-74c6855597-x8m7j\" (UID: \"b3a1ade6-feae-41ab-97e4-120b9d55cfdf\") " pod="openstack/neutron-74c6855597-x8m7j" Feb 26 22:17:45 crc kubenswrapper[4910]: I0226 22:17:45.967905 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3a1ade6-feae-41ab-97e4-120b9d55cfdf-public-tls-certs\") pod \"neutron-74c6855597-x8m7j\" (UID: \"b3a1ade6-feae-41ab-97e4-120b9d55cfdf\") " pod="openstack/neutron-74c6855597-x8m7j" Feb 26 22:17:45 crc kubenswrapper[4910]: I0226 22:17:45.967937 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3a1ade6-feae-41ab-97e4-120b9d55cfdf-internal-tls-certs\") pod \"neutron-74c6855597-x8m7j\" (UID: \"b3a1ade6-feae-41ab-97e4-120b9d55cfdf\") " pod="openstack/neutron-74c6855597-x8m7j" Feb 26 22:17:46 crc kubenswrapper[4910]: I0226 
22:17:46.069243 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3a1ade6-feae-41ab-97e4-120b9d55cfdf-internal-tls-certs\") pod \"neutron-74c6855597-x8m7j\" (UID: \"b3a1ade6-feae-41ab-97e4-120b9d55cfdf\") " pod="openstack/neutron-74c6855597-x8m7j" Feb 26 22:17:46 crc kubenswrapper[4910]: I0226 22:17:46.069335 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3a1ade6-feae-41ab-97e4-120b9d55cfdf-ovndb-tls-certs\") pod \"neutron-74c6855597-x8m7j\" (UID: \"b3a1ade6-feae-41ab-97e4-120b9d55cfdf\") " pod="openstack/neutron-74c6855597-x8m7j" Feb 26 22:17:46 crc kubenswrapper[4910]: I0226 22:17:46.069370 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b3a1ade6-feae-41ab-97e4-120b9d55cfdf-httpd-config\") pod \"neutron-74c6855597-x8m7j\" (UID: \"b3a1ade6-feae-41ab-97e4-120b9d55cfdf\") " pod="openstack/neutron-74c6855597-x8m7j" Feb 26 22:17:46 crc kubenswrapper[4910]: I0226 22:17:46.069397 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr75f\" (UniqueName: \"kubernetes.io/projected/b3a1ade6-feae-41ab-97e4-120b9d55cfdf-kube-api-access-hr75f\") pod \"neutron-74c6855597-x8m7j\" (UID: \"b3a1ade6-feae-41ab-97e4-120b9d55cfdf\") " pod="openstack/neutron-74c6855597-x8m7j" Feb 26 22:17:46 crc kubenswrapper[4910]: I0226 22:17:46.069437 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3a1ade6-feae-41ab-97e4-120b9d55cfdf-combined-ca-bundle\") pod \"neutron-74c6855597-x8m7j\" (UID: \"b3a1ade6-feae-41ab-97e4-120b9d55cfdf\") " pod="openstack/neutron-74c6855597-x8m7j" Feb 26 22:17:46 crc kubenswrapper[4910]: I0226 22:17:46.069454 4910 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b3a1ade6-feae-41ab-97e4-120b9d55cfdf-config\") pod \"neutron-74c6855597-x8m7j\" (UID: \"b3a1ade6-feae-41ab-97e4-120b9d55cfdf\") " pod="openstack/neutron-74c6855597-x8m7j" Feb 26 22:17:46 crc kubenswrapper[4910]: I0226 22:17:46.069517 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3a1ade6-feae-41ab-97e4-120b9d55cfdf-public-tls-certs\") pod \"neutron-74c6855597-x8m7j\" (UID: \"b3a1ade6-feae-41ab-97e4-120b9d55cfdf\") " pod="openstack/neutron-74c6855597-x8m7j" Feb 26 22:17:46 crc kubenswrapper[4910]: I0226 22:17:46.077645 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3a1ade6-feae-41ab-97e4-120b9d55cfdf-public-tls-certs\") pod \"neutron-74c6855597-x8m7j\" (UID: \"b3a1ade6-feae-41ab-97e4-120b9d55cfdf\") " pod="openstack/neutron-74c6855597-x8m7j" Feb 26 22:17:46 crc kubenswrapper[4910]: I0226 22:17:46.079617 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b3a1ade6-feae-41ab-97e4-120b9d55cfdf-config\") pod \"neutron-74c6855597-x8m7j\" (UID: \"b3a1ade6-feae-41ab-97e4-120b9d55cfdf\") " pod="openstack/neutron-74c6855597-x8m7j" Feb 26 22:17:46 crc kubenswrapper[4910]: I0226 22:17:46.081411 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3a1ade6-feae-41ab-97e4-120b9d55cfdf-combined-ca-bundle\") pod \"neutron-74c6855597-x8m7j\" (UID: \"b3a1ade6-feae-41ab-97e4-120b9d55cfdf\") " pod="openstack/neutron-74c6855597-x8m7j" Feb 26 22:17:46 crc kubenswrapper[4910]: I0226 22:17:46.085817 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3a1ade6-feae-41ab-97e4-120b9d55cfdf-ovndb-tls-certs\") pod 
\"neutron-74c6855597-x8m7j\" (UID: \"b3a1ade6-feae-41ab-97e4-120b9d55cfdf\") " pod="openstack/neutron-74c6855597-x8m7j" Feb 26 22:17:46 crc kubenswrapper[4910]: I0226 22:17:46.092677 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b3a1ade6-feae-41ab-97e4-120b9d55cfdf-httpd-config\") pod \"neutron-74c6855597-x8m7j\" (UID: \"b3a1ade6-feae-41ab-97e4-120b9d55cfdf\") " pod="openstack/neutron-74c6855597-x8m7j" Feb 26 22:17:46 crc kubenswrapper[4910]: I0226 22:17:46.095310 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3a1ade6-feae-41ab-97e4-120b9d55cfdf-internal-tls-certs\") pod \"neutron-74c6855597-x8m7j\" (UID: \"b3a1ade6-feae-41ab-97e4-120b9d55cfdf\") " pod="openstack/neutron-74c6855597-x8m7j" Feb 26 22:17:46 crc kubenswrapper[4910]: I0226 22:17:46.096825 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr75f\" (UniqueName: \"kubernetes.io/projected/b3a1ade6-feae-41ab-97e4-120b9d55cfdf-kube-api-access-hr75f\") pod \"neutron-74c6855597-x8m7j\" (UID: \"b3a1ade6-feae-41ab-97e4-120b9d55cfdf\") " pod="openstack/neutron-74c6855597-x8m7j" Feb 26 22:17:46 crc kubenswrapper[4910]: I0226 22:17:46.177356 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-74c6855597-x8m7j" Feb 26 22:17:46 crc kubenswrapper[4910]: I0226 22:17:46.489743 4910 generic.go:334] "Generic (PLEG): container finished" podID="7088af71-8215-43a1-b8e9-a23d8ff28d96" containerID="78596e1da9f542447f4d12ee7de6a7fe017b62c4f7fd1f6668608f62c21bf454" exitCode=0 Feb 26 22:17:46 crc kubenswrapper[4910]: I0226 22:17:46.489788 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56765bc48f-nmqd6" event={"ID":"7088af71-8215-43a1-b8e9-a23d8ff28d96","Type":"ContainerDied","Data":"78596e1da9f542447f4d12ee7de6a7fe017b62c4f7fd1f6668608f62c21bf454"} Feb 26 22:17:46 crc kubenswrapper[4910]: I0226 22:17:46.491134 4910 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 22:17:46 crc kubenswrapper[4910]: I0226 22:17:46.507395 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.278122 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-57dd4f98-9xlb7"] Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.315887 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-68496f8578-p9hfm"] Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.317498 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-68496f8578-p9hfm" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.323491 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.323709 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.336574 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-68496f8578-p9hfm"] Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.507311 4910 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.507536 4910 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.513500 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/265c7bd0-7cd6-46dc-a186-b3039ae95224-public-tls-certs\") pod \"barbican-api-68496f8578-p9hfm\" (UID: \"265c7bd0-7cd6-46dc-a186-b3039ae95224\") " pod="openstack/barbican-api-68496f8578-p9hfm" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.551309 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/265c7bd0-7cd6-46dc-a186-b3039ae95224-config-data-custom\") pod \"barbican-api-68496f8578-p9hfm\" (UID: \"265c7bd0-7cd6-46dc-a186-b3039ae95224\") " pod="openstack/barbican-api-68496f8578-p9hfm" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.551452 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/265c7bd0-7cd6-46dc-a186-b3039ae95224-config-data\") pod \"barbican-api-68496f8578-p9hfm\" (UID: 
\"265c7bd0-7cd6-46dc-a186-b3039ae95224\") " pod="openstack/barbican-api-68496f8578-p9hfm" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.551518 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/265c7bd0-7cd6-46dc-a186-b3039ae95224-logs\") pod \"barbican-api-68496f8578-p9hfm\" (UID: \"265c7bd0-7cd6-46dc-a186-b3039ae95224\") " pod="openstack/barbican-api-68496f8578-p9hfm" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.551906 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/265c7bd0-7cd6-46dc-a186-b3039ae95224-combined-ca-bundle\") pod \"barbican-api-68496f8578-p9hfm\" (UID: \"265c7bd0-7cd6-46dc-a186-b3039ae95224\") " pod="openstack/barbican-api-68496f8578-p9hfm" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.551935 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/265c7bd0-7cd6-46dc-a186-b3039ae95224-internal-tls-certs\") pod \"barbican-api-68496f8578-p9hfm\" (UID: \"265c7bd0-7cd6-46dc-a186-b3039ae95224\") " pod="openstack/barbican-api-68496f8578-p9hfm" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.552025 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjq7k\" (UniqueName: \"kubernetes.io/projected/265c7bd0-7cd6-46dc-a186-b3039ae95224-kube-api-access-qjq7k\") pod \"barbican-api-68496f8578-p9hfm\" (UID: \"265c7bd0-7cd6-46dc-a186-b3039ae95224\") " pod="openstack/barbican-api-68496f8578-p9hfm" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.514808 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-57dd4f98-9xlb7" podUID="cae6f131-7b0b-4146-a1ed-640ad6302dca" containerName="barbican-api-log" 
containerID="cri-o://998266ec0d2d40dabae2acdf29d95fabda8d4512cc367b4fdc9eee06bdbd4aee" gracePeriod=30 Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.515701 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-57dd4f98-9xlb7" podUID="cae6f131-7b0b-4146-a1ed-640ad6302dca" containerName="barbican-api" containerID="cri-o://3cacdacff8e1199ddf5f9d3d2c14aea84c9ae3197afd95e78c2492f2cc925c05" gracePeriod=30 Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.507386 4910 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.552384 4910 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.516084 4910 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-56765bc48f-nmqd6" podUID="7088af71-8215-43a1-b8e9-a23d8ff28d96" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.177:9696/\": dial tcp 10.217.0.177:9696: connect: connection refused" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.653661 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/265c7bd0-7cd6-46dc-a186-b3039ae95224-logs\") pod \"barbican-api-68496f8578-p9hfm\" (UID: \"265c7bd0-7cd6-46dc-a186-b3039ae95224\") " pod="openstack/barbican-api-68496f8578-p9hfm" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.653879 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/265c7bd0-7cd6-46dc-a186-b3039ae95224-combined-ca-bundle\") pod \"barbican-api-68496f8578-p9hfm\" (UID: \"265c7bd0-7cd6-46dc-a186-b3039ae95224\") " pod="openstack/barbican-api-68496f8578-p9hfm" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.653898 4910 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/265c7bd0-7cd6-46dc-a186-b3039ae95224-internal-tls-certs\") pod \"barbican-api-68496f8578-p9hfm\" (UID: \"265c7bd0-7cd6-46dc-a186-b3039ae95224\") " pod="openstack/barbican-api-68496f8578-p9hfm" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.653952 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjq7k\" (UniqueName: \"kubernetes.io/projected/265c7bd0-7cd6-46dc-a186-b3039ae95224-kube-api-access-qjq7k\") pod \"barbican-api-68496f8578-p9hfm\" (UID: \"265c7bd0-7cd6-46dc-a186-b3039ae95224\") " pod="openstack/barbican-api-68496f8578-p9hfm" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.654011 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/265c7bd0-7cd6-46dc-a186-b3039ae95224-public-tls-certs\") pod \"barbican-api-68496f8578-p9hfm\" (UID: \"265c7bd0-7cd6-46dc-a186-b3039ae95224\") " pod="openstack/barbican-api-68496f8578-p9hfm" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.654057 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/265c7bd0-7cd6-46dc-a186-b3039ae95224-config-data-custom\") pod \"barbican-api-68496f8578-p9hfm\" (UID: \"265c7bd0-7cd6-46dc-a186-b3039ae95224\") " pod="openstack/barbican-api-68496f8578-p9hfm" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.654114 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/265c7bd0-7cd6-46dc-a186-b3039ae95224-config-data\") pod \"barbican-api-68496f8578-p9hfm\" (UID: \"265c7bd0-7cd6-46dc-a186-b3039ae95224\") " pod="openstack/barbican-api-68496f8578-p9hfm" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.654965 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/265c7bd0-7cd6-46dc-a186-b3039ae95224-logs\") pod \"barbican-api-68496f8578-p9hfm\" (UID: \"265c7bd0-7cd6-46dc-a186-b3039ae95224\") " pod="openstack/barbican-api-68496f8578-p9hfm" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.660762 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/265c7bd0-7cd6-46dc-a186-b3039ae95224-public-tls-certs\") pod \"barbican-api-68496f8578-p9hfm\" (UID: \"265c7bd0-7cd6-46dc-a186-b3039ae95224\") " pod="openstack/barbican-api-68496f8578-p9hfm" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.662193 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/265c7bd0-7cd6-46dc-a186-b3039ae95224-combined-ca-bundle\") pod \"barbican-api-68496f8578-p9hfm\" (UID: \"265c7bd0-7cd6-46dc-a186-b3039ae95224\") " pod="openstack/barbican-api-68496f8578-p9hfm" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.665672 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/265c7bd0-7cd6-46dc-a186-b3039ae95224-config-data-custom\") pod \"barbican-api-68496f8578-p9hfm\" (UID: \"265c7bd0-7cd6-46dc-a186-b3039ae95224\") " pod="openstack/barbican-api-68496f8578-p9hfm" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.668372 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjq7k\" (UniqueName: \"kubernetes.io/projected/265c7bd0-7cd6-46dc-a186-b3039ae95224-kube-api-access-qjq7k\") pod \"barbican-api-68496f8578-p9hfm\" (UID: \"265c7bd0-7cd6-46dc-a186-b3039ae95224\") " pod="openstack/barbican-api-68496f8578-p9hfm" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.670613 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/265c7bd0-7cd6-46dc-a186-b3039ae95224-config-data\") pod \"barbican-api-68496f8578-p9hfm\" (UID: \"265c7bd0-7cd6-46dc-a186-b3039ae95224\") " pod="openstack/barbican-api-68496f8578-p9hfm" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.674946 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/265c7bd0-7cd6-46dc-a186-b3039ae95224-internal-tls-certs\") pod \"barbican-api-68496f8578-p9hfm\" (UID: \"265c7bd0-7cd6-46dc-a186-b3039ae95224\") " pod="openstack/barbican-api-68496f8578-p9hfm" Feb 26 22:17:47 crc kubenswrapper[4910]: I0226 22:17:47.912207 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-68496f8578-p9hfm" Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.017011 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74c6855597-x8m7j"] Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.381940 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.440215 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.597639 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-65889bccf5-h979r" event={"ID":"752b8add-84e6-43cd-9907-6e190e335fe7","Type":"ContainerStarted","Data":"a27d5b3f1688a640e7dd6d8fb6c4da781ff16263c0572877300c7009a3fd1442"} Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.600583 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c6855597-x8m7j" event={"ID":"b3a1ade6-feae-41ab-97e4-120b9d55cfdf","Type":"ContainerStarted","Data":"08dade36d711f1ee016e518f8b7aec98bcf13e1d5516a861be04093582e5e31e"} Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 
22:17:48.604911 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-68496f8578-p9hfm"] Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.613805 4910 generic.go:334] "Generic (PLEG): container finished" podID="cae6f131-7b0b-4146-a1ed-640ad6302dca" containerID="3cacdacff8e1199ddf5f9d3d2c14aea84c9ae3197afd95e78c2492f2cc925c05" exitCode=0 Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.613834 4910 generic.go:334] "Generic (PLEG): container finished" podID="cae6f131-7b0b-4146-a1ed-640ad6302dca" containerID="998266ec0d2d40dabae2acdf29d95fabda8d4512cc367b4fdc9eee06bdbd4aee" exitCode=143 Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.613976 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57dd4f98-9xlb7" event={"ID":"cae6f131-7b0b-4146-a1ed-640ad6302dca","Type":"ContainerDied","Data":"3cacdacff8e1199ddf5f9d3d2c14aea84c9ae3197afd95e78c2492f2cc925c05"} Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.614029 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57dd4f98-9xlb7" event={"ID":"cae6f131-7b0b-4146-a1ed-640ad6302dca","Type":"ContainerDied","Data":"998266ec0d2d40dabae2acdf29d95fabda8d4512cc367b4fdc9eee06bdbd4aee"} Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.614040 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57dd4f98-9xlb7" event={"ID":"cae6f131-7b0b-4146-a1ed-640ad6302dca","Type":"ContainerDied","Data":"e10db6458e7204b47894ccb1a9ac66065da79b65a56da8a36853186307b58017"} Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.614049 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e10db6458e7204b47894ccb1a9ac66065da79b65a56da8a36853186307b58017" Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.628971 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-57dd4f98-9xlb7" Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.629518 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5dd849cb94-qw888" event={"ID":"64d0c766-199e-40bf-b21c-ed64d433a17d","Type":"ContainerStarted","Data":"415c2a4506937616be89a8e65a049d3b9490e13f122541e80d0c1544e8845fd3"} Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.635606 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7696b9558b-vr9cd" event={"ID":"f7238de5-2d97-4467-b06f-937763173cac","Type":"ContainerStarted","Data":"159a8bbaa293901f1f74df70000efa06e0946da32bc32bac7d183441074426e1"} Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.642644 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7c8489996f-ljk4s" event={"ID":"42767c30-5b6b-4df6-9237-962c97165901","Type":"ContainerStarted","Data":"f3a123c7baf760b62af8766dee9492e21817cae5e5d27366dacea89bee16d7de"} Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.663649 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" event={"ID":"d8f9c22d-356d-4c49-bc7f-054f480770ec","Type":"ContainerStarted","Data":"9f86891f1b4382c0250b50f5bd259177d00860c59e31f57ee763293c4d35c44c"} Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.664621 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.685991 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7c8489996f-ljk4s" podStartSLOduration=4.87796891 podStartE2EDuration="8.685972952s" podCreationTimestamp="2026-02-26 22:17:40 +0000 UTC" firstStartedPulling="2026-02-26 22:17:43.638403072 +0000 UTC m=+1348.717893613" lastFinishedPulling="2026-02-26 22:17:47.446407114 +0000 UTC 
m=+1352.525897655" observedRunningTime="2026-02-26 22:17:48.676116498 +0000 UTC m=+1353.755607039" watchObservedRunningTime="2026-02-26 22:17:48.685972952 +0000 UTC m=+1353.765463493" Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.722777 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" podStartSLOduration=8.722755658 podStartE2EDuration="8.722755658s" podCreationTimestamp="2026-02-26 22:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:17:48.707897253 +0000 UTC m=+1353.787387794" watchObservedRunningTime="2026-02-26 22:17:48.722755658 +0000 UTC m=+1353.802246199" Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.811529 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae6f131-7b0b-4146-a1ed-640ad6302dca-config-data\") pod \"cae6f131-7b0b-4146-a1ed-640ad6302dca\" (UID: \"cae6f131-7b0b-4146-a1ed-640ad6302dca\") " Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.811626 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae6f131-7b0b-4146-a1ed-640ad6302dca-combined-ca-bundle\") pod \"cae6f131-7b0b-4146-a1ed-640ad6302dca\" (UID: \"cae6f131-7b0b-4146-a1ed-640ad6302dca\") " Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.811757 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cae6f131-7b0b-4146-a1ed-640ad6302dca-logs\") pod \"cae6f131-7b0b-4146-a1ed-640ad6302dca\" (UID: \"cae6f131-7b0b-4146-a1ed-640ad6302dca\") " Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.811793 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rvdb\" (UniqueName: 
\"kubernetes.io/projected/cae6f131-7b0b-4146-a1ed-640ad6302dca-kube-api-access-2rvdb\") pod \"cae6f131-7b0b-4146-a1ed-640ad6302dca\" (UID: \"cae6f131-7b0b-4146-a1ed-640ad6302dca\") " Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.811820 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cae6f131-7b0b-4146-a1ed-640ad6302dca-config-data-custom\") pod \"cae6f131-7b0b-4146-a1ed-640ad6302dca\" (UID: \"cae6f131-7b0b-4146-a1ed-640ad6302dca\") " Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.813023 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cae6f131-7b0b-4146-a1ed-640ad6302dca-logs" (OuterVolumeSpecName: "logs") pod "cae6f131-7b0b-4146-a1ed-640ad6302dca" (UID: "cae6f131-7b0b-4146-a1ed-640ad6302dca"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.825327 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae6f131-7b0b-4146-a1ed-640ad6302dca-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cae6f131-7b0b-4146-a1ed-640ad6302dca" (UID: "cae6f131-7b0b-4146-a1ed-640ad6302dca"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.828374 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cae6f131-7b0b-4146-a1ed-640ad6302dca-kube-api-access-2rvdb" (OuterVolumeSpecName: "kube-api-access-2rvdb") pod "cae6f131-7b0b-4146-a1ed-640ad6302dca" (UID: "cae6f131-7b0b-4146-a1ed-640ad6302dca"). InnerVolumeSpecName "kube-api-access-2rvdb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.916003 4910 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cae6f131-7b0b-4146-a1ed-640ad6302dca-logs\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.916321 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rvdb\" (UniqueName: \"kubernetes.io/projected/cae6f131-7b0b-4146-a1ed-640ad6302dca-kube-api-access-2rvdb\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.916336 4910 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cae6f131-7b0b-4146-a1ed-640ad6302dca-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.940397 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae6f131-7b0b-4146-a1ed-640ad6302dca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cae6f131-7b0b-4146-a1ed-640ad6302dca" (UID: "cae6f131-7b0b-4146-a1ed-640ad6302dca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:48 crc kubenswrapper[4910]: I0226 22:17:48.963548 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae6f131-7b0b-4146-a1ed-640ad6302dca-config-data" (OuterVolumeSpecName: "config-data") pod "cae6f131-7b0b-4146-a1ed-640ad6302dca" (UID: "cae6f131-7b0b-4146-a1ed-640ad6302dca"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.020734 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae6f131-7b0b-4146-a1ed-640ad6302dca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.020778 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae6f131-7b0b-4146-a1ed-640ad6302dca-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.408087 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.408394 4910 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.438500 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.676177 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c6855597-x8m7j" event={"ID":"b3a1ade6-feae-41ab-97e4-120b9d55cfdf","Type":"ContainerStarted","Data":"145637da5e0a4949529036a4c0784dc075f1e721bddce44a33d5dd384a18a6a4"} Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.691497 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ae349f4d-1586-4ba1-9b81-e84503327e71","Type":"ContainerStarted","Data":"955c7db0d15f254a3b835b605aeb75bad069eb7d6aabe76e51ad0ab90bce2758"} Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.699432 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-56765bc48f-nmqd6" Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.714395 4910 generic.go:334] "Generic (PLEG): container finished" podID="7088af71-8215-43a1-b8e9-a23d8ff28d96" containerID="cf8fa30fbe6b583625ac35bf385fcef0597b8eb3ae6cbfaed4287804007a4019" exitCode=0 Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.714482 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56765bc48f-nmqd6" event={"ID":"7088af71-8215-43a1-b8e9-a23d8ff28d96","Type":"ContainerDied","Data":"cf8fa30fbe6b583625ac35bf385fcef0597b8eb3ae6cbfaed4287804007a4019"} Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.714513 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56765bc48f-nmqd6" event={"ID":"7088af71-8215-43a1-b8e9-a23d8ff28d96","Type":"ContainerDied","Data":"7b199dc23c816a733c2dcad64faa696f87daf4453161b57a6cace6bce6d5f1d0"} Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.714532 4910 scope.go:117] "RemoveContainer" containerID="78596e1da9f542447f4d12ee7de6a7fe017b62c4f7fd1f6668608f62c21bf454" Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.728840 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5dd849cb94-qw888" event={"ID":"64d0c766-199e-40bf-b21c-ed64d433a17d","Type":"ContainerStarted","Data":"6e2ae80df7d1ccbfe665705df9f314e36ba2b3d0a83aebfe67c580c477084175"} Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.766730 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5dd849cb94-qw888" podStartSLOduration=5.250529596 podStartE2EDuration="9.766708618s" podCreationTimestamp="2026-02-26 22:17:40 +0000 UTC" firstStartedPulling="2026-02-26 22:17:42.891224436 +0000 UTC m=+1347.970714977" lastFinishedPulling="2026-02-26 22:17:47.407403458 +0000 UTC m=+1352.486893999" observedRunningTime="2026-02-26 22:17:49.759281372 +0000 UTC 
m=+1354.838771913" watchObservedRunningTime="2026-02-26 22:17:49.766708618 +0000 UTC m=+1354.846199159" Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.770498 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7c8489996f-ljk4s" event={"ID":"42767c30-5b6b-4df6-9237-962c97165901","Type":"ContainerStarted","Data":"c32ea19b4c1bfa717248bbe7310903f17a05731c89f8e291bd67174767e9262b"} Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.780618 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-65889bccf5-h979r" event={"ID":"752b8add-84e6-43cd-9907-6e190e335fe7","Type":"ContainerStarted","Data":"cf9f3ec21ef73b17972e2acce10189b9f78e592a865d79067db67e6836223815"} Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.796300 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7696b9558b-vr9cd" event={"ID":"f7238de5-2d97-4467-b06f-937763173cac","Type":"ContainerStarted","Data":"4e52528b85071c29ecc3971fc2050cab07480de0f06400eb2f8b910e1e92f902"} Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.818258 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a","Type":"ContainerStarted","Data":"541516b7089210b60a48c826c48931b808236dc1f4303461c52573fd4af61f92"} Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.818401 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="10476c3d-2ccd-4ffd-9bef-9b6d82b4316a" containerName="cinder-api-log" containerID="cri-o://05270efdc25c9c6e59de40613af4951e65f50a4d07ad7586911f579a6b687217" gracePeriod=30 Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.818471 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.818498 4910 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openstack/cinder-api-0" podUID="10476c3d-2ccd-4ffd-9bef-9b6d82b4316a" containerName="cinder-api" containerID="cri-o://541516b7089210b60a48c826c48931b808236dc1f4303461c52573fd4af61f92" gracePeriod=30 Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.830046 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-57dd4f98-9xlb7" Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.830043 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-68496f8578-p9hfm" event={"ID":"265c7bd0-7cd6-46dc-a186-b3039ae95224","Type":"ContainerStarted","Data":"d531c050d5c502a0d01f957fa0f3268703b4a8a31a1e4f64716bdc6b150d1cdc"} Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.851488 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-httpd-config\") pod \"7088af71-8215-43a1-b8e9-a23d8ff28d96\" (UID: \"7088af71-8215-43a1-b8e9-a23d8ff28d96\") " Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.851555 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-config\") pod \"7088af71-8215-43a1-b8e9-a23d8ff28d96\" (UID: \"7088af71-8215-43a1-b8e9-a23d8ff28d96\") " Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.851629 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-public-tls-certs\") pod \"7088af71-8215-43a1-b8e9-a23d8ff28d96\" (UID: \"7088af71-8215-43a1-b8e9-a23d8ff28d96\") " Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.852200 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-65889bccf5-h979r" podStartSLOduration=5.861375296 
podStartE2EDuration="9.852182438s" podCreationTimestamp="2026-02-26 22:17:40 +0000 UTC" firstStartedPulling="2026-02-26 22:17:43.472852182 +0000 UTC m=+1348.552342723" lastFinishedPulling="2026-02-26 22:17:47.463659324 +0000 UTC m=+1352.543149865" observedRunningTime="2026-02-26 22:17:49.81237368 +0000 UTC m=+1354.891864221" watchObservedRunningTime="2026-02-26 22:17:49.852182438 +0000 UTC m=+1354.931672979" Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.856578 4910 scope.go:117] "RemoveContainer" containerID="cf8fa30fbe6b583625ac35bf385fcef0597b8eb3ae6cbfaed4287804007a4019" Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.857690 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7c8489996f-ljk4s"] Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.874908 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7696b9558b-vr9cd" podStartSLOduration=5.885739186 podStartE2EDuration="9.874889992s" podCreationTimestamp="2026-02-26 22:17:40 +0000 UTC" firstStartedPulling="2026-02-26 22:17:43.473113469 +0000 UTC m=+1348.552604000" lastFinishedPulling="2026-02-26 22:17:47.462264265 +0000 UTC m=+1352.541754806" observedRunningTime="2026-02-26 22:17:49.839236338 +0000 UTC m=+1354.918726879" watchObservedRunningTime="2026-02-26 22:17:49.874889992 +0000 UTC m=+1354.954380533" Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.907134 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-ovndb-tls-certs\") pod \"7088af71-8215-43a1-b8e9-a23d8ff28d96\" (UID: \"7088af71-8215-43a1-b8e9-a23d8ff28d96\") " Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.910396 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zfrp\" (UniqueName: 
\"kubernetes.io/projected/7088af71-8215-43a1-b8e9-a23d8ff28d96-kube-api-access-8zfrp\") pod \"7088af71-8215-43a1-b8e9-a23d8ff28d96\" (UID: \"7088af71-8215-43a1-b8e9-a23d8ff28d96\") " Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.910450 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-combined-ca-bundle\") pod \"7088af71-8215-43a1-b8e9-a23d8ff28d96\" (UID: \"7088af71-8215-43a1-b8e9-a23d8ff28d96\") " Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.910538 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-internal-tls-certs\") pod \"7088af71-8215-43a1-b8e9-a23d8ff28d96\" (UID: \"7088af71-8215-43a1-b8e9-a23d8ff28d96\") " Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.911697 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "7088af71-8215-43a1-b8e9-a23d8ff28d96" (UID: "7088af71-8215-43a1-b8e9-a23d8ff28d96"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:49 crc kubenswrapper[4910]: I0226 22:17:49.913044 4910 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.000277 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7088af71-8215-43a1-b8e9-a23d8ff28d96-kube-api-access-8zfrp" (OuterVolumeSpecName: "kube-api-access-8zfrp") pod "7088af71-8215-43a1-b8e9-a23d8ff28d96" (UID: "7088af71-8215-43a1-b8e9-a23d8ff28d96"). InnerVolumeSpecName "kube-api-access-8zfrp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.004688 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=9.004669845 podStartE2EDuration="9.004669845s" podCreationTimestamp="2026-02-26 22:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:17:49.880896439 +0000 UTC m=+1354.960386980" watchObservedRunningTime="2026-02-26 22:17:50.004669845 +0000 UTC m=+1355.084160376" Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.017011 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zfrp\" (UniqueName: \"kubernetes.io/projected/7088af71-8215-43a1-b8e9-a23d8ff28d96-kube-api-access-8zfrp\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.084361 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7088af71-8215-43a1-b8e9-a23d8ff28d96" (UID: "7088af71-8215-43a1-b8e9-a23d8ff28d96"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.101567 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7088af71-8215-43a1-b8e9-a23d8ff28d96" (UID: "7088af71-8215-43a1-b8e9-a23d8ff28d96"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.119703 4910 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.119728 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.122054 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5dd849cb94-qw888"] Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.170175 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-57dd4f98-9xlb7"] Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.172299 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-config" (OuterVolumeSpecName: "config") pod "7088af71-8215-43a1-b8e9-a23d8ff28d96" (UID: "7088af71-8215-43a1-b8e9-a23d8ff28d96"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.172780 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "7088af71-8215-43a1-b8e9-a23d8ff28d96" (UID: "7088af71-8215-43a1-b8e9-a23d8ff28d96"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.203811 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-57dd4f98-9xlb7"] Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.208229 4910 scope.go:117] "RemoveContainer" containerID="78596e1da9f542447f4d12ee7de6a7fe017b62c4f7fd1f6668608f62c21bf454" Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.210023 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7088af71-8215-43a1-b8e9-a23d8ff28d96" (UID: "7088af71-8215-43a1-b8e9-a23d8ff28d96"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:50 crc kubenswrapper[4910]: E0226 22:17:50.210125 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78596e1da9f542447f4d12ee7de6a7fe017b62c4f7fd1f6668608f62c21bf454\": container with ID starting with 78596e1da9f542447f4d12ee7de6a7fe017b62c4f7fd1f6668608f62c21bf454 not found: ID does not exist" containerID="78596e1da9f542447f4d12ee7de6a7fe017b62c4f7fd1f6668608f62c21bf454" Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.210180 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78596e1da9f542447f4d12ee7de6a7fe017b62c4f7fd1f6668608f62c21bf454"} err="failed to get container status \"78596e1da9f542447f4d12ee7de6a7fe017b62c4f7fd1f6668608f62c21bf454\": rpc error: code = NotFound desc = could not find container \"78596e1da9f542447f4d12ee7de6a7fe017b62c4f7fd1f6668608f62c21bf454\": container with ID starting with 78596e1da9f542447f4d12ee7de6a7fe017b62c4f7fd1f6668608f62c21bf454 not found: ID does not exist" Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.210206 4910 scope.go:117] 
"RemoveContainer" containerID="cf8fa30fbe6b583625ac35bf385fcef0597b8eb3ae6cbfaed4287804007a4019" Feb 26 22:17:50 crc kubenswrapper[4910]: E0226 22:17:50.211275 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf8fa30fbe6b583625ac35bf385fcef0597b8eb3ae6cbfaed4287804007a4019\": container with ID starting with cf8fa30fbe6b583625ac35bf385fcef0597b8eb3ae6cbfaed4287804007a4019 not found: ID does not exist" containerID="cf8fa30fbe6b583625ac35bf385fcef0597b8eb3ae6cbfaed4287804007a4019" Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.211298 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf8fa30fbe6b583625ac35bf385fcef0597b8eb3ae6cbfaed4287804007a4019"} err="failed to get container status \"cf8fa30fbe6b583625ac35bf385fcef0597b8eb3ae6cbfaed4287804007a4019\": rpc error: code = NotFound desc = could not find container \"cf8fa30fbe6b583625ac35bf385fcef0597b8eb3ae6cbfaed4287804007a4019\": container with ID starting with cf8fa30fbe6b583625ac35bf385fcef0597b8eb3ae6cbfaed4287804007a4019 not found: ID does not exist" Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.228328 4910 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.228360 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.228370 4910 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7088af71-8215-43a1-b8e9-a23d8ff28d96-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 
22:17:50.839517 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ae349f4d-1586-4ba1-9b81-e84503327e71","Type":"ContainerStarted","Data":"88413cbb5a208e837107c464b6f74c98e1ce29bfcf603c3237428c06442d7c44"} Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.842955 4910 generic.go:334] "Generic (PLEG): container finished" podID="865f4842-373e-4bc9-98cd-4ceabb03b9f9" containerID="58fb336d56abeada7aa3988e36e57a60f87372e8fc0fbefdedc16e65e5bd2be9" exitCode=0 Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.843011 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-68pwg" event={"ID":"865f4842-373e-4bc9-98cd-4ceabb03b9f9","Type":"ContainerDied","Data":"58fb336d56abeada7aa3988e36e57a60f87372e8fc0fbefdedc16e65e5bd2be9"} Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.844615 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56765bc48f-nmqd6" Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.850328 4910 generic.go:334] "Generic (PLEG): container finished" podID="10476c3d-2ccd-4ffd-9bef-9b6d82b4316a" containerID="05270efdc25c9c6e59de40613af4951e65f50a4d07ad7586911f579a6b687217" exitCode=143 Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.850381 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a","Type":"ContainerDied","Data":"05270efdc25c9c6e59de40613af4951e65f50a4d07ad7586911f579a6b687217"} Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.851578 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c6855597-x8m7j" event={"ID":"b3a1ade6-feae-41ab-97e4-120b9d55cfdf","Type":"ContainerStarted","Data":"d6418c168fe124bae8728b22266e961b6daf18d6574a49354003574b0850c3ff"} Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.852143 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/neutron-74c6855597-x8m7j" Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.854411 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-68496f8578-p9hfm" event={"ID":"265c7bd0-7cd6-46dc-a186-b3039ae95224","Type":"ContainerStarted","Data":"7ca946504ed609dc6f2e67559b9bab74b8d16af1990d5c309458ace56f1264e5"} Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.854434 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-68496f8578-p9hfm" event={"ID":"265c7bd0-7cd6-46dc-a186-b3039ae95224","Type":"ContainerStarted","Data":"bc9c001c6664536a53f9e57e47b57034de65ab23b04f76c6279cdba7de2ed3e3"} Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.854448 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-68496f8578-p9hfm" Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.855991 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-68496f8578-p9hfm" Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.860931 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.088302183 podStartE2EDuration="10.86091941s" podCreationTimestamp="2026-02-26 22:17:40 +0000 UTC" firstStartedPulling="2026-02-26 22:17:43.688796945 +0000 UTC m=+1348.768287486" lastFinishedPulling="2026-02-26 22:17:47.461414172 +0000 UTC m=+1352.540904713" observedRunningTime="2026-02-26 22:17:50.86022068 +0000 UTC m=+1355.939711221" watchObservedRunningTime="2026-02-26 22:17:50.86091941 +0000 UTC m=+1355.940409951" Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.892359 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-56765bc48f-nmqd6"] Feb 26 22:17:50 crc kubenswrapper[4910]: I0226 22:17:50.921882 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-56765bc48f-nmqd6"] Feb 26 22:17:50 crc 
kubenswrapper[4910]: I0226 22:17:50.922613 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-74c6855597-x8m7j" podStartSLOduration=5.922587227 podStartE2EDuration="5.922587227s" podCreationTimestamp="2026-02-26 22:17:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:17:50.898069574 +0000 UTC m=+1355.977560115" watchObservedRunningTime="2026-02-26 22:17:50.922587227 +0000 UTC m=+1356.002077778" Feb 26 22:17:51 crc kubenswrapper[4910]: I0226 22:17:50.994945 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-68496f8578-p9hfm" podStartSLOduration=3.994919211 podStartE2EDuration="3.994919211s" podCreationTimestamp="2026-02-26 22:17:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:17:50.927133613 +0000 UTC m=+1356.006624154" watchObservedRunningTime="2026-02-26 22:17:50.994919211 +0000 UTC m=+1356.074409752" Feb 26 22:17:51 crc kubenswrapper[4910]: I0226 22:17:51.053876 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 26 22:17:51 crc kubenswrapper[4910]: I0226 22:17:51.862572 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5dd849cb94-qw888" podUID="64d0c766-199e-40bf-b21c-ed64d433a17d" containerName="barbican-keystone-listener-log" containerID="cri-o://415c2a4506937616be89a8e65a049d3b9490e13f122541e80d0c1544e8845fd3" gracePeriod=30 Feb 26 22:17:51 crc kubenswrapper[4910]: I0226 22:17:51.862886 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5dd849cb94-qw888" podUID="64d0c766-199e-40bf-b21c-ed64d433a17d" containerName="barbican-keystone-listener" 
containerID="cri-o://6e2ae80df7d1ccbfe665705df9f314e36ba2b3d0a83aebfe67c580c477084175" gracePeriod=30 Feb 26 22:17:51 crc kubenswrapper[4910]: I0226 22:17:51.863043 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7c8489996f-ljk4s" podUID="42767c30-5b6b-4df6-9237-962c97165901" containerName="barbican-worker-log" containerID="cri-o://f3a123c7baf760b62af8766dee9492e21817cae5e5d27366dacea89bee16d7de" gracePeriod=30 Feb 26 22:17:51 crc kubenswrapper[4910]: I0226 22:17:51.863078 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7c8489996f-ljk4s" podUID="42767c30-5b6b-4df6-9237-962c97165901" containerName="barbican-worker" containerID="cri-o://c32ea19b4c1bfa717248bbe7310903f17a05731c89f8e291bd67174767e9262b" gracePeriod=30 Feb 26 22:17:51 crc kubenswrapper[4910]: I0226 22:17:51.918358 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7088af71-8215-43a1-b8e9-a23d8ff28d96" path="/var/lib/kubelet/pods/7088af71-8215-43a1-b8e9-a23d8ff28d96/volumes" Feb 26 22:17:51 crc kubenswrapper[4910]: I0226 22:17:51.919356 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cae6f131-7b0b-4146-a1ed-640ad6302dca" path="/var/lib/kubelet/pods/cae6f131-7b0b-4146-a1ed-640ad6302dca/volumes" Feb 26 22:17:52 crc kubenswrapper[4910]: I0226 22:17:52.882124 4910 generic.go:334] "Generic (PLEG): container finished" podID="64d0c766-199e-40bf-b21c-ed64d433a17d" containerID="6e2ae80df7d1ccbfe665705df9f314e36ba2b3d0a83aebfe67c580c477084175" exitCode=0 Feb 26 22:17:52 crc kubenswrapper[4910]: I0226 22:17:52.882373 4910 generic.go:334] "Generic (PLEG): container finished" podID="64d0c766-199e-40bf-b21c-ed64d433a17d" containerID="415c2a4506937616be89a8e65a049d3b9490e13f122541e80d0c1544e8845fd3" exitCode=143 Feb 26 22:17:52 crc kubenswrapper[4910]: I0226 22:17:52.882337 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-keystone-listener-5dd849cb94-qw888" event={"ID":"64d0c766-199e-40bf-b21c-ed64d433a17d","Type":"ContainerDied","Data":"6e2ae80df7d1ccbfe665705df9f314e36ba2b3d0a83aebfe67c580c477084175"} Feb 26 22:17:52 crc kubenswrapper[4910]: I0226 22:17:52.882430 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5dd849cb94-qw888" event={"ID":"64d0c766-199e-40bf-b21c-ed64d433a17d","Type":"ContainerDied","Data":"415c2a4506937616be89a8e65a049d3b9490e13f122541e80d0c1544e8845fd3"} Feb 26 22:17:52 crc kubenswrapper[4910]: I0226 22:17:52.890879 4910 generic.go:334] "Generic (PLEG): container finished" podID="42767c30-5b6b-4df6-9237-962c97165901" containerID="f3a123c7baf760b62af8766dee9492e21817cae5e5d27366dacea89bee16d7de" exitCode=143 Feb 26 22:17:52 crc kubenswrapper[4910]: I0226 22:17:52.890949 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7c8489996f-ljk4s" event={"ID":"42767c30-5b6b-4df6-9237-962c97165901","Type":"ContainerDied","Data":"f3a123c7baf760b62af8766dee9492e21817cae5e5d27366dacea89bee16d7de"} Feb 26 22:17:53 crc kubenswrapper[4910]: I0226 22:17:53.262371 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6f66df7474-bqlkd" Feb 26 22:17:53 crc kubenswrapper[4910]: I0226 22:17:53.436713 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6f66df7474-bqlkd" Feb 26 22:17:53 crc kubenswrapper[4910]: I0226 22:17:53.911026 4910 generic.go:334] "Generic (PLEG): container finished" podID="42767c30-5b6b-4df6-9237-962c97165901" containerID="c32ea19b4c1bfa717248bbe7310903f17a05731c89f8e291bd67174767e9262b" exitCode=0 Feb 26 22:17:53 crc kubenswrapper[4910]: I0226 22:17:53.914701 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7c8489996f-ljk4s" 
event={"ID":"42767c30-5b6b-4df6-9237-962c97165901","Type":"ContainerDied","Data":"c32ea19b4c1bfa717248bbe7310903f17a05731c89f8e291bd67174767e9262b"} Feb 26 22:17:55 crc kubenswrapper[4910]: I0226 22:17:55.728666 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:17:55 crc kubenswrapper[4910]: I0226 22:17:55.728994 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:17:55 crc kubenswrapper[4910]: I0226 22:17:55.729060 4910 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" Feb 26 22:17:55 crc kubenswrapper[4910]: I0226 22:17:55.730211 4910 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"86111bdf5fb42a19cad2fb6eff7efddfcb0bd79e217fa1c7fe5451bfc269072f"} pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 22:17:55 crc kubenswrapper[4910]: I0226 22:17:55.730318 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" containerID="cri-o://86111bdf5fb42a19cad2fb6eff7efddfcb0bd79e217fa1c7fe5451bfc269072f" gracePeriod=600 Feb 26 22:17:55 crc kubenswrapper[4910]: I0226 22:17:55.943416 4910 
generic.go:334] "Generic (PLEG): container finished" podID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerID="86111bdf5fb42a19cad2fb6eff7efddfcb0bd79e217fa1c7fe5451bfc269072f" exitCode=0 Feb 26 22:17:55 crc kubenswrapper[4910]: I0226 22:17:55.943489 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerDied","Data":"86111bdf5fb42a19cad2fb6eff7efddfcb0bd79e217fa1c7fe5451bfc269072f"} Feb 26 22:17:55 crc kubenswrapper[4910]: I0226 22:17:55.943731 4910 scope.go:117] "RemoveContainer" containerID="8ea55f05369c1b8e8cc6600dec4dd7568856f1c31173a49e14886d4d1e1c338d" Feb 26 22:17:56 crc kubenswrapper[4910]: I0226 22:17:56.329856 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 26 22:17:56 crc kubenswrapper[4910]: I0226 22:17:56.367125 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 22:17:56 crc kubenswrapper[4910]: I0226 22:17:56.509327 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" Feb 26 22:17:56 crc kubenswrapper[4910]: I0226 22:17:56.601299 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-mwkqc"] Feb 26 22:17:56 crc kubenswrapper[4910]: I0226 22:17:56.601535 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-mwkqc" podUID="c41a76f7-e78e-400a-b92b-aa690d360c6e" containerName="dnsmasq-dns" containerID="cri-o://d4318395ca9aadf5be619088bb9cd59e9aad0db0e9c99d426e02524bda1dd46c" gracePeriod=10 Feb 26 22:17:56 crc kubenswrapper[4910]: I0226 22:17:56.958068 4910 generic.go:334] "Generic (PLEG): container finished" podID="c41a76f7-e78e-400a-b92b-aa690d360c6e" containerID="d4318395ca9aadf5be619088bb9cd59e9aad0db0e9c99d426e02524bda1dd46c" exitCode=0 Feb 26 
22:17:56 crc kubenswrapper[4910]: I0226 22:17:56.958367 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-mwkqc" event={"ID":"c41a76f7-e78e-400a-b92b-aa690d360c6e","Type":"ContainerDied","Data":"d4318395ca9aadf5be619088bb9cd59e9aad0db0e9c99d426e02524bda1dd46c"} Feb 26 22:17:56 crc kubenswrapper[4910]: I0226 22:17:56.958522 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ae349f4d-1586-4ba1-9b81-e84503327e71" containerName="cinder-scheduler" containerID="cri-o://955c7db0d15f254a3b835b605aeb75bad069eb7d6aabe76e51ad0ab90bce2758" gracePeriod=30 Feb 26 22:17:56 crc kubenswrapper[4910]: I0226 22:17:56.958783 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ae349f4d-1586-4ba1-9b81-e84503327e71" containerName="probe" containerID="cri-o://88413cbb5a208e837107c464b6f74c98e1ce29bfcf603c3237428c06442d7c44" gracePeriod=30 Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.389592 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-68pwg" Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.524958 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/865f4842-373e-4bc9-98cd-4ceabb03b9f9-certs\") pod \"865f4842-373e-4bc9-98cd-4ceabb03b9f9\" (UID: \"865f4842-373e-4bc9-98cd-4ceabb03b9f9\") " Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.525295 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/865f4842-373e-4bc9-98cd-4ceabb03b9f9-scripts\") pod \"865f4842-373e-4bc9-98cd-4ceabb03b9f9\" (UID: \"865f4842-373e-4bc9-98cd-4ceabb03b9f9\") " Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.525327 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/865f4842-373e-4bc9-98cd-4ceabb03b9f9-combined-ca-bundle\") pod \"865f4842-373e-4bc9-98cd-4ceabb03b9f9\" (UID: \"865f4842-373e-4bc9-98cd-4ceabb03b9f9\") " Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.525381 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqn9g\" (UniqueName: \"kubernetes.io/projected/865f4842-373e-4bc9-98cd-4ceabb03b9f9-kube-api-access-tqn9g\") pod \"865f4842-373e-4bc9-98cd-4ceabb03b9f9\" (UID: \"865f4842-373e-4bc9-98cd-4ceabb03b9f9\") " Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.525444 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/865f4842-373e-4bc9-98cd-4ceabb03b9f9-config-data\") pod \"865f4842-373e-4bc9-98cd-4ceabb03b9f9\" (UID: \"865f4842-373e-4bc9-98cd-4ceabb03b9f9\") " Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.541383 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/865f4842-373e-4bc9-98cd-4ceabb03b9f9-certs" (OuterVolumeSpecName: "certs") pod "865f4842-373e-4bc9-98cd-4ceabb03b9f9" (UID: "865f4842-373e-4bc9-98cd-4ceabb03b9f9"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.551352 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/865f4842-373e-4bc9-98cd-4ceabb03b9f9-kube-api-access-tqn9g" (OuterVolumeSpecName: "kube-api-access-tqn9g") pod "865f4842-373e-4bc9-98cd-4ceabb03b9f9" (UID: "865f4842-373e-4bc9-98cd-4ceabb03b9f9"). InnerVolumeSpecName "kube-api-access-tqn9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.562885 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/865f4842-373e-4bc9-98cd-4ceabb03b9f9-scripts" (OuterVolumeSpecName: "scripts") pod "865f4842-373e-4bc9-98cd-4ceabb03b9f9" (UID: "865f4842-373e-4bc9-98cd-4ceabb03b9f9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.607397 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/865f4842-373e-4bc9-98cd-4ceabb03b9f9-config-data" (OuterVolumeSpecName: "config-data") pod "865f4842-373e-4bc9-98cd-4ceabb03b9f9" (UID: "865f4842-373e-4bc9-98cd-4ceabb03b9f9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.628386 4910 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/865f4842-373e-4bc9-98cd-4ceabb03b9f9-certs\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.628416 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/865f4842-373e-4bc9-98cd-4ceabb03b9f9-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.628425 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqn9g\" (UniqueName: \"kubernetes.io/projected/865f4842-373e-4bc9-98cd-4ceabb03b9f9-kube-api-access-tqn9g\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.628436 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/865f4842-373e-4bc9-98cd-4ceabb03b9f9-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.631258 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/865f4842-373e-4bc9-98cd-4ceabb03b9f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "865f4842-373e-4bc9-98cd-4ceabb03b9f9" (UID: "865f4842-373e-4bc9-98cd-4ceabb03b9f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.739152 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/865f4842-373e-4bc9-98cd-4ceabb03b9f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.768877 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7c8489996f-ljk4s" Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.841940 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m49gn\" (UniqueName: \"kubernetes.io/projected/42767c30-5b6b-4df6-9237-962c97165901-kube-api-access-m49gn\") pod \"42767c30-5b6b-4df6-9237-962c97165901\" (UID: \"42767c30-5b6b-4df6-9237-962c97165901\") " Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.842010 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42767c30-5b6b-4df6-9237-962c97165901-config-data\") pod \"42767c30-5b6b-4df6-9237-962c97165901\" (UID: \"42767c30-5b6b-4df6-9237-962c97165901\") " Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.842146 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42767c30-5b6b-4df6-9237-962c97165901-combined-ca-bundle\") pod \"42767c30-5b6b-4df6-9237-962c97165901\" (UID: \"42767c30-5b6b-4df6-9237-962c97165901\") " Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.842218 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42767c30-5b6b-4df6-9237-962c97165901-config-data-custom\") pod \"42767c30-5b6b-4df6-9237-962c97165901\" (UID: \"42767c30-5b6b-4df6-9237-962c97165901\") " Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.842333 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42767c30-5b6b-4df6-9237-962c97165901-logs\") pod \"42767c30-5b6b-4df6-9237-962c97165901\" (UID: \"42767c30-5b6b-4df6-9237-962c97165901\") " Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.845813 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/42767c30-5b6b-4df6-9237-962c97165901-logs" (OuterVolumeSpecName: "logs") pod "42767c30-5b6b-4df6-9237-962c97165901" (UID: "42767c30-5b6b-4df6-9237-962c97165901"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.859920 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42767c30-5b6b-4df6-9237-962c97165901-kube-api-access-m49gn" (OuterVolumeSpecName: "kube-api-access-m49gn") pod "42767c30-5b6b-4df6-9237-962c97165901" (UID: "42767c30-5b6b-4df6-9237-962c97165901"). InnerVolumeSpecName "kube-api-access-m49gn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.862068 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42767c30-5b6b-4df6-9237-962c97165901-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "42767c30-5b6b-4df6-9237-962c97165901" (UID: "42767c30-5b6b-4df6-9237-962c97165901"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.929214 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42767c30-5b6b-4df6-9237-962c97165901-config-data" (OuterVolumeSpecName: "config-data") pod "42767c30-5b6b-4df6-9237-962c97165901" (UID: "42767c30-5b6b-4df6-9237-962c97165901"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.944982 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m49gn\" (UniqueName: \"kubernetes.io/projected/42767c30-5b6b-4df6-9237-962c97165901-kube-api-access-m49gn\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.945020 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42767c30-5b6b-4df6-9237-962c97165901-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.945030 4910 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42767c30-5b6b-4df6-9237-962c97165901-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.945039 4910 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42767c30-5b6b-4df6-9237-962c97165901-logs\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.950227 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-mwkqc" Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.953658 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5dd849cb94-qw888" Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.987053 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-68pwg" event={"ID":"865f4842-373e-4bc9-98cd-4ceabb03b9f9","Type":"ContainerDied","Data":"69d91b00e388791c004ff8e0082135b75f8b7013cc4f8799fa43a16da492e212"} Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.991201 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69d91b00e388791c004ff8e0082135b75f8b7013cc4f8799fa43a16da492e212" Feb 26 22:17:57 crc kubenswrapper[4910]: I0226 22:17:57.991452 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-68pwg" Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.023531 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5dd849cb94-qw888" event={"ID":"64d0c766-199e-40bf-b21c-ed64d433a17d","Type":"ContainerDied","Data":"4519e0e6a7492c140be90549512b32a01c7e9d96451710669966cf87156cd16d"} Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.023579 4910 scope.go:117] "RemoveContainer" containerID="6e2ae80df7d1ccbfe665705df9f314e36ba2b3d0a83aebfe67c580c477084175" Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.023703 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5dd849cb94-qw888" Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.050400 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42767c30-5b6b-4df6-9237-962c97165901-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42767c30-5b6b-4df6-9237-962c97165901" (UID: "42767c30-5b6b-4df6-9237-962c97165901"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.050990 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7c8489996f-ljk4s" Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.051184 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7c8489996f-ljk4s" event={"ID":"42767c30-5b6b-4df6-9237-962c97165901","Type":"ContainerDied","Data":"d848a6398501194a36a0a47400cd77546e40755551633d893b04e4e6c99d48c2"} Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.055188 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c41a76f7-e78e-400a-b92b-aa690d360c6e-ovsdbserver-nb\") pod \"c41a76f7-e78e-400a-b92b-aa690d360c6e\" (UID: \"c41a76f7-e78e-400a-b92b-aa690d360c6e\") " Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.055220 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c41a76f7-e78e-400a-b92b-aa690d360c6e-ovsdbserver-sb\") pod \"c41a76f7-e78e-400a-b92b-aa690d360c6e\" (UID: \"c41a76f7-e78e-400a-b92b-aa690d360c6e\") " Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.055272 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64d0c766-199e-40bf-b21c-ed64d433a17d-config-data-custom\") pod \"64d0c766-199e-40bf-b21c-ed64d433a17d\" (UID: \"64d0c766-199e-40bf-b21c-ed64d433a17d\") " Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.055713 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcsjt\" (UniqueName: \"kubernetes.io/projected/64d0c766-199e-40bf-b21c-ed64d433a17d-kube-api-access-fcsjt\") pod \"64d0c766-199e-40bf-b21c-ed64d433a17d\" (UID: \"64d0c766-199e-40bf-b21c-ed64d433a17d\") " 
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.055811 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64d0c766-199e-40bf-b21c-ed64d433a17d-logs\") pod \"64d0c766-199e-40bf-b21c-ed64d433a17d\" (UID: \"64d0c766-199e-40bf-b21c-ed64d433a17d\") " Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.056785 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c41a76f7-e78e-400a-b92b-aa690d360c6e-config\") pod \"c41a76f7-e78e-400a-b92b-aa690d360c6e\" (UID: \"c41a76f7-e78e-400a-b92b-aa690d360c6e\") " Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.056857 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjv4p\" (UniqueName: \"kubernetes.io/projected/c41a76f7-e78e-400a-b92b-aa690d360c6e-kube-api-access-zjv4p\") pod \"c41a76f7-e78e-400a-b92b-aa690d360c6e\" (UID: \"c41a76f7-e78e-400a-b92b-aa690d360c6e\") " Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.056891 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64d0c766-199e-40bf-b21c-ed64d433a17d-combined-ca-bundle\") pod \"64d0c766-199e-40bf-b21c-ed64d433a17d\" (UID: \"64d0c766-199e-40bf-b21c-ed64d433a17d\") " Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.056971 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64d0c766-199e-40bf-b21c-ed64d433a17d-config-data\") pod \"64d0c766-199e-40bf-b21c-ed64d433a17d\" (UID: \"64d0c766-199e-40bf-b21c-ed64d433a17d\") " Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.057037 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c41a76f7-e78e-400a-b92b-aa690d360c6e-dns-svc\") pod 
\"c41a76f7-e78e-400a-b92b-aa690d360c6e\" (UID: \"c41a76f7-e78e-400a-b92b-aa690d360c6e\") " Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.058337 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42767c30-5b6b-4df6-9237-962c97165901-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.062220 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64d0c766-199e-40bf-b21c-ed64d433a17d-logs" (OuterVolumeSpecName: "logs") pod "64d0c766-199e-40bf-b21c-ed64d433a17d" (UID: "64d0c766-199e-40bf-b21c-ed64d433a17d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.124347 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64d0c766-199e-40bf-b21c-ed64d433a17d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "64d0c766-199e-40bf-b21c-ed64d433a17d" (UID: "64d0c766-199e-40bf-b21c-ed64d433a17d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.124569 4910 scope.go:117] "RemoveContainer" containerID="415c2a4506937616be89a8e65a049d3b9490e13f122541e80d0c1544e8845fd3" Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.124418 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c41a76f7-e78e-400a-b92b-aa690d360c6e-kube-api-access-zjv4p" (OuterVolumeSpecName: "kube-api-access-zjv4p") pod "c41a76f7-e78e-400a-b92b-aa690d360c6e" (UID: "c41a76f7-e78e-400a-b92b-aa690d360c6e"). InnerVolumeSpecName "kube-api-access-zjv4p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.150344 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64d0c766-199e-40bf-b21c-ed64d433a17d-kube-api-access-fcsjt" (OuterVolumeSpecName: "kube-api-access-fcsjt") pod "64d0c766-199e-40bf-b21c-ed64d433a17d" (UID: "64d0c766-199e-40bf-b21c-ed64d433a17d"). InnerVolumeSpecName "kube-api-access-fcsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.158255 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f","Type":"ContainerStarted","Data":"503a71679dbd7ced77c8db96b64cd96bb70a28b8ac8be1845a36025b796726eb"} Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.158616 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1959169a-37cd-4aa3-9cf4-cbbdc99dde4f" containerName="ceilometer-central-agent" containerID="cri-o://03e2c6f282e58c8aadcfe18c580736f090b601212f9feb71ba11569170b03144" gracePeriod=30 Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.158858 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.158901 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1959169a-37cd-4aa3-9cf4-cbbdc99dde4f" containerName="proxy-httpd" containerID="cri-o://503a71679dbd7ced77c8db96b64cd96bb70a28b8ac8be1845a36025b796726eb" gracePeriod=30 Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.158942 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1959169a-37cd-4aa3-9cf4-cbbdc99dde4f" containerName="sg-core" 
containerID="cri-o://86ba7cfa670f936b2c22be3638236e7de871d9a239017e74755304996f2a06cd" gracePeriod=30 Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.158976 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1959169a-37cd-4aa3-9cf4-cbbdc99dde4f" containerName="ceilometer-notification-agent" containerID="cri-o://ee50a00d864892d417a029125914b9b85eb57efea05f62fda9f836ad89dd1c9a" gracePeriod=30 Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.172335 4910 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64d0c766-199e-40bf-b21c-ed64d433a17d-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.172377 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcsjt\" (UniqueName: \"kubernetes.io/projected/64d0c766-199e-40bf-b21c-ed64d433a17d-kube-api-access-fcsjt\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.172390 4910 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64d0c766-199e-40bf-b21c-ed64d433a17d-logs\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.172400 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjv4p\" (UniqueName: \"kubernetes.io/projected/c41a76f7-e78e-400a-b92b-aa690d360c6e-kube-api-access-zjv4p\") on node \"crc\" DevicePath \"\"" Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.190076 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7c8489996f-ljk4s"] Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.202690 4910 generic.go:334] "Generic (PLEG): container finished" podID="ae349f4d-1586-4ba1-9b81-e84503327e71" containerID="88413cbb5a208e837107c464b6f74c98e1ce29bfcf603c3237428c06442d7c44" exitCode=0 Feb 26 22:17:58 crc 
kubenswrapper[4910]: I0226 22:17:58.202822 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ae349f4d-1586-4ba1-9b81-e84503327e71","Type":"ContainerDied","Data":"88413cbb5a208e837107c464b6f74c98e1ce29bfcf603c3237428c06442d7c44"} Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.213788 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerStarted","Data":"c9e7e9afe0afc45cb3107605182e65bb0e883988c1f2cfa35e317e7033cca07c"} Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.220682 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-7c8489996f-ljk4s"] Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.225577 4910 scope.go:117] "RemoveContainer" containerID="c32ea19b4c1bfa717248bbe7310903f17a05731c89f8e291bd67174767e9262b" Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.256602 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.456609994 podStartE2EDuration="1m12.256578911s" podCreationTimestamp="2026-02-26 22:16:46 +0000 UTC" firstStartedPulling="2026-02-26 22:16:48.654874157 +0000 UTC m=+1293.734364698" lastFinishedPulling="2026-02-26 22:17:57.454843074 +0000 UTC m=+1362.534333615" observedRunningTime="2026-02-26 22:17:58.215297295 +0000 UTC m=+1363.294787836" watchObservedRunningTime="2026-02-26 22:17:58.256578911 +0000 UTC m=+1363.336069452" Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.271432 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64d0c766-199e-40bf-b21c-ed64d433a17d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64d0c766-199e-40bf-b21c-ed64d433a17d" (UID: "64d0c766-199e-40bf-b21c-ed64d433a17d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.277833 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c41a76f7-e78e-400a-b92b-aa690d360c6e-config" (OuterVolumeSpecName: "config") pod "c41a76f7-e78e-400a-b92b-aa690d360c6e" (UID: "c41a76f7-e78e-400a-b92b-aa690d360c6e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.278522 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c41a76f7-e78e-400a-b92b-aa690d360c6e-config\") on node \"crc\" DevicePath \"\""
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.278548 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64d0c766-199e-40bf-b21c-ed64d433a17d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.279529 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-mwkqc" event={"ID":"c41a76f7-e78e-400a-b92b-aa690d360c6e","Type":"ContainerDied","Data":"d5bff6e206c463760a5738ce4e612d51a6d2a8807d9b75b010870e61a0305ea7"}
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.279667 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-mwkqc"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.291472 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c41a76f7-e78e-400a-b92b-aa690d360c6e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c41a76f7-e78e-400a-b92b-aa690d360c6e" (UID: "c41a76f7-e78e-400a-b92b-aa690d360c6e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.326599 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c41a76f7-e78e-400a-b92b-aa690d360c6e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c41a76f7-e78e-400a-b92b-aa690d360c6e" (UID: "c41a76f7-e78e-400a-b92b-aa690d360c6e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.333303 4910 scope.go:117] "RemoveContainer" containerID="f3a123c7baf760b62af8766dee9492e21817cae5e5d27366dacea89bee16d7de"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.346650 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c41a76f7-e78e-400a-b92b-aa690d360c6e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c41a76f7-e78e-400a-b92b-aa690d360c6e" (UID: "c41a76f7-e78e-400a-b92b-aa690d360c6e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.354369 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64d0c766-199e-40bf-b21c-ed64d433a17d-config-data" (OuterVolumeSpecName: "config-data") pod "64d0c766-199e-40bf-b21c-ed64d433a17d" (UID: "64d0c766-199e-40bf-b21c-ed64d433a17d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.375233 4910 scope.go:117] "RemoveContainer" containerID="d4318395ca9aadf5be619088bb9cd59e9aad0db0e9c99d426e02524bda1dd46c"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.385309 4910 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c41a76f7-e78e-400a-b92b-aa690d360c6e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.385332 4910 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c41a76f7-e78e-400a-b92b-aa690d360c6e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.385342 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64d0c766-199e-40bf-b21c-ed64d433a17d-config-data\") on node \"crc\" DevicePath \"\""
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.385353 4910 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c41a76f7-e78e-400a-b92b-aa690d360c6e-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.401274 4910 scope.go:117] "RemoveContainer" containerID="63e2e63d39a768c2cbe7e77b301ced6d207a5bae3f4b822642320a39497bbe94"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.580751 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-storageinit-nwmcv"]
Feb 26 22:17:58 crc kubenswrapper[4910]: E0226 22:17:58.581097 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7088af71-8215-43a1-b8e9-a23d8ff28d96" containerName="neutron-httpd"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.581114 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="7088af71-8215-43a1-b8e9-a23d8ff28d96" containerName="neutron-httpd"
Feb 26 22:17:58 crc kubenswrapper[4910]: E0226 22:17:58.581129 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae6f131-7b0b-4146-a1ed-640ad6302dca" containerName="barbican-api"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.581135 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae6f131-7b0b-4146-a1ed-640ad6302dca" containerName="barbican-api"
Feb 26 22:17:58 crc kubenswrapper[4910]: E0226 22:17:58.581143 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7088af71-8215-43a1-b8e9-a23d8ff28d96" containerName="neutron-api"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.581149 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="7088af71-8215-43a1-b8e9-a23d8ff28d96" containerName="neutron-api"
Feb 26 22:17:58 crc kubenswrapper[4910]: E0226 22:17:58.581247 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c41a76f7-e78e-400a-b92b-aa690d360c6e" containerName="dnsmasq-dns"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.581254 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="c41a76f7-e78e-400a-b92b-aa690d360c6e" containerName="dnsmasq-dns"
Feb 26 22:17:58 crc kubenswrapper[4910]: E0226 22:17:58.581261 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64d0c766-199e-40bf-b21c-ed64d433a17d" containerName="barbican-keystone-listener"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.581268 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="64d0c766-199e-40bf-b21c-ed64d433a17d" containerName="barbican-keystone-listener"
Feb 26 22:17:58 crc kubenswrapper[4910]: E0226 22:17:58.581280 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c41a76f7-e78e-400a-b92b-aa690d360c6e" containerName="init"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.581286 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="c41a76f7-e78e-400a-b92b-aa690d360c6e" containerName="init"
Feb 26 22:17:58 crc kubenswrapper[4910]: E0226 22:17:58.581296 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42767c30-5b6b-4df6-9237-962c97165901" containerName="barbican-worker"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.581302 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="42767c30-5b6b-4df6-9237-962c97165901" containerName="barbican-worker"
Feb 26 22:17:58 crc kubenswrapper[4910]: E0226 22:17:58.581316 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="865f4842-373e-4bc9-98cd-4ceabb03b9f9" containerName="cloudkitty-db-sync"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.581321 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="865f4842-373e-4bc9-98cd-4ceabb03b9f9" containerName="cloudkitty-db-sync"
Feb 26 22:17:58 crc kubenswrapper[4910]: E0226 22:17:58.581335 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae6f131-7b0b-4146-a1ed-640ad6302dca" containerName="barbican-api-log"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.581342 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae6f131-7b0b-4146-a1ed-640ad6302dca" containerName="barbican-api-log"
Feb 26 22:17:58 crc kubenswrapper[4910]: E0226 22:17:58.581384 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64d0c766-199e-40bf-b21c-ed64d433a17d" containerName="barbican-keystone-listener-log"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.581392 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="64d0c766-199e-40bf-b21c-ed64d433a17d" containerName="barbican-keystone-listener-log"
Feb 26 22:17:58 crc kubenswrapper[4910]: E0226 22:17:58.581407 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42767c30-5b6b-4df6-9237-962c97165901" containerName="barbican-worker-log"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.581413 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="42767c30-5b6b-4df6-9237-962c97165901" containerName="barbican-worker-log"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.581639 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="64d0c766-199e-40bf-b21c-ed64d433a17d" containerName="barbican-keystone-listener"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.581652 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="c41a76f7-e78e-400a-b92b-aa690d360c6e" containerName="dnsmasq-dns"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.581662 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae6f131-7b0b-4146-a1ed-640ad6302dca" containerName="barbican-api"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.581688 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="64d0c766-199e-40bf-b21c-ed64d433a17d" containerName="barbican-keystone-listener-log"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.581698 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="865f4842-373e-4bc9-98cd-4ceabb03b9f9" containerName="cloudkitty-db-sync"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.581706 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="42767c30-5b6b-4df6-9237-962c97165901" containerName="barbican-worker-log"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.581714 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="42767c30-5b6b-4df6-9237-962c97165901" containerName="barbican-worker"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.581729 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae6f131-7b0b-4146-a1ed-640ad6302dca" containerName="barbican-api-log"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.581738 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="7088af71-8215-43a1-b8e9-a23d8ff28d96" containerName="neutron-httpd"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.581763 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="7088af71-8215-43a1-b8e9-a23d8ff28d96" containerName="neutron-api"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.582858 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-nwmcv"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.586440 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-2pm9n"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.586973 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.588405 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.588732 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.588897 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.619798 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-nwmcv"]
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.674228 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-mwkqc"]
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.686822 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-mwkqc"]
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.690261 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/b8bef8b9-b550-4583-919c-cf05871439ad-certs\") pod \"cloudkitty-storageinit-nwmcv\" (UID: \"b8bef8b9-b550-4583-919c-cf05871439ad\") " pod="openstack/cloudkitty-storageinit-nwmcv"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.690319 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c54zk\" (UniqueName: \"kubernetes.io/projected/b8bef8b9-b550-4583-919c-cf05871439ad-kube-api-access-c54zk\") pod \"cloudkitty-storageinit-nwmcv\" (UID: \"b8bef8b9-b550-4583-919c-cf05871439ad\") " pod="openstack/cloudkitty-storageinit-nwmcv"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.690401 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8bef8b9-b550-4583-919c-cf05871439ad-combined-ca-bundle\") pod \"cloudkitty-storageinit-nwmcv\" (UID: \"b8bef8b9-b550-4583-919c-cf05871439ad\") " pod="openstack/cloudkitty-storageinit-nwmcv"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.690485 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8bef8b9-b550-4583-919c-cf05871439ad-config-data\") pod \"cloudkitty-storageinit-nwmcv\" (UID: \"b8bef8b9-b550-4583-919c-cf05871439ad\") " pod="openstack/cloudkitty-storageinit-nwmcv"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.690516 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8bef8b9-b550-4583-919c-cf05871439ad-scripts\") pod \"cloudkitty-storageinit-nwmcv\" (UID: \"b8bef8b9-b550-4583-919c-cf05871439ad\") " pod="openstack/cloudkitty-storageinit-nwmcv"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.694965 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5dd849cb94-qw888"]
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.701684 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-5dd849cb94-qw888"]
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.792339 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8bef8b9-b550-4583-919c-cf05871439ad-combined-ca-bundle\") pod \"cloudkitty-storageinit-nwmcv\" (UID: \"b8bef8b9-b550-4583-919c-cf05871439ad\") " pod="openstack/cloudkitty-storageinit-nwmcv"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.792861 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8bef8b9-b550-4583-919c-cf05871439ad-config-data\") pod \"cloudkitty-storageinit-nwmcv\" (UID: \"b8bef8b9-b550-4583-919c-cf05871439ad\") " pod="openstack/cloudkitty-storageinit-nwmcv"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.792981 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8bef8b9-b550-4583-919c-cf05871439ad-scripts\") pod \"cloudkitty-storageinit-nwmcv\" (UID: \"b8bef8b9-b550-4583-919c-cf05871439ad\") " pod="openstack/cloudkitty-storageinit-nwmcv"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.793091 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/b8bef8b9-b550-4583-919c-cf05871439ad-certs\") pod \"cloudkitty-storageinit-nwmcv\" (UID: \"b8bef8b9-b550-4583-919c-cf05871439ad\") " pod="openstack/cloudkitty-storageinit-nwmcv"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.793203 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c54zk\" (UniqueName: \"kubernetes.io/projected/b8bef8b9-b550-4583-919c-cf05871439ad-kube-api-access-c54zk\") pod \"cloudkitty-storageinit-nwmcv\" (UID: \"b8bef8b9-b550-4583-919c-cf05871439ad\") " pod="openstack/cloudkitty-storageinit-nwmcv"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.802865 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8bef8b9-b550-4583-919c-cf05871439ad-scripts\") pod \"cloudkitty-storageinit-nwmcv\" (UID: \"b8bef8b9-b550-4583-919c-cf05871439ad\") " pod="openstack/cloudkitty-storageinit-nwmcv"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.810331 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8bef8b9-b550-4583-919c-cf05871439ad-config-data\") pod \"cloudkitty-storageinit-nwmcv\" (UID: \"b8bef8b9-b550-4583-919c-cf05871439ad\") " pod="openstack/cloudkitty-storageinit-nwmcv"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.812013 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/b8bef8b9-b550-4583-919c-cf05871439ad-certs\") pod \"cloudkitty-storageinit-nwmcv\" (UID: \"b8bef8b9-b550-4583-919c-cf05871439ad\") " pod="openstack/cloudkitty-storageinit-nwmcv"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.820771 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c54zk\" (UniqueName: \"kubernetes.io/projected/b8bef8b9-b550-4583-919c-cf05871439ad-kube-api-access-c54zk\") pod \"cloudkitty-storageinit-nwmcv\" (UID: \"b8bef8b9-b550-4583-919c-cf05871439ad\") " pod="openstack/cloudkitty-storageinit-nwmcv"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.825817 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8bef8b9-b550-4583-919c-cf05871439ad-combined-ca-bundle\") pod \"cloudkitty-storageinit-nwmcv\" (UID: \"b8bef8b9-b550-4583-919c-cf05871439ad\") " pod="openstack/cloudkitty-storageinit-nwmcv"
Feb 26 22:17:58 crc kubenswrapper[4910]: I0226 22:17:58.967608 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-nwmcv"
Feb 26 22:17:59 crc kubenswrapper[4910]: I0226 22:17:59.084030 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Feb 26 22:17:59 crc kubenswrapper[4910]: I0226 22:17:59.301056 4910 generic.go:334] "Generic (PLEG): container finished" podID="1959169a-37cd-4aa3-9cf4-cbbdc99dde4f" containerID="86ba7cfa670f936b2c22be3638236e7de871d9a239017e74755304996f2a06cd" exitCode=2
Feb 26 22:17:59 crc kubenswrapper[4910]: I0226 22:17:59.301485 4910 generic.go:334] "Generic (PLEG): container finished" podID="1959169a-37cd-4aa3-9cf4-cbbdc99dde4f" containerID="03e2c6f282e58c8aadcfe18c580736f090b601212f9feb71ba11569170b03144" exitCode=0
Feb 26 22:17:59 crc kubenswrapper[4910]: I0226 22:17:59.301229 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f","Type":"ContainerDied","Data":"86ba7cfa670f936b2c22be3638236e7de871d9a239017e74755304996f2a06cd"}
Feb 26 22:17:59 crc kubenswrapper[4910]: I0226 22:17:59.301630 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f","Type":"ContainerDied","Data":"03e2c6f282e58c8aadcfe18c580736f090b601212f9feb71ba11569170b03144"}
Feb 26 22:17:59 crc kubenswrapper[4910]: W0226 22:17:59.501482 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8bef8b9_b550_4583_919c_cf05871439ad.slice/crio-41f38cd58a2819fe0dd4a7d51effe2344ae9c9705668cc27d79ec55d3c67a7a3 WatchSource:0}: Error finding container 41f38cd58a2819fe0dd4a7d51effe2344ae9c9705668cc27d79ec55d3c67a7a3: Status 404 returned error can't find the container with id 41f38cd58a2819fe0dd4a7d51effe2344ae9c9705668cc27d79ec55d3c67a7a3
Feb 26 22:17:59 crc kubenswrapper[4910]: I0226 22:17:59.507818 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-nwmcv"]
Feb 26 22:17:59 crc kubenswrapper[4910]: I0226 22:17:59.538051 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-68496f8578-p9hfm"
Feb 26 22:17:59 crc kubenswrapper[4910]: I0226 22:17:59.836086 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-68496f8578-p9hfm"
Feb 26 22:17:59 crc kubenswrapper[4910]: I0226 22:17:59.920150 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42767c30-5b6b-4df6-9237-962c97165901" path="/var/lib/kubelet/pods/42767c30-5b6b-4df6-9237-962c97165901/volumes"
Feb 26 22:17:59 crc kubenswrapper[4910]: I0226 22:17:59.922282 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64d0c766-199e-40bf-b21c-ed64d433a17d" path="/var/lib/kubelet/pods/64d0c766-199e-40bf-b21c-ed64d433a17d/volumes"
Feb 26 22:17:59 crc kubenswrapper[4910]: I0226 22:17:59.923109 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c41a76f7-e78e-400a-b92b-aa690d360c6e" path="/var/lib/kubelet/pods/c41a76f7-e78e-400a-b92b-aa690d360c6e/volumes"
Feb 26 22:17:59 crc kubenswrapper[4910]: I0226 22:17:59.927101 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6f66df7474-bqlkd"]
Feb 26 22:17:59 crc kubenswrapper[4910]: I0226 22:17:59.927410 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6f66df7474-bqlkd" podUID="25964a73-e44b-492b-9c2d-35c2ae2e935a" containerName="barbican-api-log" containerID="cri-o://588e7915757fa0a7f11fefcbff6a175b3201041fb8ec3acaac11fbc2bf555c80" gracePeriod=30
Feb 26 22:17:59 crc kubenswrapper[4910]: I0226 22:17:59.927578 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6f66df7474-bqlkd" podUID="25964a73-e44b-492b-9c2d-35c2ae2e935a" containerName="barbican-api" containerID="cri-o://ae32a9ba55280c8fb8a0963be3e71d0adedd1c612f3b9db2f7265ee90c42c365" gracePeriod=30
Feb 26 22:18:00 crc kubenswrapper[4910]: I0226 22:18:00.137217 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535738-725f8"]
Feb 26 22:18:00 crc kubenswrapper[4910]: I0226 22:18:00.138832 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535738-725f8"
Feb 26 22:18:00 crc kubenswrapper[4910]: I0226 22:18:00.142806 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 22:18:00 crc kubenswrapper[4910]: I0226 22:18:00.143430 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s"
Feb 26 22:18:00 crc kubenswrapper[4910]: I0226 22:18:00.143501 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 22:18:00 crc kubenswrapper[4910]: I0226 22:18:00.145709 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535738-725f8"]
Feb 26 22:18:00 crc kubenswrapper[4910]: I0226 22:18:00.226522 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fblt\" (UniqueName: \"kubernetes.io/projected/427eef92-e6a1-48a4-99e1-98a78c269555-kube-api-access-8fblt\") pod \"auto-csr-approver-29535738-725f8\" (UID: \"427eef92-e6a1-48a4-99e1-98a78c269555\") " pod="openshift-infra/auto-csr-approver-29535738-725f8"
Feb 26 22:18:00 crc kubenswrapper[4910]: I0226 22:18:00.313455 4910 generic.go:334] "Generic (PLEG): container finished" podID="25964a73-e44b-492b-9c2d-35c2ae2e935a" containerID="588e7915757fa0a7f11fefcbff6a175b3201041fb8ec3acaac11fbc2bf555c80" exitCode=143
Feb 26 22:18:00 crc kubenswrapper[4910]: I0226 22:18:00.313693 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f66df7474-bqlkd" event={"ID":"25964a73-e44b-492b-9c2d-35c2ae2e935a","Type":"ContainerDied","Data":"588e7915757fa0a7f11fefcbff6a175b3201041fb8ec3acaac11fbc2bf555c80"}
Feb 26 22:18:00 crc kubenswrapper[4910]: I0226 22:18:00.316309 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-nwmcv" event={"ID":"b8bef8b9-b550-4583-919c-cf05871439ad","Type":"ContainerStarted","Data":"fbfb5c31dbc628fe128d7b1dc0d8026139c0d7c3f1dbb5f9339215a8a3f15e0f"}
Feb 26 22:18:00 crc kubenswrapper[4910]: I0226 22:18:00.316375 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-nwmcv" event={"ID":"b8bef8b9-b550-4583-919c-cf05871439ad","Type":"ContainerStarted","Data":"41f38cd58a2819fe0dd4a7d51effe2344ae9c9705668cc27d79ec55d3c67a7a3"}
Feb 26 22:18:00 crc kubenswrapper[4910]: I0226 22:18:00.328614 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fblt\" (UniqueName: \"kubernetes.io/projected/427eef92-e6a1-48a4-99e1-98a78c269555-kube-api-access-8fblt\") pod \"auto-csr-approver-29535738-725f8\" (UID: \"427eef92-e6a1-48a4-99e1-98a78c269555\") " pod="openshift-infra/auto-csr-approver-29535738-725f8"
Feb 26 22:18:00 crc kubenswrapper[4910]: I0226 22:18:00.344271 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-storageinit-nwmcv" podStartSLOduration=2.344251984 podStartE2EDuration="2.344251984s" podCreationTimestamp="2026-02-26 22:17:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:18:00.332114883 +0000 UTC m=+1365.411605424" watchObservedRunningTime="2026-02-26 22:18:00.344251984 +0000 UTC m=+1365.423742525"
Feb 26 22:18:00 crc kubenswrapper[4910]: I0226 22:18:00.352780 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fblt\" (UniqueName: \"kubernetes.io/projected/427eef92-e6a1-48a4-99e1-98a78c269555-kube-api-access-8fblt\") pod \"auto-csr-approver-29535738-725f8\" (UID: \"427eef92-e6a1-48a4-99e1-98a78c269555\") " pod="openshift-infra/auto-csr-approver-29535738-725f8"
Feb 26 22:18:00 crc kubenswrapper[4910]: I0226 22:18:00.470076 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535738-725f8"
Feb 26 22:18:01 crc kubenswrapper[4910]: I0226 22:18:01.177071 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535738-725f8"]
Feb 26 22:18:01 crc kubenswrapper[4910]: W0226 22:18:01.180419 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod427eef92_e6a1_48a4_99e1_98a78c269555.slice/crio-7384ea8e5ff6a2a18c7f7a53dcc2231642c253bf815eb86a2e69cfe1689fd1f3 WatchSource:0}: Error finding container 7384ea8e5ff6a2a18c7f7a53dcc2231642c253bf815eb86a2e69cfe1689fd1f3: Status 404 returned error can't find the container with id 7384ea8e5ff6a2a18c7f7a53dcc2231642c253bf815eb86a2e69cfe1689fd1f3
Feb 26 22:18:01 crc kubenswrapper[4910]: I0226 22:18:01.332534 4910 generic.go:334] "Generic (PLEG): container finished" podID="1959169a-37cd-4aa3-9cf4-cbbdc99dde4f" containerID="ee50a00d864892d417a029125914b9b85eb57efea05f62fda9f836ad89dd1c9a" exitCode=0
Feb 26 22:18:01 crc kubenswrapper[4910]: I0226 22:18:01.332603 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f","Type":"ContainerDied","Data":"ee50a00d864892d417a029125914b9b85eb57efea05f62fda9f836ad89dd1c9a"}
Feb 26 22:18:01 crc kubenswrapper[4910]: I0226 22:18:01.335103 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535738-725f8" event={"ID":"427eef92-e6a1-48a4-99e1-98a78c269555","Type":"ContainerStarted","Data":"7384ea8e5ff6a2a18c7f7a53dcc2231642c253bf815eb86a2e69cfe1689fd1f3"}
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.089585 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.177257 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae349f4d-1586-4ba1-9b81-e84503327e71-combined-ca-bundle\") pod \"ae349f4d-1586-4ba1-9b81-e84503327e71\" (UID: \"ae349f4d-1586-4ba1-9b81-e84503327e71\") "
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.177658 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae349f4d-1586-4ba1-9b81-e84503327e71-config-data-custom\") pod \"ae349f4d-1586-4ba1-9b81-e84503327e71\" (UID: \"ae349f4d-1586-4ba1-9b81-e84503327e71\") "
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.177688 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae349f4d-1586-4ba1-9b81-e84503327e71-etc-machine-id\") pod \"ae349f4d-1586-4ba1-9b81-e84503327e71\" (UID: \"ae349f4d-1586-4ba1-9b81-e84503327e71\") "
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.177709 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22clw\" (UniqueName: \"kubernetes.io/projected/ae349f4d-1586-4ba1-9b81-e84503327e71-kube-api-access-22clw\") pod \"ae349f4d-1586-4ba1-9b81-e84503327e71\" (UID: \"ae349f4d-1586-4ba1-9b81-e84503327e71\") "
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.177755 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae349f4d-1586-4ba1-9b81-e84503327e71-scripts\") pod \"ae349f4d-1586-4ba1-9b81-e84503327e71\" (UID: \"ae349f4d-1586-4ba1-9b81-e84503327e71\") "
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.177762 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae349f4d-1586-4ba1-9b81-e84503327e71-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ae349f4d-1586-4ba1-9b81-e84503327e71" (UID: "ae349f4d-1586-4ba1-9b81-e84503327e71"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.177777 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae349f4d-1586-4ba1-9b81-e84503327e71-config-data\") pod \"ae349f4d-1586-4ba1-9b81-e84503327e71\" (UID: \"ae349f4d-1586-4ba1-9b81-e84503327e71\") "
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.178673 4910 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae349f4d-1586-4ba1-9b81-e84503327e71-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.182760 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae349f4d-1586-4ba1-9b81-e84503327e71-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ae349f4d-1586-4ba1-9b81-e84503327e71" (UID: "ae349f4d-1586-4ba1-9b81-e84503327e71"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.183512 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae349f4d-1586-4ba1-9b81-e84503327e71-scripts" (OuterVolumeSpecName: "scripts") pod "ae349f4d-1586-4ba1-9b81-e84503327e71" (UID: "ae349f4d-1586-4ba1-9b81-e84503327e71"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.196331 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae349f4d-1586-4ba1-9b81-e84503327e71-kube-api-access-22clw" (OuterVolumeSpecName: "kube-api-access-22clw") pod "ae349f4d-1586-4ba1-9b81-e84503327e71" (UID: "ae349f4d-1586-4ba1-9b81-e84503327e71"). InnerVolumeSpecName "kube-api-access-22clw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.258957 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae349f4d-1586-4ba1-9b81-e84503327e71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae349f4d-1586-4ba1-9b81-e84503327e71" (UID: "ae349f4d-1586-4ba1-9b81-e84503327e71"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.280940 4910 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae349f4d-1586-4ba1-9b81-e84503327e71-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.280972 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22clw\" (UniqueName: \"kubernetes.io/projected/ae349f4d-1586-4ba1-9b81-e84503327e71-kube-api-access-22clw\") on node \"crc\" DevicePath \"\""
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.280982 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae349f4d-1586-4ba1-9b81-e84503327e71-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.280991 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae349f4d-1586-4ba1-9b81-e84503327e71-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.320396 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae349f4d-1586-4ba1-9b81-e84503327e71-config-data" (OuterVolumeSpecName: "config-data") pod "ae349f4d-1586-4ba1-9b81-e84503327e71" (UID: "ae349f4d-1586-4ba1-9b81-e84503327e71"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.359881 4910 generic.go:334] "Generic (PLEG): container finished" podID="ae349f4d-1586-4ba1-9b81-e84503327e71" containerID="955c7db0d15f254a3b835b605aeb75bad069eb7d6aabe76e51ad0ab90bce2758" exitCode=0
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.359958 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.359949 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ae349f4d-1586-4ba1-9b81-e84503327e71","Type":"ContainerDied","Data":"955c7db0d15f254a3b835b605aeb75bad069eb7d6aabe76e51ad0ab90bce2758"}
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.360109 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ae349f4d-1586-4ba1-9b81-e84503327e71","Type":"ContainerDied","Data":"ef9521bbc499bf6d47cfa6f62ba9207ece01a8b3a300466dbe47404855f093cd"}
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.360146 4910 scope.go:117] "RemoveContainer" containerID="88413cbb5a208e837107c464b6f74c98e1ce29bfcf603c3237428c06442d7c44"
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.362392 4910 generic.go:334] "Generic (PLEG): container finished" podID="b8bef8b9-b550-4583-919c-cf05871439ad" containerID="fbfb5c31dbc628fe128d7b1dc0d8026139c0d7c3f1dbb5f9339215a8a3f15e0f" exitCode=0
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.362429 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-nwmcv" event={"ID":"b8bef8b9-b550-4583-919c-cf05871439ad","Type":"ContainerDied","Data":"fbfb5c31dbc628fe128d7b1dc0d8026139c0d7c3f1dbb5f9339215a8a3f15e0f"}
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.383188 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae349f4d-1586-4ba1-9b81-e84503327e71-config-data\") on node \"crc\" DevicePath \"\""
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.413650 4910 scope.go:117] "RemoveContainer" containerID="955c7db0d15f254a3b835b605aeb75bad069eb7d6aabe76e51ad0ab90bce2758"
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.418266 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.435473 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.444385 4910 scope.go:117] "RemoveContainer" containerID="88413cbb5a208e837107c464b6f74c98e1ce29bfcf603c3237428c06442d7c44"
Feb 26 22:18:02 crc kubenswrapper[4910]: E0226 22:18:02.444934 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88413cbb5a208e837107c464b6f74c98e1ce29bfcf603c3237428c06442d7c44\": container with ID starting with 88413cbb5a208e837107c464b6f74c98e1ce29bfcf603c3237428c06442d7c44 not found: ID does not exist" containerID="88413cbb5a208e837107c464b6f74c98e1ce29bfcf603c3237428c06442d7c44"
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.444980 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88413cbb5a208e837107c464b6f74c98e1ce29bfcf603c3237428c06442d7c44"} err="failed to get container status \"88413cbb5a208e837107c464b6f74c98e1ce29bfcf603c3237428c06442d7c44\": rpc error: code = NotFound desc = could not find container \"88413cbb5a208e837107c464b6f74c98e1ce29bfcf603c3237428c06442d7c44\": container with ID starting with 88413cbb5a208e837107c464b6f74c98e1ce29bfcf603c3237428c06442d7c44 not found: ID does not exist"
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.445004 4910 scope.go:117] "RemoveContainer" containerID="955c7db0d15f254a3b835b605aeb75bad069eb7d6aabe76e51ad0ab90bce2758"
Feb 26 22:18:02 crc kubenswrapper[4910]: E0226 22:18:02.445483 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"955c7db0d15f254a3b835b605aeb75bad069eb7d6aabe76e51ad0ab90bce2758\": container with ID starting with 955c7db0d15f254a3b835b605aeb75bad069eb7d6aabe76e51ad0ab90bce2758 not found: ID does not exist" containerID="955c7db0d15f254a3b835b605aeb75bad069eb7d6aabe76e51ad0ab90bce2758"
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.445508 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"955c7db0d15f254a3b835b605aeb75bad069eb7d6aabe76e51ad0ab90bce2758"} err="failed to get container status \"955c7db0d15f254a3b835b605aeb75bad069eb7d6aabe76e51ad0ab90bce2758\": rpc error: code = NotFound desc = could not find container \"955c7db0d15f254a3b835b605aeb75bad069eb7d6aabe76e51ad0ab90bce2758\": container with ID starting with 955c7db0d15f254a3b835b605aeb75bad069eb7d6aabe76e51ad0ab90bce2758 not found: ID does not exist"
Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.446014 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 26 22:18:02 crc kubenswrapper[4910]: E0226 22:18:02.446530 4910 cpu_manager.go:410] "RemoveStaleState: removing container"
podUID="ae349f4d-1586-4ba1-9b81-e84503327e71" containerName="cinder-scheduler" Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.446552 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae349f4d-1586-4ba1-9b81-e84503327e71" containerName="cinder-scheduler" Feb 26 22:18:02 crc kubenswrapper[4910]: E0226 22:18:02.446577 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae349f4d-1586-4ba1-9b81-e84503327e71" containerName="probe" Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.446586 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae349f4d-1586-4ba1-9b81-e84503327e71" containerName="probe" Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.446817 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae349f4d-1586-4ba1-9b81-e84503327e71" containerName="probe" Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.446855 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae349f4d-1586-4ba1-9b81-e84503327e71" containerName="cinder-scheduler" Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.448033 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.450268 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.456330 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.598777 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncwfv\" (UniqueName: \"kubernetes.io/projected/9d19a84d-6bcc-4eac-b319-cfcad44d541b-kube-api-access-ncwfv\") pod \"cinder-scheduler-0\" (UID: \"9d19a84d-6bcc-4eac-b319-cfcad44d541b\") " pod="openstack/cinder-scheduler-0" Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.599049 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d19a84d-6bcc-4eac-b319-cfcad44d541b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9d19a84d-6bcc-4eac-b319-cfcad44d541b\") " pod="openstack/cinder-scheduler-0" Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.599131 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d19a84d-6bcc-4eac-b319-cfcad44d541b-scripts\") pod \"cinder-scheduler-0\" (UID: \"9d19a84d-6bcc-4eac-b319-cfcad44d541b\") " pod="openstack/cinder-scheduler-0" Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.599255 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d19a84d-6bcc-4eac-b319-cfcad44d541b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9d19a84d-6bcc-4eac-b319-cfcad44d541b\") " pod="openstack/cinder-scheduler-0" Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 
22:18:02.599528 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d19a84d-6bcc-4eac-b319-cfcad44d541b-config-data\") pod \"cinder-scheduler-0\" (UID: \"9d19a84d-6bcc-4eac-b319-cfcad44d541b\") " pod="openstack/cinder-scheduler-0" Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.599673 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d19a84d-6bcc-4eac-b319-cfcad44d541b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9d19a84d-6bcc-4eac-b319-cfcad44d541b\") " pod="openstack/cinder-scheduler-0" Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.701782 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d19a84d-6bcc-4eac-b319-cfcad44d541b-config-data\") pod \"cinder-scheduler-0\" (UID: \"9d19a84d-6bcc-4eac-b319-cfcad44d541b\") " pod="openstack/cinder-scheduler-0" Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.701855 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d19a84d-6bcc-4eac-b319-cfcad44d541b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9d19a84d-6bcc-4eac-b319-cfcad44d541b\") " pod="openstack/cinder-scheduler-0" Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.701894 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncwfv\" (UniqueName: \"kubernetes.io/projected/9d19a84d-6bcc-4eac-b319-cfcad44d541b-kube-api-access-ncwfv\") pod \"cinder-scheduler-0\" (UID: \"9d19a84d-6bcc-4eac-b319-cfcad44d541b\") " pod="openstack/cinder-scheduler-0" Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.701950 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d19a84d-6bcc-4eac-b319-cfcad44d541b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9d19a84d-6bcc-4eac-b319-cfcad44d541b\") " pod="openstack/cinder-scheduler-0" Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.701971 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d19a84d-6bcc-4eac-b319-cfcad44d541b-scripts\") pod \"cinder-scheduler-0\" (UID: \"9d19a84d-6bcc-4eac-b319-cfcad44d541b\") " pod="openstack/cinder-scheduler-0" Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.702002 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d19a84d-6bcc-4eac-b319-cfcad44d541b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9d19a84d-6bcc-4eac-b319-cfcad44d541b\") " pod="openstack/cinder-scheduler-0" Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.702115 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d19a84d-6bcc-4eac-b319-cfcad44d541b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9d19a84d-6bcc-4eac-b319-cfcad44d541b\") " pod="openstack/cinder-scheduler-0" Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.706490 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d19a84d-6bcc-4eac-b319-cfcad44d541b-scripts\") pod \"cinder-scheduler-0\" (UID: \"9d19a84d-6bcc-4eac-b319-cfcad44d541b\") " pod="openstack/cinder-scheduler-0" Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.706841 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d19a84d-6bcc-4eac-b319-cfcad44d541b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9d19a84d-6bcc-4eac-b319-cfcad44d541b\") " 
pod="openstack/cinder-scheduler-0" Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.707046 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d19a84d-6bcc-4eac-b319-cfcad44d541b-config-data\") pod \"cinder-scheduler-0\" (UID: \"9d19a84d-6bcc-4eac-b319-cfcad44d541b\") " pod="openstack/cinder-scheduler-0" Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.710050 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d19a84d-6bcc-4eac-b319-cfcad44d541b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9d19a84d-6bcc-4eac-b319-cfcad44d541b\") " pod="openstack/cinder-scheduler-0" Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.723418 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncwfv\" (UniqueName: \"kubernetes.io/projected/9d19a84d-6bcc-4eac-b319-cfcad44d541b-kube-api-access-ncwfv\") pod \"cinder-scheduler-0\" (UID: \"9d19a84d-6bcc-4eac-b319-cfcad44d541b\") " pod="openstack/cinder-scheduler-0" Feb 26 22:18:02 crc kubenswrapper[4910]: I0226 22:18:02.788989 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.103796 4910 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6f66df7474-bqlkd" podUID="25964a73-e44b-492b-9c2d-35c2ae2e935a" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.191:9311/healthcheck\": read tcp 10.217.0.2:60538->10.217.0.191:9311: read: connection reset by peer" Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.103862 4910 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6f66df7474-bqlkd" podUID="25964a73-e44b-492b-9c2d-35c2ae2e935a" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.191:9311/healthcheck\": read tcp 10.217.0.2:60540->10.217.0.191:9311: read: connection reset by peer" Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.273094 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.392410 4910 generic.go:334] "Generic (PLEG): container finished" podID="427eef92-e6a1-48a4-99e1-98a78c269555" containerID="0430a4d83fcac8ffb1e4618fe687cba42e610f121aaea2f823b9ccda1a479579" exitCode=0 Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.392475 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535738-725f8" event={"ID":"427eef92-e6a1-48a4-99e1-98a78c269555","Type":"ContainerDied","Data":"0430a4d83fcac8ffb1e4618fe687cba42e610f121aaea2f823b9ccda1a479579"} Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.399267 4910 generic.go:334] "Generic (PLEG): container finished" podID="25964a73-e44b-492b-9c2d-35c2ae2e935a" containerID="ae32a9ba55280c8fb8a0963be3e71d0adedd1c612f3b9db2f7265ee90c42c365" exitCode=0 Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.399322 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-6f66df7474-bqlkd" event={"ID":"25964a73-e44b-492b-9c2d-35c2ae2e935a","Type":"ContainerDied","Data":"ae32a9ba55280c8fb8a0963be3e71d0adedd1c612f3b9db2f7265ee90c42c365"} Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.400627 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9d19a84d-6bcc-4eac-b319-cfcad44d541b","Type":"ContainerStarted","Data":"ef80aca8f414460dfbf747afb67df19d65c6f49015f8084eb7599968248352dc"} Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.476333 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6f66df7474-bqlkd" Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.621091 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25964a73-e44b-492b-9c2d-35c2ae2e935a-config-data\") pod \"25964a73-e44b-492b-9c2d-35c2ae2e935a\" (UID: \"25964a73-e44b-492b-9c2d-35c2ae2e935a\") " Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.621206 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd7bs\" (UniqueName: \"kubernetes.io/projected/25964a73-e44b-492b-9c2d-35c2ae2e935a-kube-api-access-gd7bs\") pod \"25964a73-e44b-492b-9c2d-35c2ae2e935a\" (UID: \"25964a73-e44b-492b-9c2d-35c2ae2e935a\") " Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.621458 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25964a73-e44b-492b-9c2d-35c2ae2e935a-combined-ca-bundle\") pod \"25964a73-e44b-492b-9c2d-35c2ae2e935a\" (UID: \"25964a73-e44b-492b-9c2d-35c2ae2e935a\") " Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.621514 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/25964a73-e44b-492b-9c2d-35c2ae2e935a-logs\") pod \"25964a73-e44b-492b-9c2d-35c2ae2e935a\" (UID: \"25964a73-e44b-492b-9c2d-35c2ae2e935a\") " Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.621567 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25964a73-e44b-492b-9c2d-35c2ae2e935a-config-data-custom\") pod \"25964a73-e44b-492b-9c2d-35c2ae2e935a\" (UID: \"25964a73-e44b-492b-9c2d-35c2ae2e935a\") " Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.622765 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25964a73-e44b-492b-9c2d-35c2ae2e935a-logs" (OuterVolumeSpecName: "logs") pod "25964a73-e44b-492b-9c2d-35c2ae2e935a" (UID: "25964a73-e44b-492b-9c2d-35c2ae2e935a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.633056 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25964a73-e44b-492b-9c2d-35c2ae2e935a-kube-api-access-gd7bs" (OuterVolumeSpecName: "kube-api-access-gd7bs") pod "25964a73-e44b-492b-9c2d-35c2ae2e935a" (UID: "25964a73-e44b-492b-9c2d-35c2ae2e935a"). InnerVolumeSpecName "kube-api-access-gd7bs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.637568 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25964a73-e44b-492b-9c2d-35c2ae2e935a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "25964a73-e44b-492b-9c2d-35c2ae2e935a" (UID: "25964a73-e44b-492b-9c2d-35c2ae2e935a"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.678747 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25964a73-e44b-492b-9c2d-35c2ae2e935a-config-data" (OuterVolumeSpecName: "config-data") pod "25964a73-e44b-492b-9c2d-35c2ae2e935a" (UID: "25964a73-e44b-492b-9c2d-35c2ae2e935a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.684253 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25964a73-e44b-492b-9c2d-35c2ae2e935a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25964a73-e44b-492b-9c2d-35c2ae2e935a" (UID: "25964a73-e44b-492b-9c2d-35c2ae2e935a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.724258 4910 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25964a73-e44b-492b-9c2d-35c2ae2e935a-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.724294 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25964a73-e44b-492b-9c2d-35c2ae2e935a-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.724303 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd7bs\" (UniqueName: \"kubernetes.io/projected/25964a73-e44b-492b-9c2d-35c2ae2e935a-kube-api-access-gd7bs\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.724313 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25964a73-e44b-492b-9c2d-35c2ae2e935a-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.724323 4910 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25964a73-e44b-492b-9c2d-35c2ae2e935a-logs\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.820108 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-nwmcv" Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.927764 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8bef8b9-b550-4583-919c-cf05871439ad-combined-ca-bundle\") pod \"b8bef8b9-b550-4583-919c-cf05871439ad\" (UID: \"b8bef8b9-b550-4583-919c-cf05871439ad\") " Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.927845 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/b8bef8b9-b550-4583-919c-cf05871439ad-certs\") pod \"b8bef8b9-b550-4583-919c-cf05871439ad\" (UID: \"b8bef8b9-b550-4583-919c-cf05871439ad\") " Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.927884 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c54zk\" (UniqueName: \"kubernetes.io/projected/b8bef8b9-b550-4583-919c-cf05871439ad-kube-api-access-c54zk\") pod \"b8bef8b9-b550-4583-919c-cf05871439ad\" (UID: \"b8bef8b9-b550-4583-919c-cf05871439ad\") " Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.927918 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8bef8b9-b550-4583-919c-cf05871439ad-config-data\") pod \"b8bef8b9-b550-4583-919c-cf05871439ad\" (UID: \"b8bef8b9-b550-4583-919c-cf05871439ad\") " Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.928091 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8bef8b9-b550-4583-919c-cf05871439ad-scripts\") pod \"b8bef8b9-b550-4583-919c-cf05871439ad\" (UID: \"b8bef8b9-b550-4583-919c-cf05871439ad\") " Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.936277 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae349f4d-1586-4ba1-9b81-e84503327e71" path="/var/lib/kubelet/pods/ae349f4d-1586-4ba1-9b81-e84503327e71/volumes" Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.943725 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8bef8b9-b550-4583-919c-cf05871439ad-kube-api-access-c54zk" (OuterVolumeSpecName: "kube-api-access-c54zk") pod "b8bef8b9-b550-4583-919c-cf05871439ad" (UID: "b8bef8b9-b550-4583-919c-cf05871439ad"). InnerVolumeSpecName "kube-api-access-c54zk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.952021 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8bef8b9-b550-4583-919c-cf05871439ad-certs" (OuterVolumeSpecName: "certs") pod "b8bef8b9-b550-4583-919c-cf05871439ad" (UID: "b8bef8b9-b550-4583-919c-cf05871439ad"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:18:03 crc kubenswrapper[4910]: I0226 22:18:03.998550 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8bef8b9-b550-4583-919c-cf05871439ad-scripts" (OuterVolumeSpecName: "scripts") pod "b8bef8b9-b550-4583-919c-cf05871439ad" (UID: "b8bef8b9-b550-4583-919c-cf05871439ad"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.041326 4910 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/b8bef8b9-b550-4583-919c-cf05871439ad-certs\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.041356 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c54zk\" (UniqueName: \"kubernetes.io/projected/b8bef8b9-b550-4583-919c-cf05871439ad-kube-api-access-c54zk\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.041367 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8bef8b9-b550-4583-919c-cf05871439ad-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.051636 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8bef8b9-b550-4583-919c-cf05871439ad-config-data" (OuterVolumeSpecName: "config-data") pod "b8bef8b9-b550-4583-919c-cf05871439ad" (UID: "b8bef8b9-b550-4583-919c-cf05871439ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.055052 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8bef8b9-b550-4583-919c-cf05871439ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8bef8b9-b550-4583-919c-cf05871439ad" (UID: "b8bef8b9-b550-4583-919c-cf05871439ad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.144412 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8bef8b9-b550-4583-919c-cf05871439ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.144451 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8bef8b9-b550-4583-919c-cf05871439ad-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.414986 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-nwmcv" event={"ID":"b8bef8b9-b550-4583-919c-cf05871439ad","Type":"ContainerDied","Data":"41f38cd58a2819fe0dd4a7d51effe2344ae9c9705668cc27d79ec55d3c67a7a3"} Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.415285 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41f38cd58a2819fe0dd4a7d51effe2344ae9c9705668cc27d79ec55d3c67a7a3" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.415376 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-nwmcv" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.421476 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f66df7474-bqlkd" event={"ID":"25964a73-e44b-492b-9c2d-35c2ae2e935a","Type":"ContainerDied","Data":"5e3af63cae2fc82bfda54dab68ec436c35fe3fee7e0eee479db8ccbde8258e06"} Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.421516 4910 scope.go:117] "RemoveContainer" containerID="ae32a9ba55280c8fb8a0963be3e71d0adedd1c612f3b9db2f7265ee90c42c365" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.421612 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6f66df7474-bqlkd" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.423971 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9d19a84d-6bcc-4eac-b319-cfcad44d541b","Type":"ContainerStarted","Data":"29a472aadfddf54583db5855412a4429ada9d969377d27ea7949c0c3490f9d84"} Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.458314 4910 scope.go:117] "RemoveContainer" containerID="588e7915757fa0a7f11fefcbff6a175b3201041fb8ec3acaac11fbc2bf555c80" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.460720 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6f66df7474-bqlkd"] Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.472876 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6f66df7474-bqlkd"] Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.642039 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 26 22:18:04 crc kubenswrapper[4910]: E0226 22:18:04.642496 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8bef8b9-b550-4583-919c-cf05871439ad" containerName="cloudkitty-storageinit" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.642514 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8bef8b9-b550-4583-919c-cf05871439ad" containerName="cloudkitty-storageinit" Feb 26 22:18:04 crc kubenswrapper[4910]: E0226 22:18:04.642527 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25964a73-e44b-492b-9c2d-35c2ae2e935a" containerName="barbican-api-log" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.642534 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="25964a73-e44b-492b-9c2d-35c2ae2e935a" containerName="barbican-api-log" Feb 26 22:18:04 crc kubenswrapper[4910]: E0226 22:18:04.642543 4910 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="25964a73-e44b-492b-9c2d-35c2ae2e935a" containerName="barbican-api" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.642549 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="25964a73-e44b-492b-9c2d-35c2ae2e935a" containerName="barbican-api" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.642730 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="25964a73-e44b-492b-9c2d-35c2ae2e935a" containerName="barbican-api-log" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.642750 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8bef8b9-b550-4583-919c-cf05871439ad" containerName="cloudkitty-storageinit" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.642761 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="25964a73-e44b-492b-9c2d-35c2ae2e935a" containerName="barbican-api" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.643443 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.648809 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.648885 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.649038 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.649097 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.649185 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-2pm9n" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.664515 4910 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.719661 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-jlw2k"] Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.722076 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.767406 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-jlw2k"] Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.768565 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.768627 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-scripts\") pod \"cloudkitty-proc-0\" (UID: \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.768663 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-certs\") pod \"cloudkitty-proc-0\" (UID: \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.768694 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b2fr\" (UniqueName: \"kubernetes.io/projected/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-kube-api-access-5b2fr\") pod 
\"cloudkitty-proc-0\" (UID: \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.768748 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-config-data\") pod \"cloudkitty-proc-0\" (UID: \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.768778 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.873115 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b2fr\" (UniqueName: \"kubernetes.io/projected/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-kube-api-access-5b2fr\") pod \"cloudkitty-proc-0\" (UID: \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.873204 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-ovsdbserver-nb\") pod \"dnsmasq-dns-58bd69657f-jlw2k\" (UID: \"309e28ac-a722-4ea7-98e1-80c3dec84033\") " pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.873253 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-config-data\") pod \"cloudkitty-proc-0\" (UID: \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\") " 
pod="openstack/cloudkitty-proc-0" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.873279 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.873316 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-ovsdbserver-sb\") pod \"dnsmasq-dns-58bd69657f-jlw2k\" (UID: \"309e28ac-a722-4ea7-98e1-80c3dec84033\") " pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.873352 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.873373 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-dns-swift-storage-0\") pod \"dnsmasq-dns-58bd69657f-jlw2k\" (UID: \"309e28ac-a722-4ea7-98e1-80c3dec84033\") " pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.873393 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-config\") pod \"dnsmasq-dns-58bd69657f-jlw2k\" (UID: \"309e28ac-a722-4ea7-98e1-80c3dec84033\") " pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" Feb 
26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.873417 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-dns-svc\") pod \"dnsmasq-dns-58bd69657f-jlw2k\" (UID: \"309e28ac-a722-4ea7-98e1-80c3dec84033\") " pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.873446 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-scripts\") pod \"cloudkitty-proc-0\" (UID: \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.873468 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv8ls\" (UniqueName: \"kubernetes.io/projected/309e28ac-a722-4ea7-98e1-80c3dec84033-kube-api-access-hv8ls\") pod \"dnsmasq-dns-58bd69657f-jlw2k\" (UID: \"309e28ac-a722-4ea7-98e1-80c3dec84033\") " pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.873487 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-certs\") pod \"cloudkitty-proc-0\" (UID: \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.877226 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.878954 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.880450 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-config-data\") pod \"cloudkitty-proc-0\" (UID: \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.883451 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.888171 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.897130 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.897325 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-certs\") pod \"cloudkitty-proc-0\" (UID: \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.897997 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.919150 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-scripts\") pod \"cloudkitty-proc-0\" (UID: \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.919485 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b2fr\" (UniqueName: \"kubernetes.io/projected/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-kube-api-access-5b2fr\") pod \"cloudkitty-proc-0\" (UID: \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.975322 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/0679eca7-08b8-427b-af08-35d2bcdf742a-certs\") pod \"cloudkitty-api-0\" (UID: \"0679eca7-08b8-427b-af08-35d2bcdf742a\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.975414 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-ovsdbserver-sb\") pod \"dnsmasq-dns-58bd69657f-jlw2k\" (UID: \"309e28ac-a722-4ea7-98e1-80c3dec84033\") " pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.975453 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0679eca7-08b8-427b-af08-35d2bcdf742a-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"0679eca7-08b8-427b-af08-35d2bcdf742a\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.975471 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqrfh\" (UniqueName: \"kubernetes.io/projected/0679eca7-08b8-427b-af08-35d2bcdf742a-kube-api-access-pqrfh\") pod 
\"cloudkitty-api-0\" (UID: \"0679eca7-08b8-427b-af08-35d2bcdf742a\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.975548 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-dns-swift-storage-0\") pod \"dnsmasq-dns-58bd69657f-jlw2k\" (UID: \"309e28ac-a722-4ea7-98e1-80c3dec84033\") " pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.975573 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-config\") pod \"dnsmasq-dns-58bd69657f-jlw2k\" (UID: \"309e28ac-a722-4ea7-98e1-80c3dec84033\") " pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.975595 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-dns-svc\") pod \"dnsmasq-dns-58bd69657f-jlw2k\" (UID: \"309e28ac-a722-4ea7-98e1-80c3dec84033\") " pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.975631 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0679eca7-08b8-427b-af08-35d2bcdf742a-scripts\") pod \"cloudkitty-api-0\" (UID: \"0679eca7-08b8-427b-af08-35d2bcdf742a\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.975651 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv8ls\" (UniqueName: \"kubernetes.io/projected/309e28ac-a722-4ea7-98e1-80c3dec84033-kube-api-access-hv8ls\") pod \"dnsmasq-dns-58bd69657f-jlw2k\" (UID: \"309e28ac-a722-4ea7-98e1-80c3dec84033\") " 
pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.975675 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0679eca7-08b8-427b-af08-35d2bcdf742a-logs\") pod \"cloudkitty-api-0\" (UID: \"0679eca7-08b8-427b-af08-35d2bcdf742a\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.975690 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0679eca7-08b8-427b-af08-35d2bcdf742a-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"0679eca7-08b8-427b-af08-35d2bcdf742a\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.975734 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-ovsdbserver-nb\") pod \"dnsmasq-dns-58bd69657f-jlw2k\" (UID: \"309e28ac-a722-4ea7-98e1-80c3dec84033\") " pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.975753 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0679eca7-08b8-427b-af08-35d2bcdf742a-config-data\") pod \"cloudkitty-api-0\" (UID: \"0679eca7-08b8-427b-af08-35d2bcdf742a\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.976622 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-ovsdbserver-sb\") pod \"dnsmasq-dns-58bd69657f-jlw2k\" (UID: \"309e28ac-a722-4ea7-98e1-80c3dec84033\") " pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 
22:18:04.976920 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-dns-svc\") pod \"dnsmasq-dns-58bd69657f-jlw2k\" (UID: \"309e28ac-a722-4ea7-98e1-80c3dec84033\") " pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.977438 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-ovsdbserver-nb\") pod \"dnsmasq-dns-58bd69657f-jlw2k\" (UID: \"309e28ac-a722-4ea7-98e1-80c3dec84033\") " pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.977516 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-dns-swift-storage-0\") pod \"dnsmasq-dns-58bd69657f-jlw2k\" (UID: \"309e28ac-a722-4ea7-98e1-80c3dec84033\") " pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" Feb 26 22:18:04 crc kubenswrapper[4910]: I0226 22:18:04.977692 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-config\") pod \"dnsmasq-dns-58bd69657f-jlw2k\" (UID: \"309e28ac-a722-4ea7-98e1-80c3dec84033\") " pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.000892 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv8ls\" (UniqueName: \"kubernetes.io/projected/309e28ac-a722-4ea7-98e1-80c3dec84033-kube-api-access-hv8ls\") pod \"dnsmasq-dns-58bd69657f-jlw2k\" (UID: \"309e28ac-a722-4ea7-98e1-80c3dec84033\") " pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.005584 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.077674 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0679eca7-08b8-427b-af08-35d2bcdf742a-config-data\") pod \"cloudkitty-api-0\" (UID: \"0679eca7-08b8-427b-af08-35d2bcdf742a\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.077959 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/0679eca7-08b8-427b-af08-35d2bcdf742a-certs\") pod \"cloudkitty-api-0\" (UID: \"0679eca7-08b8-427b-af08-35d2bcdf742a\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.078077 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0679eca7-08b8-427b-af08-35d2bcdf742a-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"0679eca7-08b8-427b-af08-35d2bcdf742a\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.078119 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqrfh\" (UniqueName: \"kubernetes.io/projected/0679eca7-08b8-427b-af08-35d2bcdf742a-kube-api-access-pqrfh\") pod \"cloudkitty-api-0\" (UID: \"0679eca7-08b8-427b-af08-35d2bcdf742a\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.078228 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0679eca7-08b8-427b-af08-35d2bcdf742a-scripts\") pod \"cloudkitty-api-0\" (UID: \"0679eca7-08b8-427b-af08-35d2bcdf742a\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.078264 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0679eca7-08b8-427b-af08-35d2bcdf742a-logs\") pod \"cloudkitty-api-0\" (UID: \"0679eca7-08b8-427b-af08-35d2bcdf742a\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.078311 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0679eca7-08b8-427b-af08-35d2bcdf742a-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"0679eca7-08b8-427b-af08-35d2bcdf742a\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.080363 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0679eca7-08b8-427b-af08-35d2bcdf742a-logs\") pod \"cloudkitty-api-0\" (UID: \"0679eca7-08b8-427b-af08-35d2bcdf742a\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.083497 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0679eca7-08b8-427b-af08-35d2bcdf742a-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"0679eca7-08b8-427b-af08-35d2bcdf742a\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.084407 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0679eca7-08b8-427b-af08-35d2bcdf742a-scripts\") pod \"cloudkitty-api-0\" (UID: \"0679eca7-08b8-427b-af08-35d2bcdf742a\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.088368 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0679eca7-08b8-427b-af08-35d2bcdf742a-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"0679eca7-08b8-427b-af08-35d2bcdf742a\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:05 crc 
kubenswrapper[4910]: I0226 22:18:05.089175 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/0679eca7-08b8-427b-af08-35d2bcdf742a-certs\") pod \"cloudkitty-api-0\" (UID: \"0679eca7-08b8-427b-af08-35d2bcdf742a\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.096371 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.097569 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqrfh\" (UniqueName: \"kubernetes.io/projected/0679eca7-08b8-427b-af08-35d2bcdf742a-kube-api-access-pqrfh\") pod \"cloudkitty-api-0\" (UID: \"0679eca7-08b8-427b-af08-35d2bcdf742a\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.100850 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0679eca7-08b8-427b-af08-35d2bcdf742a-config-data\") pod \"cloudkitty-api-0\" (UID: \"0679eca7-08b8-427b-af08-35d2bcdf742a\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.207007 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.258041 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535738-725f8" Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.393386 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fblt\" (UniqueName: \"kubernetes.io/projected/427eef92-e6a1-48a4-99e1-98a78c269555-kube-api-access-8fblt\") pod \"427eef92-e6a1-48a4-99e1-98a78c269555\" (UID: \"427eef92-e6a1-48a4-99e1-98a78c269555\") " Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.430001 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/427eef92-e6a1-48a4-99e1-98a78c269555-kube-api-access-8fblt" (OuterVolumeSpecName: "kube-api-access-8fblt") pod "427eef92-e6a1-48a4-99e1-98a78c269555" (UID: "427eef92-e6a1-48a4-99e1-98a78c269555"). InnerVolumeSpecName "kube-api-access-8fblt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.451677 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535738-725f8" event={"ID":"427eef92-e6a1-48a4-99e1-98a78c269555","Type":"ContainerDied","Data":"7384ea8e5ff6a2a18c7f7a53dcc2231642c253bf815eb86a2e69cfe1689fd1f3"} Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.451713 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7384ea8e5ff6a2a18c7f7a53dcc2231642c253bf815eb86a2e69cfe1689fd1f3" Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.451767 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535738-725f8" Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.480648 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9d19a84d-6bcc-4eac-b319-cfcad44d541b","Type":"ContainerStarted","Data":"050698490c3bc15135fd21b62adcb52000cacbb6d9750014428fa4a59cf6f331"} Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.497213 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fblt\" (UniqueName: \"kubernetes.io/projected/427eef92-e6a1-48a4-99e1-98a78c269555-kube-api-access-8fblt\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.518762 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.518722089 podStartE2EDuration="3.518722089s" podCreationTimestamp="2026-02-26 22:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:18:05.50958915 +0000 UTC m=+1370.589079691" watchObservedRunningTime="2026-02-26 22:18:05.518722089 +0000 UTC m=+1370.598212630" Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.556395 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.753325 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-jlw2k"] Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.968484 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25964a73-e44b-492b-9c2d-35c2ae2e935a" path="/var/lib/kubelet/pods/25964a73-e44b-492b-9c2d-35c2ae2e935a/volumes" Feb 26 22:18:05 crc kubenswrapper[4910]: I0226 22:18:05.972147 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 22:18:06 crc kubenswrapper[4910]: I0226 
22:18:06.429029 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535732-dh8z5"] Feb 26 22:18:06 crc kubenswrapper[4910]: I0226 22:18:06.455116 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535732-dh8z5"] Feb 26 22:18:06 crc kubenswrapper[4910]: I0226 22:18:06.499413 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead","Type":"ContainerStarted","Data":"b2cfad853205afa1bf1f04b1ad2569e53bd3fa496d81365d1260f965dc6b889a"} Feb 26 22:18:06 crc kubenswrapper[4910]: I0226 22:18:06.509222 4910 generic.go:334] "Generic (PLEG): container finished" podID="309e28ac-a722-4ea7-98e1-80c3dec84033" containerID="3fd24aea6772d68cd5c89c240237371f9b93420d881dac202acc6e5317e20b15" exitCode=0 Feb 26 22:18:06 crc kubenswrapper[4910]: I0226 22:18:06.511371 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" event={"ID":"309e28ac-a722-4ea7-98e1-80c3dec84033","Type":"ContainerDied","Data":"3fd24aea6772d68cd5c89c240237371f9b93420d881dac202acc6e5317e20b15"} Feb 26 22:18:06 crc kubenswrapper[4910]: I0226 22:18:06.511425 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" event={"ID":"309e28ac-a722-4ea7-98e1-80c3dec84033","Type":"ContainerStarted","Data":"a1331aa32dd5248ec66c66dc3f08a0bd62eeba29a17c1df4a1ee819b71e8abfe"} Feb 26 22:18:06 crc kubenswrapper[4910]: I0226 22:18:06.520681 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"0679eca7-08b8-427b-af08-35d2bcdf742a","Type":"ContainerStarted","Data":"e5afc811f96b7daeeaaa6c7b83987d2fcfa94668ca6622b441be0c8f054e01c7"} Feb 26 22:18:06 crc kubenswrapper[4910]: I0226 22:18:06.520754 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" 
event={"ID":"0679eca7-08b8-427b-af08-35d2bcdf742a","Type":"ContainerStarted","Data":"ff0f913bc4477197065aee6db18329a4850d9397f57d2fa8a9a7e1736d750a60"} Feb 26 22:18:07 crc kubenswrapper[4910]: I0226 22:18:07.531580 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead","Type":"ContainerStarted","Data":"0c2328084ab92d9874c796673d3e39b70b38e67dafe235901c43b010b1129798"} Feb 26 22:18:07 crc kubenswrapper[4910]: I0226 22:18:07.534971 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" event={"ID":"309e28ac-a722-4ea7-98e1-80c3dec84033","Type":"ContainerStarted","Data":"92b49234641c728d38d0cad38dff539431254d4997763af7885ab2d9c5feb2aa"} Feb 26 22:18:07 crc kubenswrapper[4910]: I0226 22:18:07.535248 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" Feb 26 22:18:07 crc kubenswrapper[4910]: I0226 22:18:07.537516 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"0679eca7-08b8-427b-af08-35d2bcdf742a","Type":"ContainerStarted","Data":"2c298867e695ae8b1ee481182a804acfb4b7d46bff0f36477bf1c0739df7ad33"} Feb 26 22:18:07 crc kubenswrapper[4910]: I0226 22:18:07.537645 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Feb 26 22:18:07 crc kubenswrapper[4910]: I0226 22:18:07.556344 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=1.914380465 podStartE2EDuration="3.556320418s" podCreationTimestamp="2026-02-26 22:18:04 +0000 UTC" firstStartedPulling="2026-02-26 22:18:05.5682839 +0000 UTC m=+1370.647774441" lastFinishedPulling="2026-02-26 22:18:07.210223853 +0000 UTC m=+1372.289714394" observedRunningTime="2026-02-26 22:18:07.545991175 +0000 UTC m=+1372.625481716" watchObservedRunningTime="2026-02-26 22:18:07.556320418 
+0000 UTC m=+1372.635810949" Feb 26 22:18:07 crc kubenswrapper[4910]: I0226 22:18:07.576120 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=3.576104297 podStartE2EDuration="3.576104297s" podCreationTimestamp="2026-02-26 22:18:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:18:07.571329887 +0000 UTC m=+1372.650820428" watchObservedRunningTime="2026-02-26 22:18:07.576104297 +0000 UTC m=+1372.655594838" Feb 26 22:18:07 crc kubenswrapper[4910]: I0226 22:18:07.596921 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" podStartSLOduration=3.596902954 podStartE2EDuration="3.596902954s" podCreationTimestamp="2026-02-26 22:18:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:18:07.594316383 +0000 UTC m=+1372.673806924" watchObservedRunningTime="2026-02-26 22:18:07.596902954 +0000 UTC m=+1372.676393495" Feb 26 22:18:07 crc kubenswrapper[4910]: I0226 22:18:07.664889 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 26 22:18:07 crc kubenswrapper[4910]: I0226 22:18:07.720520 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 22:18:07 crc kubenswrapper[4910]: I0226 22:18:07.789130 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 26 22:18:07 crc kubenswrapper[4910]: I0226 22:18:07.941328 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f31eeefb-61ab-4fdb-a4d2-8e25a18cf655" path="/var/lib/kubelet/pods/f31eeefb-61ab-4fdb-a4d2-8e25a18cf655/volumes" Feb 26 22:18:09 crc kubenswrapper[4910]: I0226 22:18:09.555895 4910 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/cloudkitty-proc-0" podUID="fdfbe459-2ae5-4d85-9d94-7aeb0c845ead" containerName="cloudkitty-proc" containerID="cri-o://0c2328084ab92d9874c796673d3e39b70b38e67dafe235901c43b010b1129798" gracePeriod=30 Feb 26 22:18:09 crc kubenswrapper[4910]: I0226 22:18:09.555966 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="0679eca7-08b8-427b-af08-35d2bcdf742a" containerName="cloudkitty-api-log" containerID="cri-o://e5afc811f96b7daeeaaa6c7b83987d2fcfa94668ca6622b441be0c8f054e01c7" gracePeriod=30 Feb 26 22:18:09 crc kubenswrapper[4910]: I0226 22:18:09.556015 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="0679eca7-08b8-427b-af08-35d2bcdf742a" containerName="cloudkitty-api" containerID="cri-o://2c298867e695ae8b1ee481182a804acfb4b7d46bff0f36477bf1c0739df7ad33" gracePeriod=30 Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.216371 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.300328 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0679eca7-08b8-427b-af08-35d2bcdf742a-scripts\") pod \"0679eca7-08b8-427b-af08-35d2bcdf742a\" (UID: \"0679eca7-08b8-427b-af08-35d2bcdf742a\") " Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.301361 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0679eca7-08b8-427b-af08-35d2bcdf742a-logs\") pod \"0679eca7-08b8-427b-af08-35d2bcdf742a\" (UID: \"0679eca7-08b8-427b-af08-35d2bcdf742a\") " Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.301613 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0679eca7-08b8-427b-af08-35d2bcdf742a-logs" (OuterVolumeSpecName: "logs") pod "0679eca7-08b8-427b-af08-35d2bcdf742a" (UID: "0679eca7-08b8-427b-af08-35d2bcdf742a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.301645 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/0679eca7-08b8-427b-af08-35d2bcdf742a-certs\") pod \"0679eca7-08b8-427b-af08-35d2bcdf742a\" (UID: \"0679eca7-08b8-427b-af08-35d2bcdf742a\") " Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.301665 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqrfh\" (UniqueName: \"kubernetes.io/projected/0679eca7-08b8-427b-af08-35d2bcdf742a-kube-api-access-pqrfh\") pod \"0679eca7-08b8-427b-af08-35d2bcdf742a\" (UID: \"0679eca7-08b8-427b-af08-35d2bcdf742a\") " Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.301811 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0679eca7-08b8-427b-af08-35d2bcdf742a-combined-ca-bundle\") pod \"0679eca7-08b8-427b-af08-35d2bcdf742a\" (UID: \"0679eca7-08b8-427b-af08-35d2bcdf742a\") " Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.301834 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0679eca7-08b8-427b-af08-35d2bcdf742a-config-data-custom\") pod \"0679eca7-08b8-427b-af08-35d2bcdf742a\" (UID: \"0679eca7-08b8-427b-af08-35d2bcdf742a\") " Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.301855 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0679eca7-08b8-427b-af08-35d2bcdf742a-config-data\") pod \"0679eca7-08b8-427b-af08-35d2bcdf742a\" (UID: \"0679eca7-08b8-427b-af08-35d2bcdf742a\") " Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.302288 4910 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0679eca7-08b8-427b-af08-35d2bcdf742a-logs\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.307794 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0679eca7-08b8-427b-af08-35d2bcdf742a-certs" (OuterVolumeSpecName: "certs") pod "0679eca7-08b8-427b-af08-35d2bcdf742a" (UID: "0679eca7-08b8-427b-af08-35d2bcdf742a"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.307894 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0679eca7-08b8-427b-af08-35d2bcdf742a-kube-api-access-pqrfh" (OuterVolumeSpecName: "kube-api-access-pqrfh") pod "0679eca7-08b8-427b-af08-35d2bcdf742a" (UID: "0679eca7-08b8-427b-af08-35d2bcdf742a"). InnerVolumeSpecName "kube-api-access-pqrfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.308504 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0679eca7-08b8-427b-af08-35d2bcdf742a-scripts" (OuterVolumeSpecName: "scripts") pod "0679eca7-08b8-427b-af08-35d2bcdf742a" (UID: "0679eca7-08b8-427b-af08-35d2bcdf742a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.312895 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0679eca7-08b8-427b-af08-35d2bcdf742a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0679eca7-08b8-427b-af08-35d2bcdf742a" (UID: "0679eca7-08b8-427b-af08-35d2bcdf742a"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.337535 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0679eca7-08b8-427b-af08-35d2bcdf742a-config-data" (OuterVolumeSpecName: "config-data") pod "0679eca7-08b8-427b-af08-35d2bcdf742a" (UID: "0679eca7-08b8-427b-af08-35d2bcdf742a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.338282 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0679eca7-08b8-427b-af08-35d2bcdf742a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0679eca7-08b8-427b-af08-35d2bcdf742a" (UID: "0679eca7-08b8-427b-af08-35d2bcdf742a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.404981 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0679eca7-08b8-427b-af08-35d2bcdf742a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.405023 4910 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0679eca7-08b8-427b-af08-35d2bcdf742a-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.405039 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0679eca7-08b8-427b-af08-35d2bcdf742a-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.405054 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0679eca7-08b8-427b-af08-35d2bcdf742a-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:10 
crc kubenswrapper[4910]: I0226 22:18:10.405067 4910 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/0679eca7-08b8-427b-af08-35d2bcdf742a-certs\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.405082 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqrfh\" (UniqueName: \"kubernetes.io/projected/0679eca7-08b8-427b-af08-35d2bcdf742a-kube-api-access-pqrfh\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.570566 4910 generic.go:334] "Generic (PLEG): container finished" podID="0679eca7-08b8-427b-af08-35d2bcdf742a" containerID="2c298867e695ae8b1ee481182a804acfb4b7d46bff0f36477bf1c0739df7ad33" exitCode=0 Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.570612 4910 generic.go:334] "Generic (PLEG): container finished" podID="0679eca7-08b8-427b-af08-35d2bcdf742a" containerID="e5afc811f96b7daeeaaa6c7b83987d2fcfa94668ca6622b441be0c8f054e01c7" exitCode=143 Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.570659 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"0679eca7-08b8-427b-af08-35d2bcdf742a","Type":"ContainerDied","Data":"2c298867e695ae8b1ee481182a804acfb4b7d46bff0f36477bf1c0739df7ad33"} Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.570717 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"0679eca7-08b8-427b-af08-35d2bcdf742a","Type":"ContainerDied","Data":"e5afc811f96b7daeeaaa6c7b83987d2fcfa94668ca6622b441be0c8f054e01c7"} Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.570734 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"0679eca7-08b8-427b-af08-35d2bcdf742a","Type":"ContainerDied","Data":"ff0f913bc4477197065aee6db18329a4850d9397f57d2fa8a9a7e1736d750a60"} Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.570754 4910 
scope.go:117] "RemoveContainer" containerID="2c298867e695ae8b1ee481182a804acfb4b7d46bff0f36477bf1c0739df7ad33" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.573025 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.607527 4910 scope.go:117] "RemoveContainer" containerID="e5afc811f96b7daeeaaa6c7b83987d2fcfa94668ca6622b441be0c8f054e01c7" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.624627 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.640656 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.646836 4910 scope.go:117] "RemoveContainer" containerID="2c298867e695ae8b1ee481182a804acfb4b7d46bff0f36477bf1c0739df7ad33" Feb 26 22:18:10 crc kubenswrapper[4910]: E0226 22:18:10.647568 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c298867e695ae8b1ee481182a804acfb4b7d46bff0f36477bf1c0739df7ad33\": container with ID starting with 2c298867e695ae8b1ee481182a804acfb4b7d46bff0f36477bf1c0739df7ad33 not found: ID does not exist" containerID="2c298867e695ae8b1ee481182a804acfb4b7d46bff0f36477bf1c0739df7ad33" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.647617 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c298867e695ae8b1ee481182a804acfb4b7d46bff0f36477bf1c0739df7ad33"} err="failed to get container status \"2c298867e695ae8b1ee481182a804acfb4b7d46bff0f36477bf1c0739df7ad33\": rpc error: code = NotFound desc = could not find container \"2c298867e695ae8b1ee481182a804acfb4b7d46bff0f36477bf1c0739df7ad33\": container with ID starting with 2c298867e695ae8b1ee481182a804acfb4b7d46bff0f36477bf1c0739df7ad33 not found: ID does 
not exist" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.647652 4910 scope.go:117] "RemoveContainer" containerID="e5afc811f96b7daeeaaa6c7b83987d2fcfa94668ca6622b441be0c8f054e01c7" Feb 26 22:18:10 crc kubenswrapper[4910]: E0226 22:18:10.648454 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5afc811f96b7daeeaaa6c7b83987d2fcfa94668ca6622b441be0c8f054e01c7\": container with ID starting with e5afc811f96b7daeeaaa6c7b83987d2fcfa94668ca6622b441be0c8f054e01c7 not found: ID does not exist" containerID="e5afc811f96b7daeeaaa6c7b83987d2fcfa94668ca6622b441be0c8f054e01c7" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.648504 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5afc811f96b7daeeaaa6c7b83987d2fcfa94668ca6622b441be0c8f054e01c7"} err="failed to get container status \"e5afc811f96b7daeeaaa6c7b83987d2fcfa94668ca6622b441be0c8f054e01c7\": rpc error: code = NotFound desc = could not find container \"e5afc811f96b7daeeaaa6c7b83987d2fcfa94668ca6622b441be0c8f054e01c7\": container with ID starting with e5afc811f96b7daeeaaa6c7b83987d2fcfa94668ca6622b441be0c8f054e01c7 not found: ID does not exist" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.648530 4910 scope.go:117] "RemoveContainer" containerID="2c298867e695ae8b1ee481182a804acfb4b7d46bff0f36477bf1c0739df7ad33" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.648962 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c298867e695ae8b1ee481182a804acfb4b7d46bff0f36477bf1c0739df7ad33"} err="failed to get container status \"2c298867e695ae8b1ee481182a804acfb4b7d46bff0f36477bf1c0739df7ad33\": rpc error: code = NotFound desc = could not find container \"2c298867e695ae8b1ee481182a804acfb4b7d46bff0f36477bf1c0739df7ad33\": container with ID starting with 2c298867e695ae8b1ee481182a804acfb4b7d46bff0f36477bf1c0739df7ad33 not 
found: ID does not exist" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.649109 4910 scope.go:117] "RemoveContainer" containerID="e5afc811f96b7daeeaaa6c7b83987d2fcfa94668ca6622b441be0c8f054e01c7" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.649531 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5afc811f96b7daeeaaa6c7b83987d2fcfa94668ca6622b441be0c8f054e01c7"} err="failed to get container status \"e5afc811f96b7daeeaaa6c7b83987d2fcfa94668ca6622b441be0c8f054e01c7\": rpc error: code = NotFound desc = could not find container \"e5afc811f96b7daeeaaa6c7b83987d2fcfa94668ca6622b441be0c8f054e01c7\": container with ID starting with e5afc811f96b7daeeaaa6c7b83987d2fcfa94668ca6622b441be0c8f054e01c7 not found: ID does not exist" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.663537 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 22:18:10 crc kubenswrapper[4910]: E0226 22:18:10.664246 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="427eef92-e6a1-48a4-99e1-98a78c269555" containerName="oc" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.664275 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="427eef92-e6a1-48a4-99e1-98a78c269555" containerName="oc" Feb 26 22:18:10 crc kubenswrapper[4910]: E0226 22:18:10.664298 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0679eca7-08b8-427b-af08-35d2bcdf742a" containerName="cloudkitty-api-log" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.664311 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="0679eca7-08b8-427b-af08-35d2bcdf742a" containerName="cloudkitty-api-log" Feb 26 22:18:10 crc kubenswrapper[4910]: E0226 22:18:10.664345 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0679eca7-08b8-427b-af08-35d2bcdf742a" containerName="cloudkitty-api" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.664357 4910 
state_mem.go:107] "Deleted CPUSet assignment" podUID="0679eca7-08b8-427b-af08-35d2bcdf742a" containerName="cloudkitty-api" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.664674 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="0679eca7-08b8-427b-af08-35d2bcdf742a" containerName="cloudkitty-api-log" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.664700 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="427eef92-e6a1-48a4-99e1-98a78c269555" containerName="oc" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.664745 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="0679eca7-08b8-427b-af08-35d2bcdf742a" containerName="cloudkitty-api" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.666525 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.669304 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.671597 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-internal-svc" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.671837 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-public-svc" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.679036 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.712670 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-logs\") pod \"cloudkitty-api-0\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.712773 4910 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.712807 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-scripts\") pod \"cloudkitty-api-0\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.712829 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-config-data\") pod \"cloudkitty-api-0\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.712855 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.712887 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-certs\") pod \"cloudkitty-api-0\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.712932 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-xbt9s\" (UniqueName: \"kubernetes.io/projected/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-kube-api-access-xbt9s\") pod \"cloudkitty-api-0\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.712956 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.713030 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.815506 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-logs\") pod \"cloudkitty-api-0\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.815578 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.815609 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-scripts\") pod \"cloudkitty-api-0\" (UID: 
\"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.815631 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-config-data\") pod \"cloudkitty-api-0\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.815957 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-logs\") pod \"cloudkitty-api-0\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.815652 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.816334 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-certs\") pod \"cloudkitty-api-0\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.816393 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbt9s\" (UniqueName: \"kubernetes.io/projected/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-kube-api-access-xbt9s\") pod \"cloudkitty-api-0\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.816419 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.816515 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.820753 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.821028 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-config-data\") pod \"cloudkitty-api-0\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.821638 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.822599 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " 
pod="openstack/cloudkitty-api-0" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.826046 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-scripts\") pod \"cloudkitty-api-0\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.826433 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-certs\") pod \"cloudkitty-api-0\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.829073 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:10 crc kubenswrapper[4910]: I0226 22:18:10.841231 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbt9s\" (UniqueName: \"kubernetes.io/projected/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-kube-api-access-xbt9s\") pod \"cloudkitty-api-0\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " pod="openstack/cloudkitty-api-0" Feb 26 22:18:11 crc kubenswrapper[4910]: I0226 22:18:11.027172 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 26 22:18:11 crc kubenswrapper[4910]: I0226 22:18:11.581364 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 22:18:11 crc kubenswrapper[4910]: I0226 22:18:11.917612 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0679eca7-08b8-427b-af08-35d2bcdf742a" path="/var/lib/kubelet/pods/0679eca7-08b8-427b-af08-35d2bcdf742a/volumes" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.220255 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7b97844b46-5cn8n" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.221856 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7b97844b46-5cn8n" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.464901 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c5fdc687b-67zqb"] Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.466563 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c5fdc687b-67zqb" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.520116 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c5fdc687b-67zqb"] Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.557817 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adaa2732-718e-416c-abbb-d344cd1fbbf2-combined-ca-bundle\") pod \"placement-c5fdc687b-67zqb\" (UID: \"adaa2732-718e-416c-abbb-d344cd1fbbf2\") " pod="openstack/placement-c5fdc687b-67zqb" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.557888 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq5f9\" (UniqueName: \"kubernetes.io/projected/adaa2732-718e-416c-abbb-d344cd1fbbf2-kube-api-access-gq5f9\") pod \"placement-c5fdc687b-67zqb\" (UID: \"adaa2732-718e-416c-abbb-d344cd1fbbf2\") " pod="openstack/placement-c5fdc687b-67zqb" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.557928 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adaa2732-718e-416c-abbb-d344cd1fbbf2-scripts\") pod \"placement-c5fdc687b-67zqb\" (UID: \"adaa2732-718e-416c-abbb-d344cd1fbbf2\") " pod="openstack/placement-c5fdc687b-67zqb" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.557965 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/adaa2732-718e-416c-abbb-d344cd1fbbf2-public-tls-certs\") pod \"placement-c5fdc687b-67zqb\" (UID: \"adaa2732-718e-416c-abbb-d344cd1fbbf2\") " pod="openstack/placement-c5fdc687b-67zqb" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.557990 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adaa2732-718e-416c-abbb-d344cd1fbbf2-logs\") pod \"placement-c5fdc687b-67zqb\" (UID: \"adaa2732-718e-416c-abbb-d344cd1fbbf2\") " pod="openstack/placement-c5fdc687b-67zqb" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.558019 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/adaa2732-718e-416c-abbb-d344cd1fbbf2-internal-tls-certs\") pod \"placement-c5fdc687b-67zqb\" (UID: \"adaa2732-718e-416c-abbb-d344cd1fbbf2\") " pod="openstack/placement-c5fdc687b-67zqb" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.558070 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adaa2732-718e-416c-abbb-d344cd1fbbf2-config-data\") pod \"placement-c5fdc687b-67zqb\" (UID: \"adaa2732-718e-416c-abbb-d344cd1fbbf2\") " pod="openstack/placement-c5fdc687b-67zqb" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.600851 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"1d7bbc64-bf64-4c75-beb1-ce50a75b3724","Type":"ContainerStarted","Data":"d65dd4846bab7638b3db1bfe7b618eefd48c2ccb4dffc59c7c3fc6677f663022"} Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.601754 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"1d7bbc64-bf64-4c75-beb1-ce50a75b3724","Type":"ContainerStarted","Data":"95753abeb95bd8c5572abc6e0b17c0930c983265d1f0f20e015753595af688ae"} Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.601859 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"1d7bbc64-bf64-4c75-beb1-ce50a75b3724","Type":"ContainerStarted","Data":"9ccded9a5d165e29a24c8f57e7ccc9c974019172565af74fbd3c69ae15bcd1cb"} Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.601934 
4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.617711 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=2.6176877689999998 podStartE2EDuration="2.617687769s" podCreationTimestamp="2026-02-26 22:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:18:12.615083518 +0000 UTC m=+1377.694574079" watchObservedRunningTime="2026-02-26 22:18:12.617687769 +0000 UTC m=+1377.697178310" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.659683 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adaa2732-718e-416c-abbb-d344cd1fbbf2-combined-ca-bundle\") pod \"placement-c5fdc687b-67zqb\" (UID: \"adaa2732-718e-416c-abbb-d344cd1fbbf2\") " pod="openstack/placement-c5fdc687b-67zqb" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.659980 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq5f9\" (UniqueName: \"kubernetes.io/projected/adaa2732-718e-416c-abbb-d344cd1fbbf2-kube-api-access-gq5f9\") pod \"placement-c5fdc687b-67zqb\" (UID: \"adaa2732-718e-416c-abbb-d344cd1fbbf2\") " pod="openstack/placement-c5fdc687b-67zqb" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.660104 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adaa2732-718e-416c-abbb-d344cd1fbbf2-scripts\") pod \"placement-c5fdc687b-67zqb\" (UID: \"adaa2732-718e-416c-abbb-d344cd1fbbf2\") " pod="openstack/placement-c5fdc687b-67zqb" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.660291 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/adaa2732-718e-416c-abbb-d344cd1fbbf2-public-tls-certs\") pod \"placement-c5fdc687b-67zqb\" (UID: \"adaa2732-718e-416c-abbb-d344cd1fbbf2\") " pod="openstack/placement-c5fdc687b-67zqb" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.660469 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adaa2732-718e-416c-abbb-d344cd1fbbf2-logs\") pod \"placement-c5fdc687b-67zqb\" (UID: \"adaa2732-718e-416c-abbb-d344cd1fbbf2\") " pod="openstack/placement-c5fdc687b-67zqb" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.660614 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/adaa2732-718e-416c-abbb-d344cd1fbbf2-internal-tls-certs\") pod \"placement-c5fdc687b-67zqb\" (UID: \"adaa2732-718e-416c-abbb-d344cd1fbbf2\") " pod="openstack/placement-c5fdc687b-67zqb" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.660768 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adaa2732-718e-416c-abbb-d344cd1fbbf2-config-data\") pod \"placement-c5fdc687b-67zqb\" (UID: \"adaa2732-718e-416c-abbb-d344cd1fbbf2\") " pod="openstack/placement-c5fdc687b-67zqb" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.660859 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adaa2732-718e-416c-abbb-d344cd1fbbf2-logs\") pod \"placement-c5fdc687b-67zqb\" (UID: \"adaa2732-718e-416c-abbb-d344cd1fbbf2\") " pod="openstack/placement-c5fdc687b-67zqb" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.664956 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/adaa2732-718e-416c-abbb-d344cd1fbbf2-internal-tls-certs\") pod \"placement-c5fdc687b-67zqb\" (UID: 
\"adaa2732-718e-416c-abbb-d344cd1fbbf2\") " pod="openstack/placement-c5fdc687b-67zqb" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.665678 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/adaa2732-718e-416c-abbb-d344cd1fbbf2-public-tls-certs\") pod \"placement-c5fdc687b-67zqb\" (UID: \"adaa2732-718e-416c-abbb-d344cd1fbbf2\") " pod="openstack/placement-c5fdc687b-67zqb" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.665720 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adaa2732-718e-416c-abbb-d344cd1fbbf2-combined-ca-bundle\") pod \"placement-c5fdc687b-67zqb\" (UID: \"adaa2732-718e-416c-abbb-d344cd1fbbf2\") " pod="openstack/placement-c5fdc687b-67zqb" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.665886 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adaa2732-718e-416c-abbb-d344cd1fbbf2-scripts\") pod \"placement-c5fdc687b-67zqb\" (UID: \"adaa2732-718e-416c-abbb-d344cd1fbbf2\") " pod="openstack/placement-c5fdc687b-67zqb" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.666424 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adaa2732-718e-416c-abbb-d344cd1fbbf2-config-data\") pod \"placement-c5fdc687b-67zqb\" (UID: \"adaa2732-718e-416c-abbb-d344cd1fbbf2\") " pod="openstack/placement-c5fdc687b-67zqb" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.675688 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq5f9\" (UniqueName: \"kubernetes.io/projected/adaa2732-718e-416c-abbb-d344cd1fbbf2-kube-api-access-gq5f9\") pod \"placement-c5fdc687b-67zqb\" (UID: \"adaa2732-718e-416c-abbb-d344cd1fbbf2\") " pod="openstack/placement-c5fdc687b-67zqb" Feb 26 22:18:12 crc kubenswrapper[4910]: 
I0226 22:18:12.720952 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-79f8b87c99-7mvnv" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.792468 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c5fdc687b-67zqb" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.916525 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.918071 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.920057 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.920630 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.920786 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-c8pkx" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.938560 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.966822 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/71b02cae-fc90-4a97-967d-9a539d5ab671-openstack-config\") pod \"openstackclient\" (UID: \"71b02cae-fc90-4a97-967d-9a539d5ab671\") " pod="openstack/openstackclient" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.966884 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gmxm\" (UniqueName: \"kubernetes.io/projected/71b02cae-fc90-4a97-967d-9a539d5ab671-kube-api-access-2gmxm\") pod 
\"openstackclient\" (UID: \"71b02cae-fc90-4a97-967d-9a539d5ab671\") " pod="openstack/openstackclient" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.966927 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b02cae-fc90-4a97-967d-9a539d5ab671-combined-ca-bundle\") pod \"openstackclient\" (UID: \"71b02cae-fc90-4a97-967d-9a539d5ab671\") " pod="openstack/openstackclient" Feb 26 22:18:12 crc kubenswrapper[4910]: I0226 22:18:12.967072 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/71b02cae-fc90-4a97-967d-9a539d5ab671-openstack-config-secret\") pod \"openstackclient\" (UID: \"71b02cae-fc90-4a97-967d-9a539d5ab671\") " pod="openstack/openstackclient" Feb 26 22:18:13 crc kubenswrapper[4910]: I0226 22:18:13.062915 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 26 22:18:13 crc kubenswrapper[4910]: I0226 22:18:13.068731 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/71b02cae-fc90-4a97-967d-9a539d5ab671-openstack-config\") pod \"openstackclient\" (UID: \"71b02cae-fc90-4a97-967d-9a539d5ab671\") " pod="openstack/openstackclient" Feb 26 22:18:13 crc kubenswrapper[4910]: I0226 22:18:13.068798 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gmxm\" (UniqueName: \"kubernetes.io/projected/71b02cae-fc90-4a97-967d-9a539d5ab671-kube-api-access-2gmxm\") pod \"openstackclient\" (UID: \"71b02cae-fc90-4a97-967d-9a539d5ab671\") " pod="openstack/openstackclient" Feb 26 22:18:13 crc kubenswrapper[4910]: I0226 22:18:13.068862 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/71b02cae-fc90-4a97-967d-9a539d5ab671-combined-ca-bundle\") pod \"openstackclient\" (UID: \"71b02cae-fc90-4a97-967d-9a539d5ab671\") " pod="openstack/openstackclient" Feb 26 22:18:13 crc kubenswrapper[4910]: I0226 22:18:13.068914 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/71b02cae-fc90-4a97-967d-9a539d5ab671-openstack-config-secret\") pod \"openstackclient\" (UID: \"71b02cae-fc90-4a97-967d-9a539d5ab671\") " pod="openstack/openstackclient" Feb 26 22:18:13 crc kubenswrapper[4910]: I0226 22:18:13.070703 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/71b02cae-fc90-4a97-967d-9a539d5ab671-openstack-config\") pod \"openstackclient\" (UID: \"71b02cae-fc90-4a97-967d-9a539d5ab671\") " pod="openstack/openstackclient" Feb 26 22:18:13 crc kubenswrapper[4910]: I0226 22:18:13.084124 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b02cae-fc90-4a97-967d-9a539d5ab671-combined-ca-bundle\") pod \"openstackclient\" (UID: \"71b02cae-fc90-4a97-967d-9a539d5ab671\") " pod="openstack/openstackclient" Feb 26 22:18:13 crc kubenswrapper[4910]: I0226 22:18:13.094612 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gmxm\" (UniqueName: \"kubernetes.io/projected/71b02cae-fc90-4a97-967d-9a539d5ab671-kube-api-access-2gmxm\") pod \"openstackclient\" (UID: \"71b02cae-fc90-4a97-967d-9a539d5ab671\") " pod="openstack/openstackclient" Feb 26 22:18:13 crc kubenswrapper[4910]: I0226 22:18:13.110323 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/71b02cae-fc90-4a97-967d-9a539d5ab671-openstack-config-secret\") pod \"openstackclient\" (UID: \"71b02cae-fc90-4a97-967d-9a539d5ab671\") " 
pod="openstack/openstackclient" Feb 26 22:18:13 crc kubenswrapper[4910]: I0226 22:18:13.241264 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 26 22:18:13 crc kubenswrapper[4910]: I0226 22:18:13.433757 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c5fdc687b-67zqb"] Feb 26 22:18:13 crc kubenswrapper[4910]: W0226 22:18:13.439947 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadaa2732_718e_416c_abbb_d344cd1fbbf2.slice/crio-a49c634016c4cb044dd0a3cf80f121fe5dca63b1bc7a300bb178cc985f0bd755 WatchSource:0}: Error finding container a49c634016c4cb044dd0a3cf80f121fe5dca63b1bc7a300bb178cc985f0bd755: Status 404 returned error can't find the container with id a49c634016c4cb044dd0a3cf80f121fe5dca63b1bc7a300bb178cc985f0bd755 Feb 26 22:18:13 crc kubenswrapper[4910]: I0226 22:18:13.609071 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c5fdc687b-67zqb" event={"ID":"adaa2732-718e-416c-abbb-d344cd1fbbf2","Type":"ContainerStarted","Data":"a49c634016c4cb044dd0a3cf80f121fe5dca63b1bc7a300bb178cc985f0bd755"} Feb 26 22:18:13 crc kubenswrapper[4910]: I0226 22:18:13.768770 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 26 22:18:13 crc kubenswrapper[4910]: W0226 22:18:13.786211 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71b02cae_fc90_4a97_967d_9a539d5ab671.slice/crio-fb592f25be32082673714d135a0238c3d10ba4bda668048fab501c165747f3d9 WatchSource:0}: Error finding container fb592f25be32082673714d135a0238c3d10ba4bda668048fab501c165747f3d9: Status 404 returned error can't find the container with id fb592f25be32082673714d135a0238c3d10ba4bda668048fab501c165747f3d9 Feb 26 22:18:14 crc kubenswrapper[4910]: I0226 22:18:14.620531 4910 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"71b02cae-fc90-4a97-967d-9a539d5ab671","Type":"ContainerStarted","Data":"fb592f25be32082673714d135a0238c3d10ba4bda668048fab501c165747f3d9"} Feb 26 22:18:14 crc kubenswrapper[4910]: I0226 22:18:14.622194 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c5fdc687b-67zqb" event={"ID":"adaa2732-718e-416c-abbb-d344cd1fbbf2","Type":"ContainerStarted","Data":"dc4705a2a1a6d86ec89327a5bea11351a0530efacb11c5a7fa05d8274c0275ac"} Feb 26 22:18:14 crc kubenswrapper[4910]: I0226 22:18:14.622216 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c5fdc687b-67zqb" event={"ID":"adaa2732-718e-416c-abbb-d344cd1fbbf2","Type":"ContainerStarted","Data":"7c850bd09c0858034f7c477974300fb3cbadb8bab6961d5703438dcda24b2eaf"} Feb 26 22:18:14 crc kubenswrapper[4910]: I0226 22:18:14.622417 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-c5fdc687b-67zqb" Feb 26 22:18:14 crc kubenswrapper[4910]: I0226 22:18:14.646728 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-c5fdc687b-67zqb" podStartSLOduration=2.646708513 podStartE2EDuration="2.646708513s" podCreationTimestamp="2026-02-26 22:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:18:14.641542532 +0000 UTC m=+1379.721033073" watchObservedRunningTime="2026-02-26 22:18:14.646708513 +0000 UTC m=+1379.726199064" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.098281 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.161147 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9b8rx"] Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.161662 4910 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" podUID="d8f9c22d-356d-4c49-bc7f-054f480770ec" containerName="dnsmasq-dns" containerID="cri-o://9f86891f1b4382c0250b50f5bd259177d00860c59e31f57ee763293c4d35c44c" gracePeriod=10 Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.539896 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.628733 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-scripts\") pod \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\" (UID: \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\") " Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.628799 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-config-data\") pod \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\" (UID: \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\") " Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.628835 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-combined-ca-bundle\") pod \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\" (UID: \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\") " Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.628927 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-config-data-custom\") pod \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\" (UID: \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\") " Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.642300 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/secret/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fdfbe459-2ae5-4d85-9d94-7aeb0c845ead" (UID: "fdfbe459-2ae5-4d85-9d94-7aeb0c845ead"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.658268 4910 generic.go:334] "Generic (PLEG): container finished" podID="fdfbe459-2ae5-4d85-9d94-7aeb0c845ead" containerID="0c2328084ab92d9874c796673d3e39b70b38e67dafe235901c43b010b1129798" exitCode=0 Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.658706 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.659262 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead","Type":"ContainerDied","Data":"0c2328084ab92d9874c796673d3e39b70b38e67dafe235901c43b010b1129798"} Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.659325 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead","Type":"ContainerDied","Data":"b2cfad853205afa1bf1f04b1ad2569e53bd3fa496d81365d1260f965dc6b889a"} Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.659344 4910 scope.go:117] "RemoveContainer" containerID="0c2328084ab92d9874c796673d3e39b70b38e67dafe235901c43b010b1129798" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.662591 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-scripts" (OuterVolumeSpecName: "scripts") pod "fdfbe459-2ae5-4d85-9d94-7aeb0c845ead" (UID: "fdfbe459-2ae5-4d85-9d94-7aeb0c845ead"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.670009 4910 generic.go:334] "Generic (PLEG): container finished" podID="d8f9c22d-356d-4c49-bc7f-054f480770ec" containerID="9f86891f1b4382c0250b50f5bd259177d00860c59e31f57ee763293c4d35c44c" exitCode=0 Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.670957 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" event={"ID":"d8f9c22d-356d-4c49-bc7f-054f480770ec","Type":"ContainerDied","Data":"9f86891f1b4382c0250b50f5bd259177d00860c59e31f57ee763293c4d35c44c"} Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.671000 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-c5fdc687b-67zqb" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.678649 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-config-data" (OuterVolumeSpecName: "config-data") pod "fdfbe459-2ae5-4d85-9d94-7aeb0c845ead" (UID: "fdfbe459-2ae5-4d85-9d94-7aeb0c845ead"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.691725 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.718846 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdfbe459-2ae5-4d85-9d94-7aeb0c845ead" (UID: "fdfbe459-2ae5-4d85-9d94-7aeb0c845ead"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.733447 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-certs\") pod \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\" (UID: \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\") " Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.733529 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b2fr\" (UniqueName: \"kubernetes.io/projected/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-kube-api-access-5b2fr\") pod \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\" (UID: \"fdfbe459-2ae5-4d85-9d94-7aeb0c845ead\") " Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.736755 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-kube-api-access-5b2fr" (OuterVolumeSpecName: "kube-api-access-5b2fr") pod "fdfbe459-2ae5-4d85-9d94-7aeb0c845ead" (UID: "fdfbe459-2ae5-4d85-9d94-7aeb0c845ead"). InnerVolumeSpecName "kube-api-access-5b2fr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.737979 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-certs" (OuterVolumeSpecName: "certs") pod "fdfbe459-2ae5-4d85-9d94-7aeb0c845ead" (UID: "fdfbe459-2ae5-4d85-9d94-7aeb0c845ead"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.741746 4910 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-certs\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.741794 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5b2fr\" (UniqueName: \"kubernetes.io/projected/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-kube-api-access-5b2fr\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.741833 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.741851 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.741894 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.741965 4910 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.759859 4910 scope.go:117] "RemoveContainer" containerID="0c2328084ab92d9874c796673d3e39b70b38e67dafe235901c43b010b1129798" Feb 26 22:18:15 crc kubenswrapper[4910]: E0226 22:18:15.760576 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"0c2328084ab92d9874c796673d3e39b70b38e67dafe235901c43b010b1129798\": container with ID starting with 0c2328084ab92d9874c796673d3e39b70b38e67dafe235901c43b010b1129798 not found: ID does not exist" containerID="0c2328084ab92d9874c796673d3e39b70b38e67dafe235901c43b010b1129798" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.760626 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c2328084ab92d9874c796673d3e39b70b38e67dafe235901c43b010b1129798"} err="failed to get container status \"0c2328084ab92d9874c796673d3e39b70b38e67dafe235901c43b010b1129798\": rpc error: code = NotFound desc = could not find container \"0c2328084ab92d9874c796673d3e39b70b38e67dafe235901c43b010b1129798\": container with ID starting with 0c2328084ab92d9874c796673d3e39b70b38e67dafe235901c43b010b1129798 not found: ID does not exist" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.851145 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkclf\" (UniqueName: \"kubernetes.io/projected/d8f9c22d-356d-4c49-bc7f-054f480770ec-kube-api-access-gkclf\") pod \"d8f9c22d-356d-4c49-bc7f-054f480770ec\" (UID: \"d8f9c22d-356d-4c49-bc7f-054f480770ec\") " Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.851202 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-ovsdbserver-sb\") pod \"d8f9c22d-356d-4c49-bc7f-054f480770ec\" (UID: \"d8f9c22d-356d-4c49-bc7f-054f480770ec\") " Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.851244 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-dns-svc\") pod \"d8f9c22d-356d-4c49-bc7f-054f480770ec\" (UID: \"d8f9c22d-356d-4c49-bc7f-054f480770ec\") " Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 
22:18:15.851328 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-ovsdbserver-nb\") pod \"d8f9c22d-356d-4c49-bc7f-054f480770ec\" (UID: \"d8f9c22d-356d-4c49-bc7f-054f480770ec\") " Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.851381 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-config\") pod \"d8f9c22d-356d-4c49-bc7f-054f480770ec\" (UID: \"d8f9c22d-356d-4c49-bc7f-054f480770ec\") " Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.851395 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-dns-swift-storage-0\") pod \"d8f9c22d-356d-4c49-bc7f-054f480770ec\" (UID: \"d8f9c22d-356d-4c49-bc7f-054f480770ec\") " Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.868112 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8f9c22d-356d-4c49-bc7f-054f480770ec-kube-api-access-gkclf" (OuterVolumeSpecName: "kube-api-access-gkclf") pod "d8f9c22d-356d-4c49-bc7f-054f480770ec" (UID: "d8f9c22d-356d-4c49-bc7f-054f480770ec"). InnerVolumeSpecName "kube-api-access-gkclf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.911170 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d8f9c22d-356d-4c49-bc7f-054f480770ec" (UID: "d8f9c22d-356d-4c49-bc7f-054f480770ec"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.930368 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d8f9c22d-356d-4c49-bc7f-054f480770ec" (UID: "d8f9c22d-356d-4c49-bc7f-054f480770ec"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.940754 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-config" (OuterVolumeSpecName: "config") pod "d8f9c22d-356d-4c49-bc7f-054f480770ec" (UID: "d8f9c22d-356d-4c49-bc7f-054f480770ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.959124 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkclf\" (UniqueName: \"kubernetes.io/projected/d8f9c22d-356d-4c49-bc7f-054f480770ec-kube-api-access-gkclf\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.959207 4910 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.959237 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.959329 4910 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 
26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.960603 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d8f9c22d-356d-4c49-bc7f-054f480770ec" (UID: "d8f9c22d-356d-4c49-bc7f-054f480770ec"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.965486 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d8f9c22d-356d-4c49-bc7f-054f480770ec" (UID: "d8f9c22d-356d-4c49-bc7f-054f480770ec"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:18:15 crc kubenswrapper[4910]: I0226 22:18:15.990022 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.005156 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.018282 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 26 22:18:16 crc kubenswrapper[4910]: E0226 22:18:16.018807 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdfbe459-2ae5-4d85-9d94-7aeb0c845ead" containerName="cloudkitty-proc" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.018827 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdfbe459-2ae5-4d85-9d94-7aeb0c845ead" containerName="cloudkitty-proc" Feb 26 22:18:16 crc kubenswrapper[4910]: E0226 22:18:16.018851 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8f9c22d-356d-4c49-bc7f-054f480770ec" containerName="dnsmasq-dns" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.018858 4910 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="d8f9c22d-356d-4c49-bc7f-054f480770ec" containerName="dnsmasq-dns" Feb 26 22:18:16 crc kubenswrapper[4910]: E0226 22:18:16.018891 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8f9c22d-356d-4c49-bc7f-054f480770ec" containerName="init" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.018898 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8f9c22d-356d-4c49-bc7f-054f480770ec" containerName="init" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.019071 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdfbe459-2ae5-4d85-9d94-7aeb0c845ead" containerName="cloudkitty-proc" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.019097 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8f9c22d-356d-4c49-bc7f-054f480770ec" containerName="dnsmasq-dns" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.019849 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.022927 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.025464 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.063812 4910 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.063842 4910 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8f9c22d-356d-4c49-bc7f-054f480770ec-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.165639 4910 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3b6f00-440e-46b6-a08a-a3219e244da6-config-data\") pod \"cloudkitty-proc-0\" (UID: \"6d3b6f00-440e-46b6-a08a-a3219e244da6\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.165713 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d3b6f00-440e-46b6-a08a-a3219e244da6-scripts\") pod \"cloudkitty-proc-0\" (UID: \"6d3b6f00-440e-46b6-a08a-a3219e244da6\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.165766 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3b6f00-440e-46b6-a08a-a3219e244da6-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"6d3b6f00-440e-46b6-a08a-a3219e244da6\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.165782 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td59p\" (UniqueName: \"kubernetes.io/projected/6d3b6f00-440e-46b6-a08a-a3219e244da6-kube-api-access-td59p\") pod \"cloudkitty-proc-0\" (UID: \"6d3b6f00-440e-46b6-a08a-a3219e244da6\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.165812 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d3b6f00-440e-46b6-a08a-a3219e244da6-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"6d3b6f00-440e-46b6-a08a-a3219e244da6\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.165896 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"certs\" (UniqueName: \"kubernetes.io/projected/6d3b6f00-440e-46b6-a08a-a3219e244da6-certs\") pod \"cloudkitty-proc-0\" (UID: \"6d3b6f00-440e-46b6-a08a-a3219e244da6\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.194820 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-74c6855597-x8m7j" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.252226 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-799bb5d856-9h5p7"] Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.252734 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-799bb5d856-9h5p7" podUID="88d3a1d1-ec96-476e-80bc-ad3784a06411" containerName="neutron-api" containerID="cri-o://5bea8fd8f5c64b550c53ab54de57b8f9670434af453503f202f06a0bd833920c" gracePeriod=30 Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.252984 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-799bb5d856-9h5p7" podUID="88d3a1d1-ec96-476e-80bc-ad3784a06411" containerName="neutron-httpd" containerID="cri-o://5cecc1f4311b11b9736f89a01c748a8125b934e1487389492770c33be995c697" gracePeriod=30 Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.267888 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/6d3b6f00-440e-46b6-a08a-a3219e244da6-certs\") pod \"cloudkitty-proc-0\" (UID: \"6d3b6f00-440e-46b6-a08a-a3219e244da6\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.267963 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3b6f00-440e-46b6-a08a-a3219e244da6-config-data\") pod \"cloudkitty-proc-0\" (UID: \"6d3b6f00-440e-46b6-a08a-a3219e244da6\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:16 crc 
kubenswrapper[4910]: I0226 22:18:16.268012 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d3b6f00-440e-46b6-a08a-a3219e244da6-scripts\") pod \"cloudkitty-proc-0\" (UID: \"6d3b6f00-440e-46b6-a08a-a3219e244da6\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.268044 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3b6f00-440e-46b6-a08a-a3219e244da6-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"6d3b6f00-440e-46b6-a08a-a3219e244da6\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.268063 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td59p\" (UniqueName: \"kubernetes.io/projected/6d3b6f00-440e-46b6-a08a-a3219e244da6-kube-api-access-td59p\") pod \"cloudkitty-proc-0\" (UID: \"6d3b6f00-440e-46b6-a08a-a3219e244da6\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.268089 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d3b6f00-440e-46b6-a08a-a3219e244da6-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"6d3b6f00-440e-46b6-a08a-a3219e244da6\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.273106 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d3b6f00-440e-46b6-a08a-a3219e244da6-scripts\") pod \"cloudkitty-proc-0\" (UID: \"6d3b6f00-440e-46b6-a08a-a3219e244da6\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.274195 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: 
\"kubernetes.io/projected/6d3b6f00-440e-46b6-a08a-a3219e244da6-certs\") pod \"cloudkitty-proc-0\" (UID: \"6d3b6f00-440e-46b6-a08a-a3219e244da6\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.275097 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3b6f00-440e-46b6-a08a-a3219e244da6-config-data\") pod \"cloudkitty-proc-0\" (UID: \"6d3b6f00-440e-46b6-a08a-a3219e244da6\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.275651 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3b6f00-440e-46b6-a08a-a3219e244da6-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"6d3b6f00-440e-46b6-a08a-a3219e244da6\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.279234 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d3b6f00-440e-46b6-a08a-a3219e244da6-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"6d3b6f00-440e-46b6-a08a-a3219e244da6\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.285691 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td59p\" (UniqueName: \"kubernetes.io/projected/6d3b6f00-440e-46b6-a08a-a3219e244da6-kube-api-access-td59p\") pod \"cloudkitty-proc-0\" (UID: \"6d3b6f00-440e-46b6-a08a-a3219e244da6\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.356902 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 26 22:18:16 crc kubenswrapper[4910]: E0226 22:18:16.690494 4910 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdfbe459_2ae5_4d85_9d94_7aeb0c845ead.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88d3a1d1_ec96_476e_80bc_ad3784a06411.slice/crio-conmon-5cecc1f4311b11b9736f89a01c748a8125b934e1487389492770c33be995c697.scope\": RecentStats: unable to find data in memory cache]" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.702195 4910 generic.go:334] "Generic (PLEG): container finished" podID="88d3a1d1-ec96-476e-80bc-ad3784a06411" containerID="5cecc1f4311b11b9736f89a01c748a8125b934e1487389492770c33be995c697" exitCode=0 Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.702244 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-799bb5d856-9h5p7" event={"ID":"88d3a1d1-ec96-476e-80bc-ad3784a06411","Type":"ContainerDied","Data":"5cecc1f4311b11b9736f89a01c748a8125b934e1487389492770c33be995c697"} Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.704947 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.705564 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9b8rx" event={"ID":"d8f9c22d-356d-4c49-bc7f-054f480770ec","Type":"ContainerDied","Data":"16c3b8ea06d7b468ee46eb1e615cc3dbfda5a0b0a55491481891470555d6cfe6"} Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.705634 4910 scope.go:117] "RemoveContainer" containerID="9f86891f1b4382c0250b50f5bd259177d00860c59e31f57ee763293c4d35c44c" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.758892 4910 scope.go:117] "RemoveContainer" containerID="d258b7cc7539c8b5434ef56335bac02a21cc0cb040e8f1a690c49f5d4b8c5d04" Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.763629 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9b8rx"] Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.789957 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9b8rx"] Feb 26 22:18:16 crc kubenswrapper[4910]: I0226 22:18:16.888692 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.230618 4910 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="1959169a-37cd-4aa3-9cf4-cbbdc99dde4f" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.230711 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6fc88b699f-nwbd7"] Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.233081 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.234899 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.235631 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.235851 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.239910 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6fc88b699f-nwbd7"] Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.397178 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1277dd6e-0e1b-4693-9923-a5915b981d6d-config-data\") pod \"swift-proxy-6fc88b699f-nwbd7\" (UID: \"1277dd6e-0e1b-4693-9923-a5915b981d6d\") " pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.397215 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1277dd6e-0e1b-4693-9923-a5915b981d6d-etc-swift\") pod \"swift-proxy-6fc88b699f-nwbd7\" (UID: \"1277dd6e-0e1b-4693-9923-a5915b981d6d\") " pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.397239 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1277dd6e-0e1b-4693-9923-a5915b981d6d-combined-ca-bundle\") pod \"swift-proxy-6fc88b699f-nwbd7\" (UID: \"1277dd6e-0e1b-4693-9923-a5915b981d6d\") " pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 
22:18:17.397256 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1277dd6e-0e1b-4693-9923-a5915b981d6d-internal-tls-certs\") pod \"swift-proxy-6fc88b699f-nwbd7\" (UID: \"1277dd6e-0e1b-4693-9923-a5915b981d6d\") " pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.398539 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1277dd6e-0e1b-4693-9923-a5915b981d6d-log-httpd\") pod \"swift-proxy-6fc88b699f-nwbd7\" (UID: \"1277dd6e-0e1b-4693-9923-a5915b981d6d\") " pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.398629 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l77cn\" (UniqueName: \"kubernetes.io/projected/1277dd6e-0e1b-4693-9923-a5915b981d6d-kube-api-access-l77cn\") pod \"swift-proxy-6fc88b699f-nwbd7\" (UID: \"1277dd6e-0e1b-4693-9923-a5915b981d6d\") " pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.398867 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1277dd6e-0e1b-4693-9923-a5915b981d6d-run-httpd\") pod \"swift-proxy-6fc88b699f-nwbd7\" (UID: \"1277dd6e-0e1b-4693-9923-a5915b981d6d\") " pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.398897 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1277dd6e-0e1b-4693-9923-a5915b981d6d-public-tls-certs\") pod \"swift-proxy-6fc88b699f-nwbd7\" (UID: \"1277dd6e-0e1b-4693-9923-a5915b981d6d\") " pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 
22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.500394 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1277dd6e-0e1b-4693-9923-a5915b981d6d-run-httpd\") pod \"swift-proxy-6fc88b699f-nwbd7\" (UID: \"1277dd6e-0e1b-4693-9923-a5915b981d6d\") " pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.500428 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1277dd6e-0e1b-4693-9923-a5915b981d6d-public-tls-certs\") pod \"swift-proxy-6fc88b699f-nwbd7\" (UID: \"1277dd6e-0e1b-4693-9923-a5915b981d6d\") " pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.500470 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1277dd6e-0e1b-4693-9923-a5915b981d6d-config-data\") pod \"swift-proxy-6fc88b699f-nwbd7\" (UID: \"1277dd6e-0e1b-4693-9923-a5915b981d6d\") " pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.500489 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1277dd6e-0e1b-4693-9923-a5915b981d6d-etc-swift\") pod \"swift-proxy-6fc88b699f-nwbd7\" (UID: \"1277dd6e-0e1b-4693-9923-a5915b981d6d\") " pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.500512 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1277dd6e-0e1b-4693-9923-a5915b981d6d-combined-ca-bundle\") pod \"swift-proxy-6fc88b699f-nwbd7\" (UID: \"1277dd6e-0e1b-4693-9923-a5915b981d6d\") " pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.500527 4910 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1277dd6e-0e1b-4693-9923-a5915b981d6d-internal-tls-certs\") pod \"swift-proxy-6fc88b699f-nwbd7\" (UID: \"1277dd6e-0e1b-4693-9923-a5915b981d6d\") " pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.500577 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1277dd6e-0e1b-4693-9923-a5915b981d6d-log-httpd\") pod \"swift-proxy-6fc88b699f-nwbd7\" (UID: \"1277dd6e-0e1b-4693-9923-a5915b981d6d\") " pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.500601 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l77cn\" (UniqueName: \"kubernetes.io/projected/1277dd6e-0e1b-4693-9923-a5915b981d6d-kube-api-access-l77cn\") pod \"swift-proxy-6fc88b699f-nwbd7\" (UID: \"1277dd6e-0e1b-4693-9923-a5915b981d6d\") " pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.501435 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1277dd6e-0e1b-4693-9923-a5915b981d6d-log-httpd\") pod \"swift-proxy-6fc88b699f-nwbd7\" (UID: \"1277dd6e-0e1b-4693-9923-a5915b981d6d\") " pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.501860 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1277dd6e-0e1b-4693-9923-a5915b981d6d-run-httpd\") pod \"swift-proxy-6fc88b699f-nwbd7\" (UID: \"1277dd6e-0e1b-4693-9923-a5915b981d6d\") " pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.505966 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" 
(UniqueName: \"kubernetes.io/projected/1277dd6e-0e1b-4693-9923-a5915b981d6d-etc-swift\") pod \"swift-proxy-6fc88b699f-nwbd7\" (UID: \"1277dd6e-0e1b-4693-9923-a5915b981d6d\") " pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.509767 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1277dd6e-0e1b-4693-9923-a5915b981d6d-combined-ca-bundle\") pod \"swift-proxy-6fc88b699f-nwbd7\" (UID: \"1277dd6e-0e1b-4693-9923-a5915b981d6d\") " pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.512909 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1277dd6e-0e1b-4693-9923-a5915b981d6d-internal-tls-certs\") pod \"swift-proxy-6fc88b699f-nwbd7\" (UID: \"1277dd6e-0e1b-4693-9923-a5915b981d6d\") " pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.520385 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1277dd6e-0e1b-4693-9923-a5915b981d6d-public-tls-certs\") pod \"swift-proxy-6fc88b699f-nwbd7\" (UID: \"1277dd6e-0e1b-4693-9923-a5915b981d6d\") " pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.529867 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l77cn\" (UniqueName: \"kubernetes.io/projected/1277dd6e-0e1b-4693-9923-a5915b981d6d-kube-api-access-l77cn\") pod \"swift-proxy-6fc88b699f-nwbd7\" (UID: \"1277dd6e-0e1b-4693-9923-a5915b981d6d\") " pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.538958 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1277dd6e-0e1b-4693-9923-a5915b981d6d-config-data\") pod \"swift-proxy-6fc88b699f-nwbd7\" (UID: \"1277dd6e-0e1b-4693-9923-a5915b981d6d\") " pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.564989 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.749298 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"6d3b6f00-440e-46b6-a08a-a3219e244da6","Type":"ContainerStarted","Data":"4267f4b88f0c355f7e2cced4125c0fd1b7ddf380608dc56dba2d27282b64e1d6"} Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.749573 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"6d3b6f00-440e-46b6-a08a-a3219e244da6","Type":"ContainerStarted","Data":"6add6f037b275386fb2ecf7903290d89581906fd2507547f8ed14cbd022a1302"} Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.777532 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.777512584 podStartE2EDuration="2.777512584s" podCreationTimestamp="2026-02-26 22:18:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:18:17.763153243 +0000 UTC m=+1382.842643784" watchObservedRunningTime="2026-02-26 22:18:17.777512584 +0000 UTC m=+1382.857003125" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.914743 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8f9c22d-356d-4c49-bc7f-054f480770ec" path="/var/lib/kubelet/pods/d8f9c22d-356d-4c49-bc7f-054f480770ec/volumes" Feb 26 22:18:17 crc kubenswrapper[4910]: I0226 22:18:17.915340 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdfbe459-2ae5-4d85-9d94-7aeb0c845ead" 
path="/var/lib/kubelet/pods/fdfbe459-2ae5-4d85-9d94-7aeb0c845ead/volumes" Feb 26 22:18:18 crc kubenswrapper[4910]: I0226 22:18:18.109756 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6fc88b699f-nwbd7"] Feb 26 22:18:18 crc kubenswrapper[4910]: I0226 22:18:18.836789 4910 generic.go:334] "Generic (PLEG): container finished" podID="88d3a1d1-ec96-476e-80bc-ad3784a06411" containerID="5bea8fd8f5c64b550c53ab54de57b8f9670434af453503f202f06a0bd833920c" exitCode=0 Feb 26 22:18:18 crc kubenswrapper[4910]: I0226 22:18:18.837211 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-799bb5d856-9h5p7" event={"ID":"88d3a1d1-ec96-476e-80bc-ad3784a06411","Type":"ContainerDied","Data":"5bea8fd8f5c64b550c53ab54de57b8f9670434af453503f202f06a0bd833920c"} Feb 26 22:18:18 crc kubenswrapper[4910]: I0226 22:18:18.874457 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6fc88b699f-nwbd7" event={"ID":"1277dd6e-0e1b-4693-9923-a5915b981d6d","Type":"ContainerStarted","Data":"4925fb7fd630b30d31942ff18979b4e78ba2a7b180a621306152911fc5234cb3"} Feb 26 22:18:18 crc kubenswrapper[4910]: I0226 22:18:18.874495 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6fc88b699f-nwbd7" event={"ID":"1277dd6e-0e1b-4693-9923-a5915b981d6d","Type":"ContainerStarted","Data":"1e99054340d11e733be1cdba8629ea5e460abb35d2549beb42917d9d866b0701"} Feb 26 22:18:18 crc kubenswrapper[4910]: I0226 22:18:18.883248 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-799bb5d856-9h5p7" Feb 26 22:18:18 crc kubenswrapper[4910]: I0226 22:18:18.962283 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm7lh\" (UniqueName: \"kubernetes.io/projected/88d3a1d1-ec96-476e-80bc-ad3784a06411-kube-api-access-nm7lh\") pod \"88d3a1d1-ec96-476e-80bc-ad3784a06411\" (UID: \"88d3a1d1-ec96-476e-80bc-ad3784a06411\") " Feb 26 22:18:18 crc kubenswrapper[4910]: I0226 22:18:18.962415 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/88d3a1d1-ec96-476e-80bc-ad3784a06411-ovndb-tls-certs\") pod \"88d3a1d1-ec96-476e-80bc-ad3784a06411\" (UID: \"88d3a1d1-ec96-476e-80bc-ad3784a06411\") " Feb 26 22:18:18 crc kubenswrapper[4910]: I0226 22:18:18.962450 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/88d3a1d1-ec96-476e-80bc-ad3784a06411-httpd-config\") pod \"88d3a1d1-ec96-476e-80bc-ad3784a06411\" (UID: \"88d3a1d1-ec96-476e-80bc-ad3784a06411\") " Feb 26 22:18:18 crc kubenswrapper[4910]: I0226 22:18:18.962590 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/88d3a1d1-ec96-476e-80bc-ad3784a06411-config\") pod \"88d3a1d1-ec96-476e-80bc-ad3784a06411\" (UID: \"88d3a1d1-ec96-476e-80bc-ad3784a06411\") " Feb 26 22:18:18 crc kubenswrapper[4910]: I0226 22:18:18.962724 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88d3a1d1-ec96-476e-80bc-ad3784a06411-combined-ca-bundle\") pod \"88d3a1d1-ec96-476e-80bc-ad3784a06411\" (UID: \"88d3a1d1-ec96-476e-80bc-ad3784a06411\") " Feb 26 22:18:18 crc kubenswrapper[4910]: I0226 22:18:18.999040 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/88d3a1d1-ec96-476e-80bc-ad3784a06411-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "88d3a1d1-ec96-476e-80bc-ad3784a06411" (UID: "88d3a1d1-ec96-476e-80bc-ad3784a06411"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.007152 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88d3a1d1-ec96-476e-80bc-ad3784a06411-kube-api-access-nm7lh" (OuterVolumeSpecName: "kube-api-access-nm7lh") pod "88d3a1d1-ec96-476e-80bc-ad3784a06411" (UID: "88d3a1d1-ec96-476e-80bc-ad3784a06411"). InnerVolumeSpecName "kube-api-access-nm7lh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.051663 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-r8zdl"] Feb 26 22:18:19 crc kubenswrapper[4910]: E0226 22:18:19.052196 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88d3a1d1-ec96-476e-80bc-ad3784a06411" containerName="neutron-api" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.052214 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="88d3a1d1-ec96-476e-80bc-ad3784a06411" containerName="neutron-api" Feb 26 22:18:19 crc kubenswrapper[4910]: E0226 22:18:19.052241 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88d3a1d1-ec96-476e-80bc-ad3784a06411" containerName="neutron-httpd" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.052249 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="88d3a1d1-ec96-476e-80bc-ad3784a06411" containerName="neutron-httpd" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.052493 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="88d3a1d1-ec96-476e-80bc-ad3784a06411" containerName="neutron-api" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.052510 4910 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="88d3a1d1-ec96-476e-80bc-ad3784a06411" containerName="neutron-httpd" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.058723 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-r8zdl" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.066033 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm7lh\" (UniqueName: \"kubernetes.io/projected/88d3a1d1-ec96-476e-80bc-ad3784a06411-kube-api-access-nm7lh\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.066277 4910 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/88d3a1d1-ec96-476e-80bc-ad3784a06411-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.124007 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-r8zdl"] Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.134110 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88d3a1d1-ec96-476e-80bc-ad3784a06411-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88d3a1d1-ec96-476e-80bc-ad3784a06411" (UID: "88d3a1d1-ec96-476e-80bc-ad3784a06411"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.168497 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/316f1fec-ce34-4d56-81c2-2500efe83251-operator-scripts\") pod \"nova-api-db-create-r8zdl\" (UID: \"316f1fec-ce34-4d56-81c2-2500efe83251\") " pod="openstack/nova-api-db-create-r8zdl" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.168572 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfqdz\" (UniqueName: \"kubernetes.io/projected/316f1fec-ce34-4d56-81c2-2500efe83251-kube-api-access-zfqdz\") pod \"nova-api-db-create-r8zdl\" (UID: \"316f1fec-ce34-4d56-81c2-2500efe83251\") " pod="openstack/nova-api-db-create-r8zdl" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.168622 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88d3a1d1-ec96-476e-80bc-ad3784a06411-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.172434 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88d3a1d1-ec96-476e-80bc-ad3784a06411-config" (OuterVolumeSpecName: "config") pod "88d3a1d1-ec96-476e-80bc-ad3784a06411" (UID: "88d3a1d1-ec96-476e-80bc-ad3784a06411"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.212881 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-7tdc4"] Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.229678 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-7tdc4" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.269836 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-7tdc4"] Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.271190 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfqdz\" (UniqueName: \"kubernetes.io/projected/316f1fec-ce34-4d56-81c2-2500efe83251-kube-api-access-zfqdz\") pod \"nova-api-db-create-r8zdl\" (UID: \"316f1fec-ce34-4d56-81c2-2500efe83251\") " pod="openstack/nova-api-db-create-r8zdl" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.271249 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0003bc46-cac8-43e2-af3d-8018cfd4ab2d-operator-scripts\") pod \"nova-cell0-db-create-7tdc4\" (UID: \"0003bc46-cac8-43e2-af3d-8018cfd4ab2d\") " pod="openstack/nova-cell0-db-create-7tdc4" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.277903 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z6h5\" (UniqueName: \"kubernetes.io/projected/0003bc46-cac8-43e2-af3d-8018cfd4ab2d-kube-api-access-4z6h5\") pod \"nova-cell0-db-create-7tdc4\" (UID: \"0003bc46-cac8-43e2-af3d-8018cfd4ab2d\") " pod="openstack/nova-cell0-db-create-7tdc4" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.277956 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/316f1fec-ce34-4d56-81c2-2500efe83251-operator-scripts\") pod \"nova-api-db-create-r8zdl\" (UID: \"316f1fec-ce34-4d56-81c2-2500efe83251\") " pod="openstack/nova-api-db-create-r8zdl" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.278069 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/88d3a1d1-ec96-476e-80bc-ad3784a06411-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.279081 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/316f1fec-ce34-4d56-81c2-2500efe83251-operator-scripts\") pod \"nova-api-db-create-r8zdl\" (UID: \"316f1fec-ce34-4d56-81c2-2500efe83251\") " pod="openstack/nova-api-db-create-r8zdl" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.289673 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88d3a1d1-ec96-476e-80bc-ad3784a06411-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "88d3a1d1-ec96-476e-80bc-ad3784a06411" (UID: "88d3a1d1-ec96-476e-80bc-ad3784a06411"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.308800 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfqdz\" (UniqueName: \"kubernetes.io/projected/316f1fec-ce34-4d56-81c2-2500efe83251-kube-api-access-zfqdz\") pod \"nova-api-db-create-r8zdl\" (UID: \"316f1fec-ce34-4d56-81c2-2500efe83251\") " pod="openstack/nova-api-db-create-r8zdl" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.320215 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-8cae-account-create-update-48ftl"] Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.321573 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8cae-account-create-update-48ftl" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.324426 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.356483 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8cae-account-create-update-48ftl"] Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.379372 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcx9v\" (UniqueName: \"kubernetes.io/projected/354e979d-961b-468d-bd22-d3779ddf79e7-kube-api-access-wcx9v\") pod \"nova-api-8cae-account-create-update-48ftl\" (UID: \"354e979d-961b-468d-bd22-d3779ddf79e7\") " pod="openstack/nova-api-8cae-account-create-update-48ftl" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.379474 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/354e979d-961b-468d-bd22-d3779ddf79e7-operator-scripts\") pod \"nova-api-8cae-account-create-update-48ftl\" (UID: \"354e979d-961b-468d-bd22-d3779ddf79e7\") " pod="openstack/nova-api-8cae-account-create-update-48ftl" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.379503 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z6h5\" (UniqueName: \"kubernetes.io/projected/0003bc46-cac8-43e2-af3d-8018cfd4ab2d-kube-api-access-4z6h5\") pod \"nova-cell0-db-create-7tdc4\" (UID: \"0003bc46-cac8-43e2-af3d-8018cfd4ab2d\") " pod="openstack/nova-cell0-db-create-7tdc4" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.379584 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0003bc46-cac8-43e2-af3d-8018cfd4ab2d-operator-scripts\") pod 
\"nova-cell0-db-create-7tdc4\" (UID: \"0003bc46-cac8-43e2-af3d-8018cfd4ab2d\") " pod="openstack/nova-cell0-db-create-7tdc4" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.379649 4910 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/88d3a1d1-ec96-476e-80bc-ad3784a06411-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.380432 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0003bc46-cac8-43e2-af3d-8018cfd4ab2d-operator-scripts\") pod \"nova-cell0-db-create-7tdc4\" (UID: \"0003bc46-cac8-43e2-af3d-8018cfd4ab2d\") " pod="openstack/nova-cell0-db-create-7tdc4" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.427046 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-r8zdl" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.459500 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-6vgtc"] Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.487285 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/354e979d-961b-468d-bd22-d3779ddf79e7-operator-scripts\") pod \"nova-api-8cae-account-create-update-48ftl\" (UID: \"354e979d-961b-468d-bd22-d3779ddf79e7\") " pod="openstack/nova-api-8cae-account-create-update-48ftl" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.487442 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcx9v\" (UniqueName: \"kubernetes.io/projected/354e979d-961b-468d-bd22-d3779ddf79e7-kube-api-access-wcx9v\") pod \"nova-api-8cae-account-create-update-48ftl\" (UID: \"354e979d-961b-468d-bd22-d3779ddf79e7\") " pod="openstack/nova-api-8cae-account-create-update-48ftl" Feb 26 22:18:19 crc kubenswrapper[4910]: 
I0226 22:18:19.488413 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/354e979d-961b-468d-bd22-d3779ddf79e7-operator-scripts\") pod \"nova-api-8cae-account-create-update-48ftl\" (UID: \"354e979d-961b-468d-bd22-d3779ddf79e7\") " pod="openstack/nova-api-8cae-account-create-update-48ftl" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.492883 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6vgtc"] Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.492989 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6vgtc" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.500730 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z6h5\" (UniqueName: \"kubernetes.io/projected/0003bc46-cac8-43e2-af3d-8018cfd4ab2d-kube-api-access-4z6h5\") pod \"nova-cell0-db-create-7tdc4\" (UID: \"0003bc46-cac8-43e2-af3d-8018cfd4ab2d\") " pod="openstack/nova-cell0-db-create-7tdc4" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.531372 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-09b2-account-create-update-dslwp"] Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.532775 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-09b2-account-create-update-dslwp" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.539650 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-09b2-account-create-update-dslwp"] Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.551661 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.560249 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-7tdc4" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.573450 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcx9v\" (UniqueName: \"kubernetes.io/projected/354e979d-961b-468d-bd22-d3779ddf79e7-kube-api-access-wcx9v\") pod \"nova-api-8cae-account-create-update-48ftl\" (UID: \"354e979d-961b-468d-bd22-d3779ddf79e7\") " pod="openstack/nova-api-8cae-account-create-update-48ftl" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.575909 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8cae-account-create-update-48ftl" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.588689 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngv7s\" (UniqueName: \"kubernetes.io/projected/ed5b19d3-3822-4ad6-bece-dae55bdafa17-kube-api-access-ngv7s\") pod \"nova-cell1-db-create-6vgtc\" (UID: \"ed5b19d3-3822-4ad6-bece-dae55bdafa17\") " pod="openstack/nova-cell1-db-create-6vgtc" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.588727 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8x9g\" (UniqueName: \"kubernetes.io/projected/28366851-2ead-459f-a372-eb59160e674a-kube-api-access-d8x9g\") pod \"nova-cell0-09b2-account-create-update-dslwp\" (UID: \"28366851-2ead-459f-a372-eb59160e674a\") " pod="openstack/nova-cell0-09b2-account-create-update-dslwp" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.588779 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28366851-2ead-459f-a372-eb59160e674a-operator-scripts\") pod \"nova-cell0-09b2-account-create-update-dslwp\" (UID: \"28366851-2ead-459f-a372-eb59160e674a\") " pod="openstack/nova-cell0-09b2-account-create-update-dslwp" 
Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.588915 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed5b19d3-3822-4ad6-bece-dae55bdafa17-operator-scripts\") pod \"nova-cell1-db-create-6vgtc\" (UID: \"ed5b19d3-3822-4ad6-bece-dae55bdafa17\") " pod="openstack/nova-cell1-db-create-6vgtc" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.692230 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed5b19d3-3822-4ad6-bece-dae55bdafa17-operator-scripts\") pod \"nova-cell1-db-create-6vgtc\" (UID: \"ed5b19d3-3822-4ad6-bece-dae55bdafa17\") " pod="openstack/nova-cell1-db-create-6vgtc" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.692504 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngv7s\" (UniqueName: \"kubernetes.io/projected/ed5b19d3-3822-4ad6-bece-dae55bdafa17-kube-api-access-ngv7s\") pod \"nova-cell1-db-create-6vgtc\" (UID: \"ed5b19d3-3822-4ad6-bece-dae55bdafa17\") " pod="openstack/nova-cell1-db-create-6vgtc" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.692527 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8x9g\" (UniqueName: \"kubernetes.io/projected/28366851-2ead-459f-a372-eb59160e674a-kube-api-access-d8x9g\") pod \"nova-cell0-09b2-account-create-update-dslwp\" (UID: \"28366851-2ead-459f-a372-eb59160e674a\") " pod="openstack/nova-cell0-09b2-account-create-update-dslwp" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.692568 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28366851-2ead-459f-a372-eb59160e674a-operator-scripts\") pod \"nova-cell0-09b2-account-create-update-dslwp\" (UID: \"28366851-2ead-459f-a372-eb59160e674a\") " 
pod="openstack/nova-cell0-09b2-account-create-update-dslwp" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.696873 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed5b19d3-3822-4ad6-bece-dae55bdafa17-operator-scripts\") pod \"nova-cell1-db-create-6vgtc\" (UID: \"ed5b19d3-3822-4ad6-bece-dae55bdafa17\") " pod="openstack/nova-cell1-db-create-6vgtc" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.712313 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28366851-2ead-459f-a372-eb59160e674a-operator-scripts\") pod \"nova-cell0-09b2-account-create-update-dslwp\" (UID: \"28366851-2ead-459f-a372-eb59160e674a\") " pod="openstack/nova-cell0-09b2-account-create-update-dslwp" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.741332 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-361a-account-create-update-c844r"] Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.742961 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngv7s\" (UniqueName: \"kubernetes.io/projected/ed5b19d3-3822-4ad6-bece-dae55bdafa17-kube-api-access-ngv7s\") pod \"nova-cell1-db-create-6vgtc\" (UID: \"ed5b19d3-3822-4ad6-bece-dae55bdafa17\") " pod="openstack/nova-cell1-db-create-6vgtc" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.757685 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-361a-account-create-update-c844r" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.796882 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bee5f61-2821-472a-806f-2474c1171a27-operator-scripts\") pod \"nova-cell1-361a-account-create-update-c844r\" (UID: \"3bee5f61-2821-472a-806f-2474c1171a27\") " pod="openstack/nova-cell1-361a-account-create-update-c844r" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.797434 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szsgb\" (UniqueName: \"kubernetes.io/projected/3bee5f61-2821-472a-806f-2474c1171a27-kube-api-access-szsgb\") pod \"nova-cell1-361a-account-create-update-c844r\" (UID: \"3bee5f61-2821-472a-806f-2474c1171a27\") " pod="openstack/nova-cell1-361a-account-create-update-c844r" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.806644 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8x9g\" (UniqueName: \"kubernetes.io/projected/28366851-2ead-459f-a372-eb59160e674a-kube-api-access-d8x9g\") pod \"nova-cell0-09b2-account-create-update-dslwp\" (UID: \"28366851-2ead-459f-a372-eb59160e674a\") " pod="openstack/nova-cell0-09b2-account-create-update-dslwp" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.807070 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.824555 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-361a-account-create-update-c844r"] Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.888582 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-6vgtc" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.898912 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szsgb\" (UniqueName: \"kubernetes.io/projected/3bee5f61-2821-472a-806f-2474c1171a27-kube-api-access-szsgb\") pod \"nova-cell1-361a-account-create-update-c844r\" (UID: \"3bee5f61-2821-472a-806f-2474c1171a27\") " pod="openstack/nova-cell1-361a-account-create-update-c844r" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.908970 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-09b2-account-create-update-dslwp" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.922939 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bee5f61-2821-472a-806f-2474c1171a27-operator-scripts\") pod \"nova-cell1-361a-account-create-update-c844r\" (UID: \"3bee5f61-2821-472a-806f-2474c1171a27\") " pod="openstack/nova-cell1-361a-account-create-update-c844r" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.923543 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bee5f61-2821-472a-806f-2474c1171a27-operator-scripts\") pod \"nova-cell1-361a-account-create-update-c844r\" (UID: \"3bee5f61-2821-472a-806f-2474c1171a27\") " pod="openstack/nova-cell1-361a-account-create-update-c844r" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.940290 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szsgb\" (UniqueName: \"kubernetes.io/projected/3bee5f61-2821-472a-806f-2474c1171a27-kube-api-access-szsgb\") pod \"nova-cell1-361a-account-create-update-c844r\" (UID: \"3bee5f61-2821-472a-806f-2474c1171a27\") " pod="openstack/nova-cell1-361a-account-create-update-c844r" Feb 26 22:18:19 crc 
kubenswrapper[4910]: I0226 22:18:19.948530 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-799bb5d856-9h5p7" Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.952683 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-799bb5d856-9h5p7" event={"ID":"88d3a1d1-ec96-476e-80bc-ad3784a06411","Type":"ContainerDied","Data":"2706d79aad95948e5be912ed574dedeedb568dec049d9babb7583a38a001858b"} Feb 26 22:18:19 crc kubenswrapper[4910]: I0226 22:18:19.952730 4910 scope.go:117] "RemoveContainer" containerID="5cecc1f4311b11b9736f89a01c748a8125b934e1487389492770c33be995c697" Feb 26 22:18:20 crc kubenswrapper[4910]: I0226 22:18:20.027869 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6fc88b699f-nwbd7" event={"ID":"1277dd6e-0e1b-4693-9923-a5915b981d6d","Type":"ContainerStarted","Data":"6b1358be31cc54ab65c7bde67d511652c8af790feb239c159f7d99a20faddc2d"} Feb 26 22:18:20 crc kubenswrapper[4910]: I0226 22:18:20.028324 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 22:18:20 crc kubenswrapper[4910]: I0226 22:18:20.028389 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 22:18:20 crc kubenswrapper[4910]: I0226 22:18:20.116403 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6fc88b699f-nwbd7" podStartSLOduration=3.116378945 podStartE2EDuration="3.116378945s" podCreationTimestamp="2026-02-26 22:18:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:18:20.072312235 +0000 UTC m=+1385.151802776" watchObservedRunningTime="2026-02-26 22:18:20.116378945 +0000 UTC m=+1385.195869486" Feb 26 22:18:20 crc kubenswrapper[4910]: I0226 22:18:20.118238 4910 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/nova-api-db-create-r8zdl"] Feb 26 22:18:20 crc kubenswrapper[4910]: I0226 22:18:20.151241 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-799bb5d856-9h5p7"] Feb 26 22:18:20 crc kubenswrapper[4910]: I0226 22:18:20.155323 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-361a-account-create-update-c844r" Feb 26 22:18:20 crc kubenswrapper[4910]: I0226 22:18:20.164203 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-799bb5d856-9h5p7"] Feb 26 22:18:20 crc kubenswrapper[4910]: I0226 22:18:20.234071 4910 scope.go:117] "RemoveContainer" containerID="5bea8fd8f5c64b550c53ab54de57b8f9670434af453503f202f06a0bd833920c" Feb 26 22:18:20 crc kubenswrapper[4910]: I0226 22:18:20.467763 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8cae-account-create-update-48ftl"] Feb 26 22:18:20 crc kubenswrapper[4910]: W0226 22:18:20.473501 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod354e979d_961b_468d_bd22_d3779ddf79e7.slice/crio-bdefefdc3f4fe2ed524f0e07359929b08cac65ef01405392e2f6f2ff132c7e38 WatchSource:0}: Error finding container bdefefdc3f4fe2ed524f0e07359929b08cac65ef01405392e2f6f2ff132c7e38: Status 404 returned error can't find the container with id bdefefdc3f4fe2ed524f0e07359929b08cac65ef01405392e2f6f2ff132c7e38 Feb 26 22:18:20 crc kubenswrapper[4910]: I0226 22:18:20.648570 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-7tdc4"] Feb 26 22:18:20 crc kubenswrapper[4910]: W0226 22:18:20.660343 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0003bc46_cac8_43e2_af3d_8018cfd4ab2d.slice/crio-cac7687e0bcc996b6ed0053191291ab400ec508fba1962b0f6eae7882153e2c4 WatchSource:0}: Error finding container 
cac7687e0bcc996b6ed0053191291ab400ec508fba1962b0f6eae7882153e2c4: Status 404 returned error can't find the container with id cac7687e0bcc996b6ed0053191291ab400ec508fba1962b0f6eae7882153e2c4 Feb 26 22:18:20 crc kubenswrapper[4910]: I0226 22:18:20.888613 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-09b2-account-create-update-dslwp"] Feb 26 22:18:20 crc kubenswrapper[4910]: W0226 22:18:20.898320 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28366851_2ead_459f_a372_eb59160e674a.slice/crio-9b185fce6c4d99c0dd1b4a5ba4b6f24f1b25a7028a76b9053de90b98e993f8c5 WatchSource:0}: Error finding container 9b185fce6c4d99c0dd1b4a5ba4b6f24f1b25a7028a76b9053de90b98e993f8c5: Status 404 returned error can't find the container with id 9b185fce6c4d99c0dd1b4a5ba4b6f24f1b25a7028a76b9053de90b98e993f8c5 Feb 26 22:18:20 crc kubenswrapper[4910]: I0226 22:18:20.969327 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.045344 4910 generic.go:334] "Generic (PLEG): container finished" podID="10476c3d-2ccd-4ffd-9bef-9b6d82b4316a" containerID="541516b7089210b60a48c826c48931b808236dc1f4303461c52573fd4af61f92" exitCode=137 Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.045400 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a","Type":"ContainerDied","Data":"541516b7089210b60a48c826c48931b808236dc1f4303461c52573fd4af61f92"} Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.045428 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a","Type":"ContainerDied","Data":"191c1723a14881e5feb5e81be2b31fdd1163482e930660f7b7051c25fb03f0ec"} Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.045445 4910 scope.go:117] "RemoveContainer" containerID="541516b7089210b60a48c826c48931b808236dc1f4303461c52573fd4af61f92" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.045613 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.069113 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8cae-account-create-update-48ftl" event={"ID":"354e979d-961b-468d-bd22-d3779ddf79e7","Type":"ContainerStarted","Data":"19dda5e010d2d61595446b6b0193bc3d9a4506c7feafcb0975dd07191a5637bf"} Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.069155 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8cae-account-create-update-48ftl" event={"ID":"354e979d-961b-468d-bd22-d3779ddf79e7","Type":"ContainerStarted","Data":"bdefefdc3f4fe2ed524f0e07359929b08cac65ef01405392e2f6f2ff132c7e38"} Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.071809 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-logs\") pod \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\" (UID: \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\") " Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.071859 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdsfl\" (UniqueName: \"kubernetes.io/projected/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-kube-api-access-jdsfl\") pod \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\" (UID: \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\") " Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.071929 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-config-data-custom\") pod \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\" (UID: \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\") " Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.072026 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-scripts\") pod \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\" (UID: \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\") " Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.072046 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-combined-ca-bundle\") pod \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\" (UID: \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\") " Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.072194 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-etc-machine-id\") pod \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\" (UID: \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\") " Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.072266 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-config-data\") pod \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\" (UID: \"10476c3d-2ccd-4ffd-9bef-9b6d82b4316a\") " Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.073795 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "10476c3d-2ccd-4ffd-9bef-9b6d82b4316a" (UID: "10476c3d-2ccd-4ffd-9bef-9b6d82b4316a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.073979 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-logs" (OuterVolumeSpecName: "logs") pod "10476c3d-2ccd-4ffd-9bef-9b6d82b4316a" (UID: "10476c3d-2ccd-4ffd-9bef-9b6d82b4316a"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.080409 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "10476c3d-2ccd-4ffd-9bef-9b6d82b4316a" (UID: "10476c3d-2ccd-4ffd-9bef-9b6d82b4316a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.084839 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-scripts" (OuterVolumeSpecName: "scripts") pod "10476c3d-2ccd-4ffd-9bef-9b6d82b4316a" (UID: "10476c3d-2ccd-4ffd-9bef-9b6d82b4316a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.087713 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-kube-api-access-jdsfl" (OuterVolumeSpecName: "kube-api-access-jdsfl") pod "10476c3d-2ccd-4ffd-9bef-9b6d82b4316a" (UID: "10476c3d-2ccd-4ffd-9bef-9b6d82b4316a"). InnerVolumeSpecName "kube-api-access-jdsfl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.107307 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-09b2-account-create-update-dslwp" event={"ID":"28366851-2ead-459f-a372-eb59160e674a","Type":"ContainerStarted","Data":"9b185fce6c4d99c0dd1b4a5ba4b6f24f1b25a7028a76b9053de90b98e993f8c5"} Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.112127 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-8cae-account-create-update-48ftl" podStartSLOduration=2.112106091 podStartE2EDuration="2.112106091s" podCreationTimestamp="2026-02-26 22:18:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:18:21.09301855 +0000 UTC m=+1386.172509081" watchObservedRunningTime="2026-02-26 22:18:21.112106091 +0000 UTC m=+1386.191596642" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.117534 4910 scope.go:117] "RemoveContainer" containerID="05270efdc25c9c6e59de40613af4951e65f50a4d07ad7586911f579a6b687217" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.122555 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10476c3d-2ccd-4ffd-9bef-9b6d82b4316a" (UID: "10476c3d-2ccd-4ffd-9bef-9b6d82b4316a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.123777 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-r8zdl" event={"ID":"316f1fec-ce34-4d56-81c2-2500efe83251","Type":"ContainerStarted","Data":"04584fc1579d7d9f0eff0239465a96942c1774112a9abd7c8644a2a0c2debb4c"} Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.123828 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-r8zdl" event={"ID":"316f1fec-ce34-4d56-81c2-2500efe83251","Type":"ContainerStarted","Data":"45f63fb2fd016a5a19867837c2c6f2f4ff9e07c25aeff21228277dc7b18dba65"} Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.148673 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7tdc4" event={"ID":"0003bc46-cac8-43e2-af3d-8018cfd4ab2d","Type":"ContainerStarted","Data":"759f7ee2eb99817164d26f3e0cfdc42e43888121aea7bce3e2319bea0d130c4f"} Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.148712 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7tdc4" event={"ID":"0003bc46-cac8-43e2-af3d-8018cfd4ab2d","Type":"ContainerStarted","Data":"cac7687e0bcc996b6ed0053191291ab400ec508fba1962b0f6eae7882153e2c4"} Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.175136 4910 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-logs\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.175182 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdsfl\" (UniqueName: \"kubernetes.io/projected/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-kube-api-access-jdsfl\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.175194 4910 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.175204 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.175212 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.175219 4910 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.177473 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-7tdc4" podStartSLOduration=2.177453942 podStartE2EDuration="2.177453942s" podCreationTimestamp="2026-02-26 22:18:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:18:21.16563481 +0000 UTC m=+1386.245125351" watchObservedRunningTime="2026-02-26 22:18:21.177453942 +0000 UTC m=+1386.256944483" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.184403 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-config-data" (OuterVolumeSpecName: "config-data") pod "10476c3d-2ccd-4ffd-9bef-9b6d82b4316a" (UID: "10476c3d-2ccd-4ffd-9bef-9b6d82b4316a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.276446 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.288709 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-361a-account-create-update-c844r"] Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.320876 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6vgtc"] Feb 26 22:18:21 crc kubenswrapper[4910]: W0226 22:18:21.363273 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded5b19d3_3822_4ad6_bece_dae55bdafa17.slice/crio-b4b68963fcb811e0e7efe12064b08063107bd9f28cb57fcaf8a4b6a8f9d2dfcf WatchSource:0}: Error finding container b4b68963fcb811e0e7efe12064b08063107bd9f28cb57fcaf8a4b6a8f9d2dfcf: Status 404 returned error can't find the container with id b4b68963fcb811e0e7efe12064b08063107bd9f28cb57fcaf8a4b6a8f9d2dfcf Feb 26 22:18:21 crc kubenswrapper[4910]: W0226 22:18:21.379729 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bee5f61_2821_472a_806f_2474c1171a27.slice/crio-e6d16474dfcf917e3e1d0a69d5d81cdf67d1fda088a766375b63269494094151 WatchSource:0}: Error finding container e6d16474dfcf917e3e1d0a69d5d81cdf67d1fda088a766375b63269494094151: Status 404 returned error can't find the container with id e6d16474dfcf917e3e1d0a69d5d81cdf67d1fda088a766375b63269494094151 Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.433232 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.461037 4910 scope.go:117] "RemoveContainer" 
containerID="541516b7089210b60a48c826c48931b808236dc1f4303461c52573fd4af61f92" Feb 26 22:18:21 crc kubenswrapper[4910]: E0226 22:18:21.463283 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"541516b7089210b60a48c826c48931b808236dc1f4303461c52573fd4af61f92\": container with ID starting with 541516b7089210b60a48c826c48931b808236dc1f4303461c52573fd4af61f92 not found: ID does not exist" containerID="541516b7089210b60a48c826c48931b808236dc1f4303461c52573fd4af61f92" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.463318 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"541516b7089210b60a48c826c48931b808236dc1f4303461c52573fd4af61f92"} err="failed to get container status \"541516b7089210b60a48c826c48931b808236dc1f4303461c52573fd4af61f92\": rpc error: code = NotFound desc = could not find container \"541516b7089210b60a48c826c48931b808236dc1f4303461c52573fd4af61f92\": container with ID starting with 541516b7089210b60a48c826c48931b808236dc1f4303461c52573fd4af61f92 not found: ID does not exist" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.463343 4910 scope.go:117] "RemoveContainer" containerID="05270efdc25c9c6e59de40613af4951e65f50a4d07ad7586911f579a6b687217" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.470065 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 26 22:18:21 crc kubenswrapper[4910]: E0226 22:18:21.474345 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05270efdc25c9c6e59de40613af4951e65f50a4d07ad7586911f579a6b687217\": container with ID starting with 05270efdc25c9c6e59de40613af4951e65f50a4d07ad7586911f579a6b687217 not found: ID does not exist" containerID="05270efdc25c9c6e59de40613af4951e65f50a4d07ad7586911f579a6b687217" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.474420 4910 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05270efdc25c9c6e59de40613af4951e65f50a4d07ad7586911f579a6b687217"} err="failed to get container status \"05270efdc25c9c6e59de40613af4951e65f50a4d07ad7586911f579a6b687217\": rpc error: code = NotFound desc = could not find container \"05270efdc25c9c6e59de40613af4951e65f50a4d07ad7586911f579a6b687217\": container with ID starting with 05270efdc25c9c6e59de40613af4951e65f50a4d07ad7586911f579a6b687217 not found: ID does not exist" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.588310 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 26 22:18:21 crc kubenswrapper[4910]: E0226 22:18:21.589370 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10476c3d-2ccd-4ffd-9bef-9b6d82b4316a" containerName="cinder-api-log" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.589393 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="10476c3d-2ccd-4ffd-9bef-9b6d82b4316a" containerName="cinder-api-log" Feb 26 22:18:21 crc kubenswrapper[4910]: E0226 22:18:21.589418 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10476c3d-2ccd-4ffd-9bef-9b6d82b4316a" containerName="cinder-api" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.589425 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="10476c3d-2ccd-4ffd-9bef-9b6d82b4316a" containerName="cinder-api" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.589824 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="10476c3d-2ccd-4ffd-9bef-9b6d82b4316a" containerName="cinder-api" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.589862 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="10476c3d-2ccd-4ffd-9bef-9b6d82b4316a" containerName="cinder-api-log" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.604418 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.610665 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.622660 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.623770 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.623838 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.730532 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjwnm\" (UniqueName: \"kubernetes.io/projected/3f15d8ce-24ee-4d25-a0d8-b9c659220644-kube-api-access-gjwnm\") pod \"cinder-api-0\" (UID: \"3f15d8ce-24ee-4d25-a0d8-b9c659220644\") " pod="openstack/cinder-api-0" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.730800 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f15d8ce-24ee-4d25-a0d8-b9c659220644-config-data\") pod \"cinder-api-0\" (UID: \"3f15d8ce-24ee-4d25-a0d8-b9c659220644\") " pod="openstack/cinder-api-0" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.730825 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f15d8ce-24ee-4d25-a0d8-b9c659220644-scripts\") pod \"cinder-api-0\" (UID: \"3f15d8ce-24ee-4d25-a0d8-b9c659220644\") " pod="openstack/cinder-api-0" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.730896 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f15d8ce-24ee-4d25-a0d8-b9c659220644-config-data-custom\") pod \"cinder-api-0\" (UID: \"3f15d8ce-24ee-4d25-a0d8-b9c659220644\") " pod="openstack/cinder-api-0" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.730913 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f15d8ce-24ee-4d25-a0d8-b9c659220644-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3f15d8ce-24ee-4d25-a0d8-b9c659220644\") " pod="openstack/cinder-api-0" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.730930 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f15d8ce-24ee-4d25-a0d8-b9c659220644-logs\") pod \"cinder-api-0\" (UID: \"3f15d8ce-24ee-4d25-a0d8-b9c659220644\") " pod="openstack/cinder-api-0" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.730957 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f15d8ce-24ee-4d25-a0d8-b9c659220644-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3f15d8ce-24ee-4d25-a0d8-b9c659220644\") " pod="openstack/cinder-api-0" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.730998 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f15d8ce-24ee-4d25-a0d8-b9c659220644-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3f15d8ce-24ee-4d25-a0d8-b9c659220644\") " pod="openstack/cinder-api-0" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.731069 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f15d8ce-24ee-4d25-a0d8-b9c659220644-etc-machine-id\") pod 
\"cinder-api-0\" (UID: \"3f15d8ce-24ee-4d25-a0d8-b9c659220644\") " pod="openstack/cinder-api-0" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.835370 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f15d8ce-24ee-4d25-a0d8-b9c659220644-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3f15d8ce-24ee-4d25-a0d8-b9c659220644\") " pod="openstack/cinder-api-0" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.835485 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f15d8ce-24ee-4d25-a0d8-b9c659220644-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3f15d8ce-24ee-4d25-a0d8-b9c659220644\") " pod="openstack/cinder-api-0" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.835637 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f15d8ce-24ee-4d25-a0d8-b9c659220644-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3f15d8ce-24ee-4d25-a0d8-b9c659220644\") " pod="openstack/cinder-api-0" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.835720 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjwnm\" (UniqueName: \"kubernetes.io/projected/3f15d8ce-24ee-4d25-a0d8-b9c659220644-kube-api-access-gjwnm\") pod \"cinder-api-0\" (UID: \"3f15d8ce-24ee-4d25-a0d8-b9c659220644\") " pod="openstack/cinder-api-0" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.835756 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f15d8ce-24ee-4d25-a0d8-b9c659220644-config-data\") pod \"cinder-api-0\" (UID: \"3f15d8ce-24ee-4d25-a0d8-b9c659220644\") " pod="openstack/cinder-api-0" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.835793 4910 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f15d8ce-24ee-4d25-a0d8-b9c659220644-scripts\") pod \"cinder-api-0\" (UID: \"3f15d8ce-24ee-4d25-a0d8-b9c659220644\") " pod="openstack/cinder-api-0" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.835885 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f15d8ce-24ee-4d25-a0d8-b9c659220644-config-data-custom\") pod \"cinder-api-0\" (UID: \"3f15d8ce-24ee-4d25-a0d8-b9c659220644\") " pod="openstack/cinder-api-0" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.835910 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f15d8ce-24ee-4d25-a0d8-b9c659220644-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3f15d8ce-24ee-4d25-a0d8-b9c659220644\") " pod="openstack/cinder-api-0" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.835932 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f15d8ce-24ee-4d25-a0d8-b9c659220644-logs\") pod \"cinder-api-0\" (UID: \"3f15d8ce-24ee-4d25-a0d8-b9c659220644\") " pod="openstack/cinder-api-0" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.836373 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f15d8ce-24ee-4d25-a0d8-b9c659220644-logs\") pod \"cinder-api-0\" (UID: \"3f15d8ce-24ee-4d25-a0d8-b9c659220644\") " pod="openstack/cinder-api-0" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.836862 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f15d8ce-24ee-4d25-a0d8-b9c659220644-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3f15d8ce-24ee-4d25-a0d8-b9c659220644\") " pod="openstack/cinder-api-0" Feb 26 
22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.842204 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f15d8ce-24ee-4d25-a0d8-b9c659220644-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3f15d8ce-24ee-4d25-a0d8-b9c659220644\") " pod="openstack/cinder-api-0" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.843433 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f15d8ce-24ee-4d25-a0d8-b9c659220644-config-data\") pod \"cinder-api-0\" (UID: \"3f15d8ce-24ee-4d25-a0d8-b9c659220644\") " pod="openstack/cinder-api-0" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.844887 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f15d8ce-24ee-4d25-a0d8-b9c659220644-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3f15d8ce-24ee-4d25-a0d8-b9c659220644\") " pod="openstack/cinder-api-0" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.844984 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f15d8ce-24ee-4d25-a0d8-b9c659220644-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3f15d8ce-24ee-4d25-a0d8-b9c659220644\") " pod="openstack/cinder-api-0" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.846438 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f15d8ce-24ee-4d25-a0d8-b9c659220644-config-data-custom\") pod \"cinder-api-0\" (UID: \"3f15d8ce-24ee-4d25-a0d8-b9c659220644\") " pod="openstack/cinder-api-0" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.848874 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f15d8ce-24ee-4d25-a0d8-b9c659220644-scripts\") pod \"cinder-api-0\" 
(UID: \"3f15d8ce-24ee-4d25-a0d8-b9c659220644\") " pod="openstack/cinder-api-0" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.866338 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjwnm\" (UniqueName: \"kubernetes.io/projected/3f15d8ce-24ee-4d25-a0d8-b9c659220644-kube-api-access-gjwnm\") pod \"cinder-api-0\" (UID: \"3f15d8ce-24ee-4d25-a0d8-b9c659220644\") " pod="openstack/cinder-api-0" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.918061 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10476c3d-2ccd-4ffd-9bef-9b6d82b4316a" path="/var/lib/kubelet/pods/10476c3d-2ccd-4ffd-9bef-9b6d82b4316a/volumes" Feb 26 22:18:21 crc kubenswrapper[4910]: I0226 22:18:21.919185 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88d3a1d1-ec96-476e-80bc-ad3784a06411" path="/var/lib/kubelet/pods/88d3a1d1-ec96-476e-80bc-ad3784a06411/volumes" Feb 26 22:18:22 crc kubenswrapper[4910]: I0226 22:18:22.020793 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 26 22:18:22 crc kubenswrapper[4910]: I0226 22:18:22.172347 4910 generic.go:334] "Generic (PLEG): container finished" podID="ed5b19d3-3822-4ad6-bece-dae55bdafa17" containerID="cff52b7f3641cd791bf33a32cc3edaeede0c73e08a41cbffdd8ff855d4a79f75" exitCode=0 Feb 26 22:18:22 crc kubenswrapper[4910]: I0226 22:18:22.173005 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6vgtc" event={"ID":"ed5b19d3-3822-4ad6-bece-dae55bdafa17","Type":"ContainerDied","Data":"cff52b7f3641cd791bf33a32cc3edaeede0c73e08a41cbffdd8ff855d4a79f75"} Feb 26 22:18:22 crc kubenswrapper[4910]: I0226 22:18:22.173032 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6vgtc" event={"ID":"ed5b19d3-3822-4ad6-bece-dae55bdafa17","Type":"ContainerStarted","Data":"b4b68963fcb811e0e7efe12064b08063107bd9f28cb57fcaf8a4b6a8f9d2dfcf"} Feb 26 22:18:22 crc kubenswrapper[4910]: I0226 22:18:22.174356 4910 generic.go:334] "Generic (PLEG): container finished" podID="3bee5f61-2821-472a-806f-2474c1171a27" containerID="fd38f207fbee22a39fa98c596b78d42b75115d5ee133b0aeedf14aa4c2464cd3" exitCode=0 Feb 26 22:18:22 crc kubenswrapper[4910]: I0226 22:18:22.174411 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-361a-account-create-update-c844r" event={"ID":"3bee5f61-2821-472a-806f-2474c1171a27","Type":"ContainerDied","Data":"fd38f207fbee22a39fa98c596b78d42b75115d5ee133b0aeedf14aa4c2464cd3"} Feb 26 22:18:22 crc kubenswrapper[4910]: I0226 22:18:22.174437 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-361a-account-create-update-c844r" event={"ID":"3bee5f61-2821-472a-806f-2474c1171a27","Type":"ContainerStarted","Data":"e6d16474dfcf917e3e1d0a69d5d81cdf67d1fda088a766375b63269494094151"} Feb 26 22:18:22 crc kubenswrapper[4910]: I0226 22:18:22.175498 4910 generic.go:334] "Generic (PLEG): container finished" 
podID="28366851-2ead-459f-a372-eb59160e674a" containerID="19bed303ebb4ecb19af5c18cb7029856e4ab71817c80fa9be6c0ddbd504edf8c" exitCode=0 Feb 26 22:18:22 crc kubenswrapper[4910]: I0226 22:18:22.175531 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-09b2-account-create-update-dslwp" event={"ID":"28366851-2ead-459f-a372-eb59160e674a","Type":"ContainerDied","Data":"19bed303ebb4ecb19af5c18cb7029856e4ab71817c80fa9be6c0ddbd504edf8c"} Feb 26 22:18:22 crc kubenswrapper[4910]: I0226 22:18:22.176584 4910 generic.go:334] "Generic (PLEG): container finished" podID="316f1fec-ce34-4d56-81c2-2500efe83251" containerID="04584fc1579d7d9f0eff0239465a96942c1774112a9abd7c8644a2a0c2debb4c" exitCode=0 Feb 26 22:18:22 crc kubenswrapper[4910]: I0226 22:18:22.176617 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-r8zdl" event={"ID":"316f1fec-ce34-4d56-81c2-2500efe83251","Type":"ContainerDied","Data":"04584fc1579d7d9f0eff0239465a96942c1774112a9abd7c8644a2a0c2debb4c"} Feb 26 22:18:22 crc kubenswrapper[4910]: I0226 22:18:22.187334 4910 generic.go:334] "Generic (PLEG): container finished" podID="0003bc46-cac8-43e2-af3d-8018cfd4ab2d" containerID="759f7ee2eb99817164d26f3e0cfdc42e43888121aea7bce3e2319bea0d130c4f" exitCode=0 Feb 26 22:18:22 crc kubenswrapper[4910]: I0226 22:18:22.187394 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7tdc4" event={"ID":"0003bc46-cac8-43e2-af3d-8018cfd4ab2d","Type":"ContainerDied","Data":"759f7ee2eb99817164d26f3e0cfdc42e43888121aea7bce3e2319bea0d130c4f"} Feb 26 22:18:22 crc kubenswrapper[4910]: I0226 22:18:22.212683 4910 generic.go:334] "Generic (PLEG): container finished" podID="354e979d-961b-468d-bd22-d3779ddf79e7" containerID="19dda5e010d2d61595446b6b0193bc3d9a4506c7feafcb0975dd07191a5637bf" exitCode=0 Feb 26 22:18:22 crc kubenswrapper[4910]: I0226 22:18:22.212723 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-8cae-account-create-update-48ftl" event={"ID":"354e979d-961b-468d-bd22-d3779ddf79e7","Type":"ContainerDied","Data":"19dda5e010d2d61595446b6b0193bc3d9a4506c7feafcb0975dd07191a5637bf"} Feb 26 22:18:22 crc kubenswrapper[4910]: I0226 22:18:22.479386 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 26 22:18:22 crc kubenswrapper[4910]: W0226 22:18:22.481597 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f15d8ce_24ee_4d25_a0d8_b9c659220644.slice/crio-f0e3b0b4afc5993e59272502e58bded4f91519d28857443838c4baf706d7f3b1 WatchSource:0}: Error finding container f0e3b0b4afc5993e59272502e58bded4f91519d28857443838c4baf706d7f3b1: Status 404 returned error can't find the container with id f0e3b0b4afc5993e59272502e58bded4f91519d28857443838c4baf706d7f3b1 Feb 26 22:18:22 crc kubenswrapper[4910]: I0226 22:18:22.618503 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-r8zdl" Feb 26 22:18:22 crc kubenswrapper[4910]: I0226 22:18:22.761918 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/316f1fec-ce34-4d56-81c2-2500efe83251-operator-scripts\") pod \"316f1fec-ce34-4d56-81c2-2500efe83251\" (UID: \"316f1fec-ce34-4d56-81c2-2500efe83251\") " Feb 26 22:18:22 crc kubenswrapper[4910]: I0226 22:18:22.761983 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfqdz\" (UniqueName: \"kubernetes.io/projected/316f1fec-ce34-4d56-81c2-2500efe83251-kube-api-access-zfqdz\") pod \"316f1fec-ce34-4d56-81c2-2500efe83251\" (UID: \"316f1fec-ce34-4d56-81c2-2500efe83251\") " Feb 26 22:18:22 crc kubenswrapper[4910]: I0226 22:18:22.762802 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/316f1fec-ce34-4d56-81c2-2500efe83251-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "316f1fec-ce34-4d56-81c2-2500efe83251" (UID: "316f1fec-ce34-4d56-81c2-2500efe83251"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:18:22 crc kubenswrapper[4910]: I0226 22:18:22.763085 4910 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/316f1fec-ce34-4d56-81c2-2500efe83251-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:22 crc kubenswrapper[4910]: I0226 22:18:22.768471 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/316f1fec-ce34-4d56-81c2-2500efe83251-kube-api-access-zfqdz" (OuterVolumeSpecName: "kube-api-access-zfqdz") pod "316f1fec-ce34-4d56-81c2-2500efe83251" (UID: "316f1fec-ce34-4d56-81c2-2500efe83251"). InnerVolumeSpecName "kube-api-access-zfqdz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:18:22 crc kubenswrapper[4910]: I0226 22:18:22.865368 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfqdz\" (UniqueName: \"kubernetes.io/projected/316f1fec-ce34-4d56-81c2-2500efe83251-kube-api-access-zfqdz\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:23 crc kubenswrapper[4910]: I0226 22:18:23.226065 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-r8zdl" event={"ID":"316f1fec-ce34-4d56-81c2-2500efe83251","Type":"ContainerDied","Data":"45f63fb2fd016a5a19867837c2c6f2f4ff9e07c25aeff21228277dc7b18dba65"} Feb 26 22:18:23 crc kubenswrapper[4910]: I0226 22:18:23.226099 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45f63fb2fd016a5a19867837c2c6f2f4ff9e07c25aeff21228277dc7b18dba65" Feb 26 22:18:23 crc kubenswrapper[4910]: I0226 22:18:23.226184 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-r8zdl" Feb 26 22:18:23 crc kubenswrapper[4910]: I0226 22:18:23.239145 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3f15d8ce-24ee-4d25-a0d8-b9c659220644","Type":"ContainerStarted","Data":"4c4de5a127794b6d3d9d02d2d80270038ab0bf3881e87e3cfae7bf1a000fc559"} Feb 26 22:18:23 crc kubenswrapper[4910]: I0226 22:18:23.239491 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3f15d8ce-24ee-4d25-a0d8-b9c659220644","Type":"ContainerStarted","Data":"f0e3b0b4afc5993e59272502e58bded4f91519d28857443838c4baf706d7f3b1"} Feb 26 22:18:26 crc kubenswrapper[4910]: E0226 22:18:26.972149 4910 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdfbe459_2ae5_4d85_9d94_7aeb0c845ead.slice\": RecentStats: unable to find data in memory cache]" Feb 26 
22:18:27 crc kubenswrapper[4910]: I0226 22:18:27.572730 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 22:18:27 crc kubenswrapper[4910]: I0226 22:18:27.573259 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6fc88b699f-nwbd7" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.321934 4910 generic.go:334] "Generic (PLEG): container finished" podID="1959169a-37cd-4aa3-9cf4-cbbdc99dde4f" containerID="503a71679dbd7ced77c8db96b64cd96bb70a28b8ac8be1845a36025b796726eb" exitCode=137 Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.322245 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f","Type":"ContainerDied","Data":"503a71679dbd7ced77c8db96b64cd96bb70a28b8ac8be1845a36025b796726eb"} Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.562234 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8cae-account-create-update-48ftl" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.572410 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-09b2-account-create-update-dslwp" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.599775 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6vgtc" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.612783 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7tdc4" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.618783 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-361a-account-create-update-c844r" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.750501 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z6h5\" (UniqueName: \"kubernetes.io/projected/0003bc46-cac8-43e2-af3d-8018cfd4ab2d-kube-api-access-4z6h5\") pod \"0003bc46-cac8-43e2-af3d-8018cfd4ab2d\" (UID: \"0003bc46-cac8-43e2-af3d-8018cfd4ab2d\") " Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.750546 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8x9g\" (UniqueName: \"kubernetes.io/projected/28366851-2ead-459f-a372-eb59160e674a-kube-api-access-d8x9g\") pod \"28366851-2ead-459f-a372-eb59160e674a\" (UID: \"28366851-2ead-459f-a372-eb59160e674a\") " Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.750623 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28366851-2ead-459f-a372-eb59160e674a-operator-scripts\") pod \"28366851-2ead-459f-a372-eb59160e674a\" (UID: \"28366851-2ead-459f-a372-eb59160e674a\") " Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.750664 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/354e979d-961b-468d-bd22-d3779ddf79e7-operator-scripts\") pod \"354e979d-961b-468d-bd22-d3779ddf79e7\" (UID: \"354e979d-961b-468d-bd22-d3779ddf79e7\") " Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.750768 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bee5f61-2821-472a-806f-2474c1171a27-operator-scripts\") pod \"3bee5f61-2821-472a-806f-2474c1171a27\" (UID: \"3bee5f61-2821-472a-806f-2474c1171a27\") " Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.750799 4910 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-szsgb\" (UniqueName: \"kubernetes.io/projected/3bee5f61-2821-472a-806f-2474c1171a27-kube-api-access-szsgb\") pod \"3bee5f61-2821-472a-806f-2474c1171a27\" (UID: \"3bee5f61-2821-472a-806f-2474c1171a27\") " Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.750824 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcx9v\" (UniqueName: \"kubernetes.io/projected/354e979d-961b-468d-bd22-d3779ddf79e7-kube-api-access-wcx9v\") pod \"354e979d-961b-468d-bd22-d3779ddf79e7\" (UID: \"354e979d-961b-468d-bd22-d3779ddf79e7\") " Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.750883 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0003bc46-cac8-43e2-af3d-8018cfd4ab2d-operator-scripts\") pod \"0003bc46-cac8-43e2-af3d-8018cfd4ab2d\" (UID: \"0003bc46-cac8-43e2-af3d-8018cfd4ab2d\") " Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.750903 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed5b19d3-3822-4ad6-bece-dae55bdafa17-operator-scripts\") pod \"ed5b19d3-3822-4ad6-bece-dae55bdafa17\" (UID: \"ed5b19d3-3822-4ad6-bece-dae55bdafa17\") " Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.750931 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngv7s\" (UniqueName: \"kubernetes.io/projected/ed5b19d3-3822-4ad6-bece-dae55bdafa17-kube-api-access-ngv7s\") pod \"ed5b19d3-3822-4ad6-bece-dae55bdafa17\" (UID: \"ed5b19d3-3822-4ad6-bece-dae55bdafa17\") " Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.752391 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28366851-2ead-459f-a372-eb59160e674a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"28366851-2ead-459f-a372-eb59160e674a" (UID: "28366851-2ead-459f-a372-eb59160e674a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.752812 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/354e979d-961b-468d-bd22-d3779ddf79e7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "354e979d-961b-468d-bd22-d3779ddf79e7" (UID: "354e979d-961b-468d-bd22-d3779ddf79e7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.753241 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bee5f61-2821-472a-806f-2474c1171a27-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3bee5f61-2821-472a-806f-2474c1171a27" (UID: "3bee5f61-2821-472a-806f-2474c1171a27"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.754989 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0003bc46-cac8-43e2-af3d-8018cfd4ab2d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0003bc46-cac8-43e2-af3d-8018cfd4ab2d" (UID: "0003bc46-cac8-43e2-af3d-8018cfd4ab2d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.755548 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed5b19d3-3822-4ad6-bece-dae55bdafa17-kube-api-access-ngv7s" (OuterVolumeSpecName: "kube-api-access-ngv7s") pod "ed5b19d3-3822-4ad6-bece-dae55bdafa17" (UID: "ed5b19d3-3822-4ad6-bece-dae55bdafa17"). InnerVolumeSpecName "kube-api-access-ngv7s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.755751 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28366851-2ead-459f-a372-eb59160e674a-kube-api-access-d8x9g" (OuterVolumeSpecName: "kube-api-access-d8x9g") pod "28366851-2ead-459f-a372-eb59160e674a" (UID: "28366851-2ead-459f-a372-eb59160e674a"). InnerVolumeSpecName "kube-api-access-d8x9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.756129 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed5b19d3-3822-4ad6-bece-dae55bdafa17-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed5b19d3-3822-4ad6-bece-dae55bdafa17" (UID: "ed5b19d3-3822-4ad6-bece-dae55bdafa17"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.757433 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bee5f61-2821-472a-806f-2474c1171a27-kube-api-access-szsgb" (OuterVolumeSpecName: "kube-api-access-szsgb") pod "3bee5f61-2821-472a-806f-2474c1171a27" (UID: "3bee5f61-2821-472a-806f-2474c1171a27"). InnerVolumeSpecName "kube-api-access-szsgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.764220 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/354e979d-961b-468d-bd22-d3779ddf79e7-kube-api-access-wcx9v" (OuterVolumeSpecName: "kube-api-access-wcx9v") pod "354e979d-961b-468d-bd22-d3779ddf79e7" (UID: "354e979d-961b-468d-bd22-d3779ddf79e7"). InnerVolumeSpecName "kube-api-access-wcx9v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.765231 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0003bc46-cac8-43e2-af3d-8018cfd4ab2d-kube-api-access-4z6h5" (OuterVolumeSpecName: "kube-api-access-4z6h5") pod "0003bc46-cac8-43e2-af3d-8018cfd4ab2d" (UID: "0003bc46-cac8-43e2-af3d-8018cfd4ab2d"). InnerVolumeSpecName "kube-api-access-4z6h5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.818532 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.853393 4910 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0003bc46-cac8-43e2-af3d-8018cfd4ab2d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.853432 4910 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed5b19d3-3822-4ad6-bece-dae55bdafa17-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.853446 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngv7s\" (UniqueName: \"kubernetes.io/projected/ed5b19d3-3822-4ad6-bece-dae55bdafa17-kube-api-access-ngv7s\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.853458 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4z6h5\" (UniqueName: \"kubernetes.io/projected/0003bc46-cac8-43e2-af3d-8018cfd4ab2d-kube-api-access-4z6h5\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.853470 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8x9g\" (UniqueName: 
\"kubernetes.io/projected/28366851-2ead-459f-a372-eb59160e674a-kube-api-access-d8x9g\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.853483 4910 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28366851-2ead-459f-a372-eb59160e674a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.853494 4910 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/354e979d-961b-468d-bd22-d3779ddf79e7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.853506 4910 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bee5f61-2821-472a-806f-2474c1171a27-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.853517 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szsgb\" (UniqueName: \"kubernetes.io/projected/3bee5f61-2821-472a-806f-2474c1171a27-kube-api-access-szsgb\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.853529 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcx9v\" (UniqueName: \"kubernetes.io/projected/354e979d-961b-468d-bd22-d3779ddf79e7-kube-api-access-wcx9v\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.954800 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-config-data\") pod \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\" (UID: \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\") " Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.954880 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-sg-core-conf-yaml\") pod \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\" (UID: \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\") " Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.954916 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-combined-ca-bundle\") pod \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\" (UID: \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\") " Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.954988 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-scripts\") pod \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\" (UID: \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\") " Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.955023 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-log-httpd\") pod \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\" (UID: \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\") " Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.955075 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7fkk\" (UniqueName: \"kubernetes.io/projected/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-kube-api-access-r7fkk\") pod \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\" (UID: \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\") " Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.955141 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-run-httpd\") pod \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\" (UID: \"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f\") " Feb 26 22:18:28 crc 
kubenswrapper[4910]: I0226 22:18:28.955794 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1959169a-37cd-4aa3-9cf4-cbbdc99dde4f" (UID: "1959169a-37cd-4aa3-9cf4-cbbdc99dde4f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.955794 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1959169a-37cd-4aa3-9cf4-cbbdc99dde4f" (UID: "1959169a-37cd-4aa3-9cf4-cbbdc99dde4f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.961375 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-kube-api-access-r7fkk" (OuterVolumeSpecName: "kube-api-access-r7fkk") pod "1959169a-37cd-4aa3-9cf4-cbbdc99dde4f" (UID: "1959169a-37cd-4aa3-9cf4-cbbdc99dde4f"). InnerVolumeSpecName "kube-api-access-r7fkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.963245 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-scripts" (OuterVolumeSpecName: "scripts") pod "1959169a-37cd-4aa3-9cf4-cbbdc99dde4f" (UID: "1959169a-37cd-4aa3-9cf4-cbbdc99dde4f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:28 crc kubenswrapper[4910]: I0226 22:18:28.982802 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1959169a-37cd-4aa3-9cf4-cbbdc99dde4f" (UID: "1959169a-37cd-4aa3-9cf4-cbbdc99dde4f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.058023 4910 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.058049 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.058058 4910 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.058067 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7fkk\" (UniqueName: \"kubernetes.io/projected/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-kube-api-access-r7fkk\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.058077 4910 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.081445 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1959169a-37cd-4aa3-9cf4-cbbdc99dde4f" (UID: "1959169a-37cd-4aa3-9cf4-cbbdc99dde4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.105392 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-config-data" (OuterVolumeSpecName: "config-data") pod "1959169a-37cd-4aa3-9cf4-cbbdc99dde4f" (UID: "1959169a-37cd-4aa3-9cf4-cbbdc99dde4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.160329 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.160363 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.334307 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3f15d8ce-24ee-4d25-a0d8-b9c659220644","Type":"ContainerStarted","Data":"29b428c953c18c02e331748a17125dda90c74da7727fd74d5233c87dc3e6bc01"} Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.335510 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.337042 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8cae-account-create-update-48ftl" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.337040 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8cae-account-create-update-48ftl" event={"ID":"354e979d-961b-468d-bd22-d3779ddf79e7","Type":"ContainerDied","Data":"bdefefdc3f4fe2ed524f0e07359929b08cac65ef01405392e2f6f2ff132c7e38"} Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.337144 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdefefdc3f4fe2ed524f0e07359929b08cac65ef01405392e2f6f2ff132c7e38" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.338869 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6vgtc" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.339802 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6vgtc" event={"ID":"ed5b19d3-3822-4ad6-bece-dae55bdafa17","Type":"ContainerDied","Data":"b4b68963fcb811e0e7efe12064b08063107bd9f28cb57fcaf8a4b6a8f9d2dfcf"} Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.339977 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4b68963fcb811e0e7efe12064b08063107bd9f28cb57fcaf8a4b6a8f9d2dfcf" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.341316 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"71b02cae-fc90-4a97-967d-9a539d5ab671","Type":"ContainerStarted","Data":"4771f000ce7b70d7dca0a973acd539fe42d3fae80fb7803b152147a15ab49355"} Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.344141 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-361a-account-create-update-c844r" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.344170 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-361a-account-create-update-c844r" event={"ID":"3bee5f61-2821-472a-806f-2474c1171a27","Type":"ContainerDied","Data":"e6d16474dfcf917e3e1d0a69d5d81cdf67d1fda088a766375b63269494094151"} Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.344193 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6d16474dfcf917e3e1d0a69d5d81cdf67d1fda088a766375b63269494094151" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.345533 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-09b2-account-create-update-dslwp" event={"ID":"28366851-2ead-459f-a372-eb59160e674a","Type":"ContainerDied","Data":"9b185fce6c4d99c0dd1b4a5ba4b6f24f1b25a7028a76b9053de90b98e993f8c5"} Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.345554 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b185fce6c4d99c0dd1b4a5ba4b6f24f1b25a7028a76b9053de90b98e993f8c5" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.345539 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-09b2-account-create-update-dslwp" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.349205 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1959169a-37cd-4aa3-9cf4-cbbdc99dde4f","Type":"ContainerDied","Data":"d1217d94288a2313f0e7f47672dcd1ee7008213c80c539903f93526ac05c745e"} Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.349264 4910 scope.go:117] "RemoveContainer" containerID="503a71679dbd7ced77c8db96b64cd96bb70a28b8ac8be1845a36025b796726eb" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.349475 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.352239 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7tdc4" event={"ID":"0003bc46-cac8-43e2-af3d-8018cfd4ab2d","Type":"ContainerDied","Data":"cac7687e0bcc996b6ed0053191291ab400ec508fba1962b0f6eae7882153e2c4"} Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.352272 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cac7687e0bcc996b6ed0053191291ab400ec508fba1962b0f6eae7882153e2c4" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.352326 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7tdc4" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.363243 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=8.36322454 podStartE2EDuration="8.36322454s" podCreationTimestamp="2026-02-26 22:18:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:18:29.359786256 +0000 UTC m=+1394.439276797" watchObservedRunningTime="2026-02-26 22:18:29.36322454 +0000 UTC m=+1394.442715081" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.383075 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.7345043970000003 podStartE2EDuration="17.38305183s" podCreationTimestamp="2026-02-26 22:18:12 +0000 UTC" firstStartedPulling="2026-02-26 22:18:13.789506955 +0000 UTC m=+1378.868997536" lastFinishedPulling="2026-02-26 22:18:28.438054438 +0000 UTC m=+1393.517544969" observedRunningTime="2026-02-26 22:18:29.376904242 +0000 UTC m=+1394.456394783" watchObservedRunningTime="2026-02-26 22:18:29.38305183 +0000 UTC m=+1394.462542381" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 
22:18:29.417982 4910 scope.go:117] "RemoveContainer" containerID="86ba7cfa670f936b2c22be3638236e7de871d9a239017e74755304996f2a06cd" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.444776 4910 scope.go:117] "RemoveContainer" containerID="ee50a00d864892d417a029125914b9b85eb57efea05f62fda9f836ad89dd1c9a" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.459467 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.469243 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.479198 4910 scope.go:117] "RemoveContainer" containerID="03e2c6f282e58c8aadcfe18c580736f090b601212f9feb71ba11569170b03144" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.492021 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:18:29 crc kubenswrapper[4910]: E0226 22:18:29.492569 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0003bc46-cac8-43e2-af3d-8018cfd4ab2d" containerName="mariadb-database-create" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.492592 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="0003bc46-cac8-43e2-af3d-8018cfd4ab2d" containerName="mariadb-database-create" Feb 26 22:18:29 crc kubenswrapper[4910]: E0226 22:18:29.492607 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1959169a-37cd-4aa3-9cf4-cbbdc99dde4f" containerName="sg-core" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.492615 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="1959169a-37cd-4aa3-9cf4-cbbdc99dde4f" containerName="sg-core" Feb 26 22:18:29 crc kubenswrapper[4910]: E0226 22:18:29.492626 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="316f1fec-ce34-4d56-81c2-2500efe83251" containerName="mariadb-database-create" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 
22:18:29.492632 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="316f1fec-ce34-4d56-81c2-2500efe83251" containerName="mariadb-database-create" Feb 26 22:18:29 crc kubenswrapper[4910]: E0226 22:18:29.492644 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28366851-2ead-459f-a372-eb59160e674a" containerName="mariadb-account-create-update" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.492651 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="28366851-2ead-459f-a372-eb59160e674a" containerName="mariadb-account-create-update" Feb 26 22:18:29 crc kubenswrapper[4910]: E0226 22:18:29.492665 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1959169a-37cd-4aa3-9cf4-cbbdc99dde4f" containerName="ceilometer-notification-agent" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.492672 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="1959169a-37cd-4aa3-9cf4-cbbdc99dde4f" containerName="ceilometer-notification-agent" Feb 26 22:18:29 crc kubenswrapper[4910]: E0226 22:18:29.492694 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354e979d-961b-468d-bd22-d3779ddf79e7" containerName="mariadb-account-create-update" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.492701 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="354e979d-961b-468d-bd22-d3779ddf79e7" containerName="mariadb-account-create-update" Feb 26 22:18:29 crc kubenswrapper[4910]: E0226 22:18:29.492716 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1959169a-37cd-4aa3-9cf4-cbbdc99dde4f" containerName="proxy-httpd" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.492724 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="1959169a-37cd-4aa3-9cf4-cbbdc99dde4f" containerName="proxy-httpd" Feb 26 22:18:29 crc kubenswrapper[4910]: E0226 22:18:29.492737 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bee5f61-2821-472a-806f-2474c1171a27" 
containerName="mariadb-account-create-update" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.492742 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bee5f61-2821-472a-806f-2474c1171a27" containerName="mariadb-account-create-update" Feb 26 22:18:29 crc kubenswrapper[4910]: E0226 22:18:29.492752 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed5b19d3-3822-4ad6-bece-dae55bdafa17" containerName="mariadb-database-create" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.492758 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed5b19d3-3822-4ad6-bece-dae55bdafa17" containerName="mariadb-database-create" Feb 26 22:18:29 crc kubenswrapper[4910]: E0226 22:18:29.492766 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1959169a-37cd-4aa3-9cf4-cbbdc99dde4f" containerName="ceilometer-central-agent" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.492772 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="1959169a-37cd-4aa3-9cf4-cbbdc99dde4f" containerName="ceilometer-central-agent" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.492955 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="28366851-2ead-459f-a372-eb59160e674a" containerName="mariadb-account-create-update" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.492967 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="1959169a-37cd-4aa3-9cf4-cbbdc99dde4f" containerName="proxy-httpd" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.492983 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="1959169a-37cd-4aa3-9cf4-cbbdc99dde4f" containerName="sg-core" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.492995 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="354e979d-961b-468d-bd22-d3779ddf79e7" containerName="mariadb-account-create-update" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.493009 4910 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="ed5b19d3-3822-4ad6-bece-dae55bdafa17" containerName="mariadb-database-create" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.493019 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="0003bc46-cac8-43e2-af3d-8018cfd4ab2d" containerName="mariadb-database-create" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.493026 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="1959169a-37cd-4aa3-9cf4-cbbdc99dde4f" containerName="ceilometer-notification-agent" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.493040 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bee5f61-2821-472a-806f-2474c1171a27" containerName="mariadb-account-create-update" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.493050 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="316f1fec-ce34-4d56-81c2-2500efe83251" containerName="mariadb-database-create" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.493063 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="1959169a-37cd-4aa3-9cf4-cbbdc99dde4f" containerName="ceilometer-central-agent" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.494880 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.504331 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.504496 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.512458 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.622535 4910 scope.go:117] "RemoveContainer" containerID="cb5edc064cd8aa3fa526549c52daaac569cbeda28f84ebe35b3c8ead0dc03fa0" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.673435 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss252\" (UniqueName: \"kubernetes.io/projected/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-kube-api-access-ss252\") pod \"ceilometer-0\" (UID: \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\") " pod="openstack/ceilometer-0" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.673490 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\") " pod="openstack/ceilometer-0" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.673523 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-scripts\") pod \"ceilometer-0\" (UID: \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\") " pod="openstack/ceilometer-0" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.673557 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-log-httpd\") pod \"ceilometer-0\" (UID: \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\") " pod="openstack/ceilometer-0" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.673590 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-config-data\") pod \"ceilometer-0\" (UID: \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\") " pod="openstack/ceilometer-0" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.673618 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\") " pod="openstack/ceilometer-0" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.673643 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-run-httpd\") pod \"ceilometer-0\" (UID: \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\") " pod="openstack/ceilometer-0" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.775198 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-run-httpd\") pod \"ceilometer-0\" (UID: \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\") " pod="openstack/ceilometer-0" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.775330 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss252\" (UniqueName: \"kubernetes.io/projected/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-kube-api-access-ss252\") pod \"ceilometer-0\" (UID: 
\"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\") " pod="openstack/ceilometer-0" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.775366 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\") " pod="openstack/ceilometer-0" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.775393 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-scripts\") pod \"ceilometer-0\" (UID: \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\") " pod="openstack/ceilometer-0" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.775426 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-log-httpd\") pod \"ceilometer-0\" (UID: \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\") " pod="openstack/ceilometer-0" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.775462 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-config-data\") pod \"ceilometer-0\" (UID: \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\") " pod="openstack/ceilometer-0" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.775492 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\") " pod="openstack/ceilometer-0" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.775727 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-run-httpd\") pod \"ceilometer-0\" (UID: \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\") " pod="openstack/ceilometer-0" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.775968 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-log-httpd\") pod \"ceilometer-0\" (UID: \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\") " pod="openstack/ceilometer-0" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.780630 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-scripts\") pod \"ceilometer-0\" (UID: \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\") " pod="openstack/ceilometer-0" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.783360 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\") " pod="openstack/ceilometer-0" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.783533 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\") " pod="openstack/ceilometer-0" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.786877 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-config-data\") pod \"ceilometer-0\" (UID: \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\") " pod="openstack/ceilometer-0" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.792247 4910 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ss252\" (UniqueName: \"kubernetes.io/projected/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-kube-api-access-ss252\") pod \"ceilometer-0\" (UID: \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\") " pod="openstack/ceilometer-0" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.827890 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.828954 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 22:18:29 crc kubenswrapper[4910]: I0226 22:18:29.917367 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1959169a-37cd-4aa3-9cf4-cbbdc99dde4f" path="/var/lib/kubelet/pods/1959169a-37cd-4aa3-9cf4-cbbdc99dde4f/volumes" Feb 26 22:18:30 crc kubenswrapper[4910]: I0226 22:18:30.367784 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:18:30 crc kubenswrapper[4910]: W0226 22:18:30.368565 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3620c738_9c7b_4b1d_9dd6_b369a50ca2d6.slice/crio-a1b56eb8267a4ecfb8033179ddae69910f0be1c44b1fa56c4ecfa50e56acbd08 WatchSource:0}: Error finding container a1b56eb8267a4ecfb8033179ddae69910f0be1c44b1fa56c4ecfa50e56acbd08: Status 404 returned error can't find the container with id a1b56eb8267a4ecfb8033179ddae69910f0be1c44b1fa56c4ecfa50e56acbd08 Feb 26 22:18:31 crc kubenswrapper[4910]: I0226 22:18:31.375247 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6","Type":"ContainerStarted","Data":"c6200fc65099399c9e2980ee7060b9a92ef868870fa431539c6eeaf09716640b"} Feb 26 22:18:31 crc kubenswrapper[4910]: I0226 22:18:31.375692 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6","Type":"ContainerStarted","Data":"a1b56eb8267a4ecfb8033179ddae69910f0be1c44b1fa56c4ecfa50e56acbd08"} Feb 26 22:18:32 crc kubenswrapper[4910]: I0226 22:18:32.390494 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6","Type":"ContainerStarted","Data":"28e1dcf875498bd55f118d7c729af5431f353baaa95795e3d5d8d01aef589cf7"} Feb 26 22:18:32 crc kubenswrapper[4910]: I0226 22:18:32.390741 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6","Type":"ContainerStarted","Data":"2e4900c3a4d8b99e1760178107d649dc1838ef94c73942cbc71c32aee771f5f4"} Feb 26 22:18:34 crc kubenswrapper[4910]: I0226 22:18:34.418865 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6","Type":"ContainerStarted","Data":"043d2fa57bbafaddfa27e561b3840545b6f03243cd45c4ff5289f911cfb9f4ec"} Feb 26 22:18:34 crc kubenswrapper[4910]: I0226 22:18:34.419470 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3620c738-9c7b-4b1d-9dd6-b369a50ca2d6" containerName="ceilometer-central-agent" containerID="cri-o://c6200fc65099399c9e2980ee7060b9a92ef868870fa431539c6eeaf09716640b" gracePeriod=30 Feb 26 22:18:34 crc kubenswrapper[4910]: I0226 22:18:34.419706 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 22:18:34 crc kubenswrapper[4910]: I0226 22:18:34.419985 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3620c738-9c7b-4b1d-9dd6-b369a50ca2d6" containerName="proxy-httpd" containerID="cri-o://043d2fa57bbafaddfa27e561b3840545b6f03243cd45c4ff5289f911cfb9f4ec" gracePeriod=30 Feb 26 22:18:34 crc kubenswrapper[4910]: I0226 22:18:34.420044 4910 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3620c738-9c7b-4b1d-9dd6-b369a50ca2d6" containerName="sg-core" containerID="cri-o://28e1dcf875498bd55f118d7c729af5431f353baaa95795e3d5d8d01aef589cf7" gracePeriod=30 Feb 26 22:18:34 crc kubenswrapper[4910]: I0226 22:18:34.420076 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3620c738-9c7b-4b1d-9dd6-b369a50ca2d6" containerName="ceilometer-notification-agent" containerID="cri-o://2e4900c3a4d8b99e1760178107d649dc1838ef94c73942cbc71c32aee771f5f4" gracePeriod=30 Feb 26 22:18:34 crc kubenswrapper[4910]: I0226 22:18:34.446698 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.757559901 podStartE2EDuration="5.446683013s" podCreationTimestamp="2026-02-26 22:18:29 +0000 UTC" firstStartedPulling="2026-02-26 22:18:30.371175008 +0000 UTC m=+1395.450665559" lastFinishedPulling="2026-02-26 22:18:34.06029813 +0000 UTC m=+1399.139788671" observedRunningTime="2026-02-26 22:18:34.437414931 +0000 UTC m=+1399.516905472" watchObservedRunningTime="2026-02-26 22:18:34.446683013 +0000 UTC m=+1399.526173544" Feb 26 22:18:34 crc kubenswrapper[4910]: I0226 22:18:34.852179 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-x9cfd"] Feb 26 22:18:34 crc kubenswrapper[4910]: I0226 22:18:34.853711 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-x9cfd" Feb 26 22:18:34 crc kubenswrapper[4910]: I0226 22:18:34.855658 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 26 22:18:34 crc kubenswrapper[4910]: I0226 22:18:34.855676 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-gw5l8" Feb 26 22:18:34 crc kubenswrapper[4910]: I0226 22:18:34.857121 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 26 22:18:34 crc kubenswrapper[4910]: I0226 22:18:34.868909 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-x9cfd"] Feb 26 22:18:34 crc kubenswrapper[4910]: I0226 22:18:34.994102 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffed18bb-3818-4692-927b-daa85a2ea2e9-scripts\") pod \"nova-cell0-conductor-db-sync-x9cfd\" (UID: \"ffed18bb-3818-4692-927b-daa85a2ea2e9\") " pod="openstack/nova-cell0-conductor-db-sync-x9cfd" Feb 26 22:18:34 crc kubenswrapper[4910]: I0226 22:18:34.994312 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffed18bb-3818-4692-927b-daa85a2ea2e9-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-x9cfd\" (UID: \"ffed18bb-3818-4692-927b-daa85a2ea2e9\") " pod="openstack/nova-cell0-conductor-db-sync-x9cfd" Feb 26 22:18:34 crc kubenswrapper[4910]: I0226 22:18:34.994436 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffed18bb-3818-4692-927b-daa85a2ea2e9-config-data\") pod \"nova-cell0-conductor-db-sync-x9cfd\" (UID: \"ffed18bb-3818-4692-927b-daa85a2ea2e9\") " 
pod="openstack/nova-cell0-conductor-db-sync-x9cfd" Feb 26 22:18:34 crc kubenswrapper[4910]: I0226 22:18:34.994497 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txbcm\" (UniqueName: \"kubernetes.io/projected/ffed18bb-3818-4692-927b-daa85a2ea2e9-kube-api-access-txbcm\") pod \"nova-cell0-conductor-db-sync-x9cfd\" (UID: \"ffed18bb-3818-4692-927b-daa85a2ea2e9\") " pod="openstack/nova-cell0-conductor-db-sync-x9cfd" Feb 26 22:18:35 crc kubenswrapper[4910]: I0226 22:18:35.095796 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffed18bb-3818-4692-927b-daa85a2ea2e9-scripts\") pod \"nova-cell0-conductor-db-sync-x9cfd\" (UID: \"ffed18bb-3818-4692-927b-daa85a2ea2e9\") " pod="openstack/nova-cell0-conductor-db-sync-x9cfd" Feb 26 22:18:35 crc kubenswrapper[4910]: I0226 22:18:35.095910 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffed18bb-3818-4692-927b-daa85a2ea2e9-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-x9cfd\" (UID: \"ffed18bb-3818-4692-927b-daa85a2ea2e9\") " pod="openstack/nova-cell0-conductor-db-sync-x9cfd" Feb 26 22:18:35 crc kubenswrapper[4910]: I0226 22:18:35.095943 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffed18bb-3818-4692-927b-daa85a2ea2e9-config-data\") pod \"nova-cell0-conductor-db-sync-x9cfd\" (UID: \"ffed18bb-3818-4692-927b-daa85a2ea2e9\") " pod="openstack/nova-cell0-conductor-db-sync-x9cfd" Feb 26 22:18:35 crc kubenswrapper[4910]: I0226 22:18:35.095974 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txbcm\" (UniqueName: \"kubernetes.io/projected/ffed18bb-3818-4692-927b-daa85a2ea2e9-kube-api-access-txbcm\") pod \"nova-cell0-conductor-db-sync-x9cfd\" (UID: 
\"ffed18bb-3818-4692-927b-daa85a2ea2e9\") " pod="openstack/nova-cell0-conductor-db-sync-x9cfd" Feb 26 22:18:35 crc kubenswrapper[4910]: I0226 22:18:35.101898 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffed18bb-3818-4692-927b-daa85a2ea2e9-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-x9cfd\" (UID: \"ffed18bb-3818-4692-927b-daa85a2ea2e9\") " pod="openstack/nova-cell0-conductor-db-sync-x9cfd" Feb 26 22:18:35 crc kubenswrapper[4910]: I0226 22:18:35.112672 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txbcm\" (UniqueName: \"kubernetes.io/projected/ffed18bb-3818-4692-927b-daa85a2ea2e9-kube-api-access-txbcm\") pod \"nova-cell0-conductor-db-sync-x9cfd\" (UID: \"ffed18bb-3818-4692-927b-daa85a2ea2e9\") " pod="openstack/nova-cell0-conductor-db-sync-x9cfd" Feb 26 22:18:35 crc kubenswrapper[4910]: I0226 22:18:35.112972 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffed18bb-3818-4692-927b-daa85a2ea2e9-config-data\") pod \"nova-cell0-conductor-db-sync-x9cfd\" (UID: \"ffed18bb-3818-4692-927b-daa85a2ea2e9\") " pod="openstack/nova-cell0-conductor-db-sync-x9cfd" Feb 26 22:18:35 crc kubenswrapper[4910]: I0226 22:18:35.116996 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffed18bb-3818-4692-927b-daa85a2ea2e9-scripts\") pod \"nova-cell0-conductor-db-sync-x9cfd\" (UID: \"ffed18bb-3818-4692-927b-daa85a2ea2e9\") " pod="openstack/nova-cell0-conductor-db-sync-x9cfd" Feb 26 22:18:35 crc kubenswrapper[4910]: I0226 22:18:35.194261 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-x9cfd" Feb 26 22:18:35 crc kubenswrapper[4910]: I0226 22:18:35.447481 4910 generic.go:334] "Generic (PLEG): container finished" podID="3620c738-9c7b-4b1d-9dd6-b369a50ca2d6" containerID="28e1dcf875498bd55f118d7c729af5431f353baaa95795e3d5d8d01aef589cf7" exitCode=2 Feb 26 22:18:35 crc kubenswrapper[4910]: I0226 22:18:35.447769 4910 generic.go:334] "Generic (PLEG): container finished" podID="3620c738-9c7b-4b1d-9dd6-b369a50ca2d6" containerID="2e4900c3a4d8b99e1760178107d649dc1838ef94c73942cbc71c32aee771f5f4" exitCode=0 Feb 26 22:18:35 crc kubenswrapper[4910]: I0226 22:18:35.447793 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6","Type":"ContainerDied","Data":"28e1dcf875498bd55f118d7c729af5431f353baaa95795e3d5d8d01aef589cf7"} Feb 26 22:18:35 crc kubenswrapper[4910]: I0226 22:18:35.447824 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6","Type":"ContainerDied","Data":"2e4900c3a4d8b99e1760178107d649dc1838ef94c73942cbc71c32aee771f5f4"} Feb 26 22:18:35 crc kubenswrapper[4910]: I0226 22:18:35.709736 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-x9cfd"] Feb 26 22:18:35 crc kubenswrapper[4910]: W0226 22:18:35.711509 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffed18bb_3818_4692_927b_daa85a2ea2e9.slice/crio-28225e92bd9da8817d57006c36a731d8b331a7ab86977bc7ab60fb7feb7b8e15 WatchSource:0}: Error finding container 28225e92bd9da8817d57006c36a731d8b331a7ab86977bc7ab60fb7feb7b8e15: Status 404 returned error can't find the container with id 28225e92bd9da8817d57006c36a731d8b331a7ab86977bc7ab60fb7feb7b8e15 Feb 26 22:18:36 crc kubenswrapper[4910]: I0226 22:18:36.458256 4910 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-cell0-conductor-db-sync-x9cfd" event={"ID":"ffed18bb-3818-4692-927b-daa85a2ea2e9","Type":"ContainerStarted","Data":"28225e92bd9da8817d57006c36a731d8b331a7ab86977bc7ab60fb7feb7b8e15"} Feb 26 22:18:37 crc kubenswrapper[4910]: E0226 22:18:37.211528 4910 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdfbe459_2ae5_4d85_9d94_7aeb0c845ead.slice\": RecentStats: unable to find data in memory cache]" Feb 26 22:18:38 crc kubenswrapper[4910]: I0226 22:18:38.628418 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 22:18:38 crc kubenswrapper[4910]: I0226 22:18:38.629306 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="81ca6ff2-72d9-4372-93e5-148de7e30e3c" containerName="glance-log" containerID="cri-o://5cf6e219d1b3420da9d0a257741e8ce6156dc794b80fabb8f1857d13f5884fb2" gracePeriod=30 Feb 26 22:18:38 crc kubenswrapper[4910]: I0226 22:18:38.629381 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="81ca6ff2-72d9-4372-93e5-148de7e30e3c" containerName="glance-httpd" containerID="cri-o://939f95704fd10da50c6094029ae883c6cf9d0e7232a0425edcb183c06c179e5b" gracePeriod=30 Feb 26 22:18:39 crc kubenswrapper[4910]: I0226 22:18:39.073168 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 26 22:18:39 crc kubenswrapper[4910]: I0226 22:18:39.518321 4910 generic.go:334] "Generic (PLEG): container finished" podID="81ca6ff2-72d9-4372-93e5-148de7e30e3c" containerID="5cf6e219d1b3420da9d0a257741e8ce6156dc794b80fabb8f1857d13f5884fb2" exitCode=143 Feb 26 22:18:39 crc kubenswrapper[4910]: I0226 22:18:39.518377 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"81ca6ff2-72d9-4372-93e5-148de7e30e3c","Type":"ContainerDied","Data":"5cf6e219d1b3420da9d0a257741e8ce6156dc794b80fabb8f1857d13f5884fb2"} Feb 26 22:18:40 crc kubenswrapper[4910]: I0226 22:18:40.530351 4910 generic.go:334] "Generic (PLEG): container finished" podID="3620c738-9c7b-4b1d-9dd6-b369a50ca2d6" containerID="c6200fc65099399c9e2980ee7060b9a92ef868870fa431539c6eeaf09716640b" exitCode=0 Feb 26 22:18:40 crc kubenswrapper[4910]: I0226 22:18:40.530440 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6","Type":"ContainerDied","Data":"c6200fc65099399c9e2980ee7060b9a92ef868870fa431539c6eeaf09716640b"} Feb 26 22:18:42 crc kubenswrapper[4910]: I0226 22:18:42.551718 4910 generic.go:334] "Generic (PLEG): container finished" podID="81ca6ff2-72d9-4372-93e5-148de7e30e3c" containerID="939f95704fd10da50c6094029ae883c6cf9d0e7232a0425edcb183c06c179e5b" exitCode=0 Feb 26 22:18:42 crc kubenswrapper[4910]: I0226 22:18:42.551796 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"81ca6ff2-72d9-4372-93e5-148de7e30e3c","Type":"ContainerDied","Data":"939f95704fd10da50c6094029ae883c6cf9d0e7232a0425edcb183c06c179e5b"} Feb 26 22:18:43 crc kubenswrapper[4910]: I0226 22:18:43.977897 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-c5fdc687b-67zqb" Feb 26 22:18:43 crc kubenswrapper[4910]: I0226 22:18:43.979192 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-c5fdc687b-67zqb" Feb 26 22:18:44 crc kubenswrapper[4910]: I0226 22:18:44.095669 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7b97844b46-5cn8n"] Feb 26 22:18:44 crc kubenswrapper[4910]: I0226 22:18:44.095934 4910 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/placement-7b97844b46-5cn8n" podUID="79453bea-3afe-4822-a09c-734dba08b9ef" containerName="placement-log" containerID="cri-o://b3d6d78e3eb4ce67cfa919ce10f4e9c17e4d9fdbf545fc2f3fff7e360f347e2f" gracePeriod=30 Feb 26 22:18:44 crc kubenswrapper[4910]: I0226 22:18:44.096444 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7b97844b46-5cn8n" podUID="79453bea-3afe-4822-a09c-734dba08b9ef" containerName="placement-api" containerID="cri-o://22dfb926a01ac6270a40f09d134b64e4d8e94629d9a0c16d2658a85036aaf89d" gracePeriod=30 Feb 26 22:18:44 crc kubenswrapper[4910]: I0226 22:18:44.574004 4910 generic.go:334] "Generic (PLEG): container finished" podID="79453bea-3afe-4822-a09c-734dba08b9ef" containerID="b3d6d78e3eb4ce67cfa919ce10f4e9c17e4d9fdbf545fc2f3fff7e360f347e2f" exitCode=143 Feb 26 22:18:44 crc kubenswrapper[4910]: I0226 22:18:44.574925 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7b97844b46-5cn8n" event={"ID":"79453bea-3afe-4822-a09c-734dba08b9ef","Type":"ContainerDied","Data":"b3d6d78e3eb4ce67cfa919ce10f4e9c17e4d9fdbf545fc2f3fff7e360f347e2f"} Feb 26 22:18:46 crc kubenswrapper[4910]: I0226 22:18:46.944128 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.061050 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/81ca6ff2-72d9-4372-93e5-148de7e30e3c-httpd-run\") pod \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.061234 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ca6ff2-72d9-4372-93e5-148de7e30e3c-combined-ca-bundle\") pod \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.061285 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ca6ff2-72d9-4372-93e5-148de7e30e3c-logs\") pod \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.061355 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ca6ff2-72d9-4372-93e5-148de7e30e3c-config-data\") pod \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.061495 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/81ca6ff2-72d9-4372-93e5-148de7e30e3c-internal-tls-certs\") pod \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.061643 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4295t\" (UniqueName: 
\"kubernetes.io/projected/81ca6ff2-72d9-4372-93e5-148de7e30e3c-kube-api-access-4295t\") pod \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.061715 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ca6ff2-72d9-4372-93e5-148de7e30e3c-scripts\") pod \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.061976 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\") pod \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\" (UID: \"81ca6ff2-72d9-4372-93e5-148de7e30e3c\") " Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.061998 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81ca6ff2-72d9-4372-93e5-148de7e30e3c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "81ca6ff2-72d9-4372-93e5-148de7e30e3c" (UID: "81ca6ff2-72d9-4372-93e5-148de7e30e3c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.063006 4910 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/81ca6ff2-72d9-4372-93e5-148de7e30e3c-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.064068 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81ca6ff2-72d9-4372-93e5-148de7e30e3c-logs" (OuterVolumeSpecName: "logs") pod "81ca6ff2-72d9-4372-93e5-148de7e30e3c" (UID: "81ca6ff2-72d9-4372-93e5-148de7e30e3c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.075276 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ca6ff2-72d9-4372-93e5-148de7e30e3c-scripts" (OuterVolumeSpecName: "scripts") pod "81ca6ff2-72d9-4372-93e5-148de7e30e3c" (UID: "81ca6ff2-72d9-4372-93e5-148de7e30e3c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.087403 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81ca6ff2-72d9-4372-93e5-148de7e30e3c-kube-api-access-4295t" (OuterVolumeSpecName: "kube-api-access-4295t") pod "81ca6ff2-72d9-4372-93e5-148de7e30e3c" (UID: "81ca6ff2-72d9-4372-93e5-148de7e30e3c"). InnerVolumeSpecName "kube-api-access-4295t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.105448 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0" (OuterVolumeSpecName: "glance") pod "81ca6ff2-72d9-4372-93e5-148de7e30e3c" (UID: "81ca6ff2-72d9-4372-93e5-148de7e30e3c"). InnerVolumeSpecName "pvc-2ef62248-3f7a-4c99-851b-abb253e36db0". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.139651 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ca6ff2-72d9-4372-93e5-148de7e30e3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81ca6ff2-72d9-4372-93e5-148de7e30e3c" (UID: "81ca6ff2-72d9-4372-93e5-148de7e30e3c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.166690 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ca6ff2-72d9-4372-93e5-148de7e30e3c-config-data" (OuterVolumeSpecName: "config-data") pod "81ca6ff2-72d9-4372-93e5-148de7e30e3c" (UID: "81ca6ff2-72d9-4372-93e5-148de7e30e3c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.167006 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4295t\" (UniqueName: \"kubernetes.io/projected/81ca6ff2-72d9-4372-93e5-148de7e30e3c-kube-api-access-4295t\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.167032 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ca6ff2-72d9-4372-93e5-148de7e30e3c-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.167066 4910 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\") on node \"crc\" " Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.167079 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ca6ff2-72d9-4372-93e5-148de7e30e3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.167092 4910 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ca6ff2-72d9-4372-93e5-148de7e30e3c-logs\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.210353 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/81ca6ff2-72d9-4372-93e5-148de7e30e3c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "81ca6ff2-72d9-4372-93e5-148de7e30e3c" (UID: "81ca6ff2-72d9-4372-93e5-148de7e30e3c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.268439 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ca6ff2-72d9-4372-93e5-148de7e30e3c-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.268469 4910 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/81ca6ff2-72d9-4372-93e5-148de7e30e3c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.299456 4910 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.299656 4910 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2ef62248-3f7a-4c99-851b-abb253e36db0" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0") on node "crc" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.370962 4910 reconciler_common.go:293] "Volume detached for volume \"pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:47 crc kubenswrapper[4910]: E0226 22:18:47.498730 4910 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdfbe459_2ae5_4d85_9d94_7aeb0c845ead.slice\": RecentStats: unable to find data in memory cache]" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.614468 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-x9cfd" event={"ID":"ffed18bb-3818-4692-927b-daa85a2ea2e9","Type":"ContainerStarted","Data":"17c62054dea1ce31fdacd4c749d7c20f9a45358321e2cf4cdaaa3d5b34ee0476"} Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.623186 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"81ca6ff2-72d9-4372-93e5-148de7e30e3c","Type":"ContainerDied","Data":"15c6b461083d7b4f2b57886c4f4cdc7a2767e38c8bb43ab98d456df920b230c5"} Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.623239 4910 scope.go:117] "RemoveContainer" containerID="939f95704fd10da50c6094029ae883c6cf9d0e7232a0425edcb183c06c179e5b" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.623396 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.645953 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-x9cfd" podStartSLOduration=2.592344657 podStartE2EDuration="13.645937017s" podCreationTimestamp="2026-02-26 22:18:34 +0000 UTC" firstStartedPulling="2026-02-26 22:18:35.713896359 +0000 UTC m=+1400.793386900" lastFinishedPulling="2026-02-26 22:18:46.767488719 +0000 UTC m=+1411.846979260" observedRunningTime="2026-02-26 22:18:47.629770887 +0000 UTC m=+1412.709261428" watchObservedRunningTime="2026-02-26 22:18:47.645937017 +0000 UTC m=+1412.725427558" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.669900 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.680650 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.686186 4910 scope.go:117] "RemoveContainer" containerID="5cf6e219d1b3420da9d0a257741e8ce6156dc794b80fabb8f1857d13f5884fb2" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.736836 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 22:18:47 crc kubenswrapper[4910]: E0226 22:18:47.737367 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ca6ff2-72d9-4372-93e5-148de7e30e3c" containerName="glance-httpd" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.737385 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ca6ff2-72d9-4372-93e5-148de7e30e3c" containerName="glance-httpd" Feb 26 22:18:47 crc kubenswrapper[4910]: E0226 22:18:47.737417 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ca6ff2-72d9-4372-93e5-148de7e30e3c" containerName="glance-log" Feb 26 22:18:47 crc 
kubenswrapper[4910]: I0226 22:18:47.737426 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ca6ff2-72d9-4372-93e5-148de7e30e3c" containerName="glance-log" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.737643 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ca6ff2-72d9-4372-93e5-148de7e30e3c" containerName="glance-log" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.737682 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ca6ff2-72d9-4372-93e5-148de7e30e3c" containerName="glance-httpd" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.745244 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.748751 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.748971 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.761196 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.885723 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a453142a-b867-453b-9ea3-6ad60d61e47c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a453142a-b867-453b-9ea3-6ad60d61e47c\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.887337 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a453142a-b867-453b-9ea3-6ad60d61e47c-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"a453142a-b867-453b-9ea3-6ad60d61e47c\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.887434 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkfgc\" (UniqueName: \"kubernetes.io/projected/a453142a-b867-453b-9ea3-6ad60d61e47c-kube-api-access-xkfgc\") pod \"glance-default-internal-api-0\" (UID: \"a453142a-b867-453b-9ea3-6ad60d61e47c\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.887458 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a453142a-b867-453b-9ea3-6ad60d61e47c-logs\") pod \"glance-default-internal-api-0\" (UID: \"a453142a-b867-453b-9ea3-6ad60d61e47c\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.887577 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a453142a-b867-453b-9ea3-6ad60d61e47c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a453142a-b867-453b-9ea3-6ad60d61e47c\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.887683 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a453142a-b867-453b-9ea3-6ad60d61e47c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a453142a-b867-453b-9ea3-6ad60d61e47c\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.887835 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\") pod \"glance-default-internal-api-0\" (UID: \"a453142a-b867-453b-9ea3-6ad60d61e47c\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.887945 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a453142a-b867-453b-9ea3-6ad60d61e47c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a453142a-b867-453b-9ea3-6ad60d61e47c\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.949463 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81ca6ff2-72d9-4372-93e5-148de7e30e3c" path="/var/lib/kubelet/pods/81ca6ff2-72d9-4372-93e5-148de7e30e3c/volumes" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.989452 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a453142a-b867-453b-9ea3-6ad60d61e47c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a453142a-b867-453b-9ea3-6ad60d61e47c\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.989509 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a453142a-b867-453b-9ea3-6ad60d61e47c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a453142a-b867-453b-9ea3-6ad60d61e47c\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.989614 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\") pod 
\"glance-default-internal-api-0\" (UID: \"a453142a-b867-453b-9ea3-6ad60d61e47c\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.989671 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a453142a-b867-453b-9ea3-6ad60d61e47c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a453142a-b867-453b-9ea3-6ad60d61e47c\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.989708 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a453142a-b867-453b-9ea3-6ad60d61e47c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a453142a-b867-453b-9ea3-6ad60d61e47c\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.989731 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a453142a-b867-453b-9ea3-6ad60d61e47c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a453142a-b867-453b-9ea3-6ad60d61e47c\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.989771 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkfgc\" (UniqueName: \"kubernetes.io/projected/a453142a-b867-453b-9ea3-6ad60d61e47c-kube-api-access-xkfgc\") pod \"glance-default-internal-api-0\" (UID: \"a453142a-b867-453b-9ea3-6ad60d61e47c\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.989790 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a453142a-b867-453b-9ea3-6ad60d61e47c-logs\") pod \"glance-default-internal-api-0\" (UID: \"a453142a-b867-453b-9ea3-6ad60d61e47c\") " 
pod="openstack/glance-default-internal-api-0" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.990338 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a453142a-b867-453b-9ea3-6ad60d61e47c-logs\") pod \"glance-default-internal-api-0\" (UID: \"a453142a-b867-453b-9ea3-6ad60d61e47c\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.990755 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a453142a-b867-453b-9ea3-6ad60d61e47c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a453142a-b867-453b-9ea3-6ad60d61e47c\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.995819 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a453142a-b867-453b-9ea3-6ad60d61e47c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a453142a-b867-453b-9ea3-6ad60d61e47c\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.996489 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a453142a-b867-453b-9ea3-6ad60d61e47c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a453142a-b867-453b-9ea3-6ad60d61e47c\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.997429 4910 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.997455 4910 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\") pod \"glance-default-internal-api-0\" (UID: \"a453142a-b867-453b-9ea3-6ad60d61e47c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a1fe3b589a0972626c208f995164ae337879055052c9895e085608499baca4b3/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 26 22:18:47 crc kubenswrapper[4910]: I0226 22:18:47.997711 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a453142a-b867-453b-9ea3-6ad60d61e47c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a453142a-b867-453b-9ea3-6ad60d61e47c\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.009237 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a453142a-b867-453b-9ea3-6ad60d61e47c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a453142a-b867-453b-9ea3-6ad60d61e47c\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.010181 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkfgc\" (UniqueName: \"kubernetes.io/projected/a453142a-b867-453b-9ea3-6ad60d61e47c-kube-api-access-xkfgc\") pod \"glance-default-internal-api-0\" (UID: \"a453142a-b867-453b-9ea3-6ad60d61e47c\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.073459 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ef62248-3f7a-4c99-851b-abb253e36db0\") pod \"glance-default-internal-api-0\" (UID: \"a453142a-b867-453b-9ea3-6ad60d61e47c\") " pod="openstack/glance-default-internal-api-0" Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.093075 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.202343 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7b97844b46-5cn8n" Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.297775 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-config-data\") pod \"79453bea-3afe-4822-a09c-734dba08b9ef\" (UID: \"79453bea-3afe-4822-a09c-734dba08b9ef\") " Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.298786 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-combined-ca-bundle\") pod \"79453bea-3afe-4822-a09c-734dba08b9ef\" (UID: \"79453bea-3afe-4822-a09c-734dba08b9ef\") " Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.298944 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-public-tls-certs\") pod \"79453bea-3afe-4822-a09c-734dba08b9ef\" (UID: \"79453bea-3afe-4822-a09c-734dba08b9ef\") " Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.299015 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-internal-tls-certs\") pod \"79453bea-3afe-4822-a09c-734dba08b9ef\" (UID: 
\"79453bea-3afe-4822-a09c-734dba08b9ef\") " Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.299074 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79453bea-3afe-4822-a09c-734dba08b9ef-logs\") pod \"79453bea-3afe-4822-a09c-734dba08b9ef\" (UID: \"79453bea-3afe-4822-a09c-734dba08b9ef\") " Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.299146 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-scripts\") pod \"79453bea-3afe-4822-a09c-734dba08b9ef\" (UID: \"79453bea-3afe-4822-a09c-734dba08b9ef\") " Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.299227 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncd76\" (UniqueName: \"kubernetes.io/projected/79453bea-3afe-4822-a09c-734dba08b9ef-kube-api-access-ncd76\") pod \"79453bea-3afe-4822-a09c-734dba08b9ef\" (UID: \"79453bea-3afe-4822-a09c-734dba08b9ef\") " Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.303640 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79453bea-3afe-4822-a09c-734dba08b9ef-kube-api-access-ncd76" (OuterVolumeSpecName: "kube-api-access-ncd76") pod "79453bea-3afe-4822-a09c-734dba08b9ef" (UID: "79453bea-3afe-4822-a09c-734dba08b9ef"). InnerVolumeSpecName "kube-api-access-ncd76". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.307313 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79453bea-3afe-4822-a09c-734dba08b9ef-logs" (OuterVolumeSpecName: "logs") pod "79453bea-3afe-4822-a09c-734dba08b9ef" (UID: "79453bea-3afe-4822-a09c-734dba08b9ef"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.313271 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-scripts" (OuterVolumeSpecName: "scripts") pod "79453bea-3afe-4822-a09c-734dba08b9ef" (UID: "79453bea-3afe-4822-a09c-734dba08b9ef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.370698 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79453bea-3afe-4822-a09c-734dba08b9ef" (UID: "79453bea-3afe-4822-a09c-734dba08b9ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.373354 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-config-data" (OuterVolumeSpecName: "config-data") pod "79453bea-3afe-4822-a09c-734dba08b9ef" (UID: "79453bea-3afe-4822-a09c-734dba08b9ef"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.401557 4910 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79453bea-3afe-4822-a09c-734dba08b9ef-logs\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.401578 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.401587 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncd76\" (UniqueName: \"kubernetes.io/projected/79453bea-3afe-4822-a09c-734dba08b9ef-kube-api-access-ncd76\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.401597 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.401606 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.454231 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "79453bea-3afe-4822-a09c-734dba08b9ef" (UID: "79453bea-3afe-4822-a09c-734dba08b9ef"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.474357 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "79453bea-3afe-4822-a09c-734dba08b9ef" (UID: "79453bea-3afe-4822-a09c-734dba08b9ef"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.503397 4910 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.503614 4910 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79453bea-3afe-4822-a09c-734dba08b9ef-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.634188 4910 generic.go:334] "Generic (PLEG): container finished" podID="79453bea-3afe-4822-a09c-734dba08b9ef" containerID="22dfb926a01ac6270a40f09d134b64e4d8e94629d9a0c16d2658a85036aaf89d" exitCode=0 Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.634277 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7b97844b46-5cn8n" Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.634296 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7b97844b46-5cn8n" event={"ID":"79453bea-3afe-4822-a09c-734dba08b9ef","Type":"ContainerDied","Data":"22dfb926a01ac6270a40f09d134b64e4d8e94629d9a0c16d2658a85036aaf89d"} Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.634830 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7b97844b46-5cn8n" event={"ID":"79453bea-3afe-4822-a09c-734dba08b9ef","Type":"ContainerDied","Data":"9f2308679bb29e733d579f3d888ef08a0006567588b0d4cb343f3aecc7472a63"} Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.634858 4910 scope.go:117] "RemoveContainer" containerID="22dfb926a01ac6270a40f09d134b64e4d8e94629d9a0c16d2658a85036aaf89d" Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.688469 4910 scope.go:117] "RemoveContainer" containerID="b3d6d78e3eb4ce67cfa919ce10f4e9c17e4d9fdbf545fc2f3fff7e360f347e2f" Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.713212 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7b97844b46-5cn8n"] Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.733538 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7b97844b46-5cn8n"] Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.777530 4910 scope.go:117] "RemoveContainer" containerID="22dfb926a01ac6270a40f09d134b64e4d8e94629d9a0c16d2658a85036aaf89d" Feb 26 22:18:48 crc kubenswrapper[4910]: E0226 22:18:48.779037 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22dfb926a01ac6270a40f09d134b64e4d8e94629d9a0c16d2658a85036aaf89d\": container with ID starting with 22dfb926a01ac6270a40f09d134b64e4d8e94629d9a0c16d2658a85036aaf89d not found: ID does not exist" 
containerID="22dfb926a01ac6270a40f09d134b64e4d8e94629d9a0c16d2658a85036aaf89d" Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.779076 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22dfb926a01ac6270a40f09d134b64e4d8e94629d9a0c16d2658a85036aaf89d"} err="failed to get container status \"22dfb926a01ac6270a40f09d134b64e4d8e94629d9a0c16d2658a85036aaf89d\": rpc error: code = NotFound desc = could not find container \"22dfb926a01ac6270a40f09d134b64e4d8e94629d9a0c16d2658a85036aaf89d\": container with ID starting with 22dfb926a01ac6270a40f09d134b64e4d8e94629d9a0c16d2658a85036aaf89d not found: ID does not exist" Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.779104 4910 scope.go:117] "RemoveContainer" containerID="b3d6d78e3eb4ce67cfa919ce10f4e9c17e4d9fdbf545fc2f3fff7e360f347e2f" Feb 26 22:18:48 crc kubenswrapper[4910]: E0226 22:18:48.779434 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3d6d78e3eb4ce67cfa919ce10f4e9c17e4d9fdbf545fc2f3fff7e360f347e2f\": container with ID starting with b3d6d78e3eb4ce67cfa919ce10f4e9c17e4d9fdbf545fc2f3fff7e360f347e2f not found: ID does not exist" containerID="b3d6d78e3eb4ce67cfa919ce10f4e9c17e4d9fdbf545fc2f3fff7e360f347e2f" Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.779461 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3d6d78e3eb4ce67cfa919ce10f4e9c17e4d9fdbf545fc2f3fff7e360f347e2f"} err="failed to get container status \"b3d6d78e3eb4ce67cfa919ce10f4e9c17e4d9fdbf545fc2f3fff7e360f347e2f\": rpc error: code = NotFound desc = could not find container \"b3d6d78e3eb4ce67cfa919ce10f4e9c17e4d9fdbf545fc2f3fff7e360f347e2f\": container with ID starting with b3d6d78e3eb4ce67cfa919ce10f4e9c17e4d9fdbf545fc2f3fff7e360f347e2f not found: ID does not exist" Feb 26 22:18:48 crc kubenswrapper[4910]: I0226 22:18:48.785770 4910 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 22:18:48 crc kubenswrapper[4910]: W0226 22:18:48.789082 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda453142a_b867_453b_9ea3_6ad60d61e47c.slice/crio-1e347e2f10a97d84418f00b1ecd4d651776f46eed6a3ed14687bc6964bf14de0 WatchSource:0}: Error finding container 1e347e2f10a97d84418f00b1ecd4d651776f46eed6a3ed14687bc6964bf14de0: Status 404 returned error can't find the container with id 1e347e2f10a97d84418f00b1ecd4d651776f46eed6a3ed14687bc6964bf14de0 Feb 26 22:18:49 crc kubenswrapper[4910]: I0226 22:18:49.487772 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-api-0" Feb 26 22:18:49 crc kubenswrapper[4910]: I0226 22:18:49.679619 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a453142a-b867-453b-9ea3-6ad60d61e47c","Type":"ContainerStarted","Data":"b5e1435989d976acf8c8c0b018917302c306a7cb6ba82ad872ccb26a41259a9b"} Feb 26 22:18:49 crc kubenswrapper[4910]: I0226 22:18:49.679673 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a453142a-b867-453b-9ea3-6ad60d61e47c","Type":"ContainerStarted","Data":"1e347e2f10a97d84418f00b1ecd4d651776f46eed6a3ed14687bc6964bf14de0"} Feb 26 22:18:49 crc kubenswrapper[4910]: I0226 22:18:49.915873 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79453bea-3afe-4822-a09c-734dba08b9ef" path="/var/lib/kubelet/pods/79453bea-3afe-4822-a09c-734dba08b9ef/volumes" Feb 26 22:18:50 crc kubenswrapper[4910]: I0226 22:18:50.691523 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a453142a-b867-453b-9ea3-6ad60d61e47c","Type":"ContainerStarted","Data":"5094d3c3eccddc3e1299d823aa65d24ff6ce08647bc7ab4ec13a3668065d21eb"} Feb 26 
22:18:50 crc kubenswrapper[4910]: I0226 22:18:50.722615 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.722599373 podStartE2EDuration="3.722599373s" podCreationTimestamp="2026-02-26 22:18:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:18:50.719283032 +0000 UTC m=+1415.798773583" watchObservedRunningTime="2026-02-26 22:18:50.722599373 +0000 UTC m=+1415.802089914" Feb 26 22:18:51 crc kubenswrapper[4910]: I0226 22:18:51.017936 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 22:18:51 crc kubenswrapper[4910]: I0226 22:18:51.018259 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="98d72f18-06a6-49d0-a63b-343d3fea1bb2" containerName="glance-log" containerID="cri-o://a9faa3beeb89e54f6198f575b430ba30d8029de2e9e2367d3dfc1f745b400cfc" gracePeriod=30 Feb 26 22:18:51 crc kubenswrapper[4910]: I0226 22:18:51.018427 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="98d72f18-06a6-49d0-a63b-343d3fea1bb2" containerName="glance-httpd" containerID="cri-o://f6d077132a0e46fb64a9ec3b4e3d969c97022d5ab74f6cc64d48f0817755c4b1" gracePeriod=30 Feb 26 22:18:51 crc kubenswrapper[4910]: I0226 22:18:51.720446 4910 generic.go:334] "Generic (PLEG): container finished" podID="98d72f18-06a6-49d0-a63b-343d3fea1bb2" containerID="a9faa3beeb89e54f6198f575b430ba30d8029de2e9e2367d3dfc1f745b400cfc" exitCode=143 Feb 26 22:18:51 crc kubenswrapper[4910]: I0226 22:18:51.720634 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"98d72f18-06a6-49d0-a63b-343d3fea1bb2","Type":"ContainerDied","Data":"a9faa3beeb89e54f6198f575b430ba30d8029de2e9e2367d3dfc1f745b400cfc"} Feb 26 22:18:54 crc kubenswrapper[4910]: I0226 22:18:54.753943 4910 generic.go:334] "Generic (PLEG): container finished" podID="98d72f18-06a6-49d0-a63b-343d3fea1bb2" containerID="f6d077132a0e46fb64a9ec3b4e3d969c97022d5ab74f6cc64d48f0817755c4b1" exitCode=0 Feb 26 22:18:54 crc kubenswrapper[4910]: I0226 22:18:54.753989 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"98d72f18-06a6-49d0-a63b-343d3fea1bb2","Type":"ContainerDied","Data":"f6d077132a0e46fb64a9ec3b4e3d969c97022d5ab74f6cc64d48f0817755c4b1"} Feb 26 22:18:54 crc kubenswrapper[4910]: I0226 22:18:54.754013 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"98d72f18-06a6-49d0-a63b-343d3fea1bb2","Type":"ContainerDied","Data":"7635cff7ba2b951aa46a2902ecad3133e11deed4d6aecd0aa18fe72aee9d9a0a"} Feb 26 22:18:54 crc kubenswrapper[4910]: I0226 22:18:54.754023 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7635cff7ba2b951aa46a2902ecad3133e11deed4d6aecd0aa18fe72aee9d9a0a" Feb 26 22:18:54 crc kubenswrapper[4910]: I0226 22:18:54.804926 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 22:18:54 crc kubenswrapper[4910]: I0226 22:18:54.945975 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shtjk\" (UniqueName: \"kubernetes.io/projected/98d72f18-06a6-49d0-a63b-343d3fea1bb2-kube-api-access-shtjk\") pod \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " Feb 26 22:18:54 crc kubenswrapper[4910]: I0226 22:18:54.946263 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98d72f18-06a6-49d0-a63b-343d3fea1bb2-combined-ca-bundle\") pod \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " Feb 26 22:18:54 crc kubenswrapper[4910]: I0226 22:18:54.946476 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e\") pod \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " Feb 26 22:18:54 crc kubenswrapper[4910]: I0226 22:18:54.946649 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98d72f18-06a6-49d0-a63b-343d3fea1bb2-scripts\") pod \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " Feb 26 22:18:54 crc kubenswrapper[4910]: I0226 22:18:54.946715 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98d72f18-06a6-49d0-a63b-343d3fea1bb2-config-data\") pod \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " Feb 26 22:18:54 crc kubenswrapper[4910]: I0226 22:18:54.946756 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98d72f18-06a6-49d0-a63b-343d3fea1bb2-public-tls-certs\") pod \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " Feb 26 22:18:54 crc kubenswrapper[4910]: I0226 22:18:54.946862 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98d72f18-06a6-49d0-a63b-343d3fea1bb2-logs\") pod \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " Feb 26 22:18:54 crc kubenswrapper[4910]: I0226 22:18:54.946905 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/98d72f18-06a6-49d0-a63b-343d3fea1bb2-httpd-run\") pod \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\" (UID: \"98d72f18-06a6-49d0-a63b-343d3fea1bb2\") " Feb 26 22:18:54 crc kubenswrapper[4910]: I0226 22:18:54.947880 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98d72f18-06a6-49d0-a63b-343d3fea1bb2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "98d72f18-06a6-49d0-a63b-343d3fea1bb2" (UID: "98d72f18-06a6-49d0-a63b-343d3fea1bb2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:18:54 crc kubenswrapper[4910]: I0226 22:18:54.948217 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98d72f18-06a6-49d0-a63b-343d3fea1bb2-logs" (OuterVolumeSpecName: "logs") pod "98d72f18-06a6-49d0-a63b-343d3fea1bb2" (UID: "98d72f18-06a6-49d0-a63b-343d3fea1bb2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:18:54 crc kubenswrapper[4910]: I0226 22:18:54.953623 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98d72f18-06a6-49d0-a63b-343d3fea1bb2-kube-api-access-shtjk" (OuterVolumeSpecName: "kube-api-access-shtjk") pod "98d72f18-06a6-49d0-a63b-343d3fea1bb2" (UID: "98d72f18-06a6-49d0-a63b-343d3fea1bb2"). InnerVolumeSpecName "kube-api-access-shtjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:18:54 crc kubenswrapper[4910]: I0226 22:18:54.960305 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98d72f18-06a6-49d0-a63b-343d3fea1bb2-scripts" (OuterVolumeSpecName: "scripts") pod "98d72f18-06a6-49d0-a63b-343d3fea1bb2" (UID: "98d72f18-06a6-49d0-a63b-343d3fea1bb2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:54 crc kubenswrapper[4910]: I0226 22:18:54.988755 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e" (OuterVolumeSpecName: "glance") pod "98d72f18-06a6-49d0-a63b-343d3fea1bb2" (UID: "98d72f18-06a6-49d0-a63b-343d3fea1bb2"). InnerVolumeSpecName "pvc-07357e7e-76cc-49df-b0f0-87819efba45e". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 22:18:54 crc kubenswrapper[4910]: I0226 22:18:54.997379 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98d72f18-06a6-49d0-a63b-343d3fea1bb2-config-data" (OuterVolumeSpecName: "config-data") pod "98d72f18-06a6-49d0-a63b-343d3fea1bb2" (UID: "98d72f18-06a6-49d0-a63b-343d3fea1bb2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.036297 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98d72f18-06a6-49d0-a63b-343d3fea1bb2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "98d72f18-06a6-49d0-a63b-343d3fea1bb2" (UID: "98d72f18-06a6-49d0-a63b-343d3fea1bb2"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.036326 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98d72f18-06a6-49d0-a63b-343d3fea1bb2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98d72f18-06a6-49d0-a63b-343d3fea1bb2" (UID: "98d72f18-06a6-49d0-a63b-343d3fea1bb2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.048865 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98d72f18-06a6-49d0-a63b-343d3fea1bb2-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.048896 4910 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98d72f18-06a6-49d0-a63b-343d3fea1bb2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.048905 4910 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98d72f18-06a6-49d0-a63b-343d3fea1bb2-logs\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.048914 4910 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/98d72f18-06a6-49d0-a63b-343d3fea1bb2-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:55 crc 
kubenswrapper[4910]: I0226 22:18:55.048923 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shtjk\" (UniqueName: \"kubernetes.io/projected/98d72f18-06a6-49d0-a63b-343d3fea1bb2-kube-api-access-shtjk\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.048932 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98d72f18-06a6-49d0-a63b-343d3fea1bb2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.048952 4910 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-07357e7e-76cc-49df-b0f0-87819efba45e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e\") on node \"crc\" " Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.048961 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98d72f18-06a6-49d0-a63b-343d3fea1bb2-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.073140 4910 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.073450 4910 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-07357e7e-76cc-49df-b0f0-87819efba45e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e") on node "crc" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.150537 4910 reconciler_common.go:293] "Volume detached for volume \"pvc-07357e7e-76cc-49df-b0f0-87819efba45e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e\") on node \"crc\" DevicePath \"\"" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.762596 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.797432 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.810641 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.829346 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 22:18:55 crc kubenswrapper[4910]: E0226 22:18:55.829868 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98d72f18-06a6-49d0-a63b-343d3fea1bb2" containerName="glance-log" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.829889 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="98d72f18-06a6-49d0-a63b-343d3fea1bb2" containerName="glance-log" Feb 26 22:18:55 crc kubenswrapper[4910]: E0226 22:18:55.829917 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79453bea-3afe-4822-a09c-734dba08b9ef" containerName="placement-log" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.829927 4910 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="79453bea-3afe-4822-a09c-734dba08b9ef" containerName="placement-log" Feb 26 22:18:55 crc kubenswrapper[4910]: E0226 22:18:55.829965 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98d72f18-06a6-49d0-a63b-343d3fea1bb2" containerName="glance-httpd" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.829973 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="98d72f18-06a6-49d0-a63b-343d3fea1bb2" containerName="glance-httpd" Feb 26 22:18:55 crc kubenswrapper[4910]: E0226 22:18:55.829992 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79453bea-3afe-4822-a09c-734dba08b9ef" containerName="placement-api" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.829999 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="79453bea-3afe-4822-a09c-734dba08b9ef" containerName="placement-api" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.830284 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="79453bea-3afe-4822-a09c-734dba08b9ef" containerName="placement-api" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.830307 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="98d72f18-06a6-49d0-a63b-343d3fea1bb2" containerName="glance-httpd" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.830325 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="79453bea-3afe-4822-a09c-734dba08b9ef" containerName="placement-log" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.830337 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="98d72f18-06a6-49d0-a63b-343d3fea1bb2" containerName="glance-log" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.831654 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.834850 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.835858 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.841988 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.915062 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98d72f18-06a6-49d0-a63b-343d3fea1bb2" path="/var/lib/kubelet/pods/98d72f18-06a6-49d0-a63b-343d3fea1bb2/volumes" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.964779 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76hvc\" (UniqueName: \"kubernetes.io/projected/cd55a983-41e8-4575-a9a4-8a57e93b8816-kube-api-access-76hvc\") pod \"glance-default-external-api-0\" (UID: \"cd55a983-41e8-4575-a9a4-8a57e93b8816\") " pod="openstack/glance-default-external-api-0" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.964836 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd55a983-41e8-4575-a9a4-8a57e93b8816-logs\") pod \"glance-default-external-api-0\" (UID: \"cd55a983-41e8-4575-a9a4-8a57e93b8816\") " pod="openstack/glance-default-external-api-0" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.964860 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd55a983-41e8-4575-a9a4-8a57e93b8816-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"cd55a983-41e8-4575-a9a4-8a57e93b8816\") " pod="openstack/glance-default-external-api-0" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.965015 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd55a983-41e8-4575-a9a4-8a57e93b8816-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cd55a983-41e8-4575-a9a4-8a57e93b8816\") " pod="openstack/glance-default-external-api-0" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.965230 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd55a983-41e8-4575-a9a4-8a57e93b8816-scripts\") pod \"glance-default-external-api-0\" (UID: \"cd55a983-41e8-4575-a9a4-8a57e93b8816\") " pod="openstack/glance-default-external-api-0" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.965314 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd55a983-41e8-4575-a9a4-8a57e93b8816-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cd55a983-41e8-4575-a9a4-8a57e93b8816\") " pod="openstack/glance-default-external-api-0" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.965407 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-07357e7e-76cc-49df-b0f0-87819efba45e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e\") pod \"glance-default-external-api-0\" (UID: \"cd55a983-41e8-4575-a9a4-8a57e93b8816\") " pod="openstack/glance-default-external-api-0" Feb 26 22:18:55 crc kubenswrapper[4910]: I0226 22:18:55.965668 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/cd55a983-41e8-4575-a9a4-8a57e93b8816-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cd55a983-41e8-4575-a9a4-8a57e93b8816\") " pod="openstack/glance-default-external-api-0" Feb 26 22:18:56 crc kubenswrapper[4910]: I0226 22:18:56.067222 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd55a983-41e8-4575-a9a4-8a57e93b8816-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cd55a983-41e8-4575-a9a4-8a57e93b8816\") " pod="openstack/glance-default-external-api-0" Feb 26 22:18:56 crc kubenswrapper[4910]: I0226 22:18:56.067354 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd55a983-41e8-4575-a9a4-8a57e93b8816-scripts\") pod \"glance-default-external-api-0\" (UID: \"cd55a983-41e8-4575-a9a4-8a57e93b8816\") " pod="openstack/glance-default-external-api-0" Feb 26 22:18:56 crc kubenswrapper[4910]: I0226 22:18:56.067393 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd55a983-41e8-4575-a9a4-8a57e93b8816-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cd55a983-41e8-4575-a9a4-8a57e93b8816\") " pod="openstack/glance-default-external-api-0" Feb 26 22:18:56 crc kubenswrapper[4910]: I0226 22:18:56.067439 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-07357e7e-76cc-49df-b0f0-87819efba45e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e\") pod \"glance-default-external-api-0\" (UID: \"cd55a983-41e8-4575-a9a4-8a57e93b8816\") " pod="openstack/glance-default-external-api-0" Feb 26 22:18:56 crc kubenswrapper[4910]: I0226 22:18:56.067537 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/cd55a983-41e8-4575-a9a4-8a57e93b8816-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cd55a983-41e8-4575-a9a4-8a57e93b8816\") " pod="openstack/glance-default-external-api-0" Feb 26 22:18:56 crc kubenswrapper[4910]: I0226 22:18:56.067614 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76hvc\" (UniqueName: \"kubernetes.io/projected/cd55a983-41e8-4575-a9a4-8a57e93b8816-kube-api-access-76hvc\") pod \"glance-default-external-api-0\" (UID: \"cd55a983-41e8-4575-a9a4-8a57e93b8816\") " pod="openstack/glance-default-external-api-0" Feb 26 22:18:56 crc kubenswrapper[4910]: I0226 22:18:56.067639 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd55a983-41e8-4575-a9a4-8a57e93b8816-config-data\") pod \"glance-default-external-api-0\" (UID: \"cd55a983-41e8-4575-a9a4-8a57e93b8816\") " pod="openstack/glance-default-external-api-0" Feb 26 22:18:56 crc kubenswrapper[4910]: I0226 22:18:56.067672 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd55a983-41e8-4575-a9a4-8a57e93b8816-logs\") pod \"glance-default-external-api-0\" (UID: \"cd55a983-41e8-4575-a9a4-8a57e93b8816\") " pod="openstack/glance-default-external-api-0" Feb 26 22:18:56 crc kubenswrapper[4910]: I0226 22:18:56.068152 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd55a983-41e8-4575-a9a4-8a57e93b8816-logs\") pod \"glance-default-external-api-0\" (UID: \"cd55a983-41e8-4575-a9a4-8a57e93b8816\") " pod="openstack/glance-default-external-api-0" Feb 26 22:18:56 crc kubenswrapper[4910]: I0226 22:18:56.068516 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cd55a983-41e8-4575-a9a4-8a57e93b8816-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"cd55a983-41e8-4575-a9a4-8a57e93b8816\") " pod="openstack/glance-default-external-api-0" Feb 26 22:18:56 crc kubenswrapper[4910]: I0226 22:18:56.070791 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd55a983-41e8-4575-a9a4-8a57e93b8816-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cd55a983-41e8-4575-a9a4-8a57e93b8816\") " pod="openstack/glance-default-external-api-0" Feb 26 22:18:56 crc kubenswrapper[4910]: I0226 22:18:56.072618 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd55a983-41e8-4575-a9a4-8a57e93b8816-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cd55a983-41e8-4575-a9a4-8a57e93b8816\") " pod="openstack/glance-default-external-api-0" Feb 26 22:18:56 crc kubenswrapper[4910]: I0226 22:18:56.076469 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd55a983-41e8-4575-a9a4-8a57e93b8816-config-data\") pod \"glance-default-external-api-0\" (UID: \"cd55a983-41e8-4575-a9a4-8a57e93b8816\") " pod="openstack/glance-default-external-api-0" Feb 26 22:18:56 crc kubenswrapper[4910]: I0226 22:18:56.076990 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd55a983-41e8-4575-a9a4-8a57e93b8816-scripts\") pod \"glance-default-external-api-0\" (UID: \"cd55a983-41e8-4575-a9a4-8a57e93b8816\") " pod="openstack/glance-default-external-api-0" Feb 26 22:18:56 crc kubenswrapper[4910]: I0226 22:18:56.091272 4910 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 22:18:56 crc kubenswrapper[4910]: I0226 22:18:56.091312 4910 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-07357e7e-76cc-49df-b0f0-87819efba45e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e\") pod \"glance-default-external-api-0\" (UID: \"cd55a983-41e8-4575-a9a4-8a57e93b8816\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/94049027790e22a18bf8e430a446734369cbf41eb4d29ad3f70f496aca7abf57/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 26 22:18:56 crc kubenswrapper[4910]: I0226 22:18:56.094029 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76hvc\" (UniqueName: \"kubernetes.io/projected/cd55a983-41e8-4575-a9a4-8a57e93b8816-kube-api-access-76hvc\") pod \"glance-default-external-api-0\" (UID: \"cd55a983-41e8-4575-a9a4-8a57e93b8816\") " pod="openstack/glance-default-external-api-0" Feb 26 22:18:56 crc kubenswrapper[4910]: I0226 22:18:56.147598 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-07357e7e-76cc-49df-b0f0-87819efba45e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-07357e7e-76cc-49df-b0f0-87819efba45e\") pod \"glance-default-external-api-0\" (UID: \"cd55a983-41e8-4575-a9a4-8a57e93b8816\") " pod="openstack/glance-default-external-api-0" Feb 26 22:18:56 crc kubenswrapper[4910]: I0226 22:18:56.151290 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 22:18:56 crc kubenswrapper[4910]: I0226 22:18:56.857489 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 22:18:57 crc kubenswrapper[4910]: E0226 22:18:57.766652 4910 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdfbe459_2ae5_4d85_9d94_7aeb0c845ead.slice\": RecentStats: unable to find data in memory cache]" Feb 26 22:18:57 crc kubenswrapper[4910]: I0226 22:18:57.793859 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cd55a983-41e8-4575-a9a4-8a57e93b8816","Type":"ContainerStarted","Data":"6da9849639d1071f5a12ab968fdec06c2a4b059aa2782c830bbfe1707fa7bc30"} Feb 26 22:18:57 crc kubenswrapper[4910]: I0226 22:18:57.793916 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cd55a983-41e8-4575-a9a4-8a57e93b8816","Type":"ContainerStarted","Data":"a94f06a7f9ddd43f3e6b9a9260188be0241f708de965a0ac068a6c98bfef4d8d"} Feb 26 22:18:58 crc kubenswrapper[4910]: I0226 22:18:58.093648 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 26 22:18:58 crc kubenswrapper[4910]: I0226 22:18:58.093955 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 26 22:18:58 crc kubenswrapper[4910]: I0226 22:18:58.132340 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 26 22:18:58 crc kubenswrapper[4910]: I0226 22:18:58.132761 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 26 22:18:58 crc kubenswrapper[4910]: I0226 
22:18:58.812240 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cd55a983-41e8-4575-a9a4-8a57e93b8816","Type":"ContainerStarted","Data":"a6d02470d4cb6c6545de6af7c0d903ad845ad7f5cf781cf1da278beefc493ebd"} Feb 26 22:18:58 crc kubenswrapper[4910]: I0226 22:18:58.812925 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 26 22:18:58 crc kubenswrapper[4910]: I0226 22:18:58.812960 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 26 22:18:59 crc kubenswrapper[4910]: I0226 22:18:59.837733 4910 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="3620c738-9c7b-4b1d-9dd6-b369a50ca2d6" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 26 22:19:00 crc kubenswrapper[4910]: I0226 22:19:00.675944 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 26 22:19:00 crc kubenswrapper[4910]: I0226 22:19:00.692131 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 26 22:19:00 crc kubenswrapper[4910]: I0226 22:19:00.702172 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.702142952 podStartE2EDuration="5.702142952s" podCreationTimestamp="2026-02-26 22:18:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:18:58.846977957 +0000 UTC m=+1423.926468548" watchObservedRunningTime="2026-02-26 22:19:00.702142952 +0000 UTC m=+1425.781633493" Feb 26 22:19:00 crc kubenswrapper[4910]: I0226 22:19:00.839662 4910 generic.go:334] "Generic (PLEG): container finished" 
podID="ffed18bb-3818-4692-927b-daa85a2ea2e9" containerID="17c62054dea1ce31fdacd4c749d7c20f9a45358321e2cf4cdaaa3d5b34ee0476" exitCode=0 Feb 26 22:19:00 crc kubenswrapper[4910]: I0226 22:19:00.839773 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-x9cfd" event={"ID":"ffed18bb-3818-4692-927b-daa85a2ea2e9","Type":"ContainerDied","Data":"17c62054dea1ce31fdacd4c749d7c20f9a45358321e2cf4cdaaa3d5b34ee0476"} Feb 26 22:19:02 crc kubenswrapper[4910]: I0226 22:19:02.373666 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-x9cfd" Feb 26 22:19:02 crc kubenswrapper[4910]: I0226 22:19:02.501381 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffed18bb-3818-4692-927b-daa85a2ea2e9-combined-ca-bundle\") pod \"ffed18bb-3818-4692-927b-daa85a2ea2e9\" (UID: \"ffed18bb-3818-4692-927b-daa85a2ea2e9\") " Feb 26 22:19:02 crc kubenswrapper[4910]: I0226 22:19:02.501452 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txbcm\" (UniqueName: \"kubernetes.io/projected/ffed18bb-3818-4692-927b-daa85a2ea2e9-kube-api-access-txbcm\") pod \"ffed18bb-3818-4692-927b-daa85a2ea2e9\" (UID: \"ffed18bb-3818-4692-927b-daa85a2ea2e9\") " Feb 26 22:19:02 crc kubenswrapper[4910]: I0226 22:19:02.501510 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffed18bb-3818-4692-927b-daa85a2ea2e9-config-data\") pod \"ffed18bb-3818-4692-927b-daa85a2ea2e9\" (UID: \"ffed18bb-3818-4692-927b-daa85a2ea2e9\") " Feb 26 22:19:02 crc kubenswrapper[4910]: I0226 22:19:02.501590 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffed18bb-3818-4692-927b-daa85a2ea2e9-scripts\") pod 
\"ffed18bb-3818-4692-927b-daa85a2ea2e9\" (UID: \"ffed18bb-3818-4692-927b-daa85a2ea2e9\") " Feb 26 22:19:02 crc kubenswrapper[4910]: I0226 22:19:02.506908 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffed18bb-3818-4692-927b-daa85a2ea2e9-scripts" (OuterVolumeSpecName: "scripts") pod "ffed18bb-3818-4692-927b-daa85a2ea2e9" (UID: "ffed18bb-3818-4692-927b-daa85a2ea2e9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:19:02 crc kubenswrapper[4910]: I0226 22:19:02.511395 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffed18bb-3818-4692-927b-daa85a2ea2e9-kube-api-access-txbcm" (OuterVolumeSpecName: "kube-api-access-txbcm") pod "ffed18bb-3818-4692-927b-daa85a2ea2e9" (UID: "ffed18bb-3818-4692-927b-daa85a2ea2e9"). InnerVolumeSpecName "kube-api-access-txbcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:19:02 crc kubenswrapper[4910]: I0226 22:19:02.546084 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffed18bb-3818-4692-927b-daa85a2ea2e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffed18bb-3818-4692-927b-daa85a2ea2e9" (UID: "ffed18bb-3818-4692-927b-daa85a2ea2e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:19:02 crc kubenswrapper[4910]: I0226 22:19:02.550614 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffed18bb-3818-4692-927b-daa85a2ea2e9-config-data" (OuterVolumeSpecName: "config-data") pod "ffed18bb-3818-4692-927b-daa85a2ea2e9" (UID: "ffed18bb-3818-4692-927b-daa85a2ea2e9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:19:02 crc kubenswrapper[4910]: I0226 22:19:02.605320 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffed18bb-3818-4692-927b-daa85a2ea2e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:02 crc kubenswrapper[4910]: I0226 22:19:02.605436 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txbcm\" (UniqueName: \"kubernetes.io/projected/ffed18bb-3818-4692-927b-daa85a2ea2e9-kube-api-access-txbcm\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:02 crc kubenswrapper[4910]: I0226 22:19:02.605450 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffed18bb-3818-4692-927b-daa85a2ea2e9-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:02 crc kubenswrapper[4910]: I0226 22:19:02.605461 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffed18bb-3818-4692-927b-daa85a2ea2e9-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:02 crc kubenswrapper[4910]: I0226 22:19:02.864586 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-x9cfd" event={"ID":"ffed18bb-3818-4692-927b-daa85a2ea2e9","Type":"ContainerDied","Data":"28225e92bd9da8817d57006c36a731d8b331a7ab86977bc7ab60fb7feb7b8e15"} Feb 26 22:19:02 crc kubenswrapper[4910]: I0226 22:19:02.864628 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28225e92bd9da8817d57006c36a731d8b331a7ab86977bc7ab60fb7feb7b8e15" Feb 26 22:19:02 crc kubenswrapper[4910]: I0226 22:19:02.864639 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-x9cfd" Feb 26 22:19:03 crc kubenswrapper[4910]: I0226 22:19:03.050789 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 26 22:19:03 crc kubenswrapper[4910]: E0226 22:19:03.051243 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffed18bb-3818-4692-927b-daa85a2ea2e9" containerName="nova-cell0-conductor-db-sync" Feb 26 22:19:03 crc kubenswrapper[4910]: I0226 22:19:03.051259 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffed18bb-3818-4692-927b-daa85a2ea2e9" containerName="nova-cell0-conductor-db-sync" Feb 26 22:19:03 crc kubenswrapper[4910]: I0226 22:19:03.051442 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffed18bb-3818-4692-927b-daa85a2ea2e9" containerName="nova-cell0-conductor-db-sync" Feb 26 22:19:03 crc kubenswrapper[4910]: I0226 22:19:03.052122 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 26 22:19:03 crc kubenswrapper[4910]: I0226 22:19:03.054890 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-gw5l8" Feb 26 22:19:03 crc kubenswrapper[4910]: I0226 22:19:03.056935 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 26 22:19:03 crc kubenswrapper[4910]: I0226 22:19:03.100582 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 26 22:19:03 crc kubenswrapper[4910]: I0226 22:19:03.217348 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef\") " pod="openstack/nova-cell0-conductor-0" Feb 26 22:19:03 crc kubenswrapper[4910]: 
I0226 22:19:03.217583 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npgp4\" (UniqueName: \"kubernetes.io/projected/4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef-kube-api-access-npgp4\") pod \"nova-cell0-conductor-0\" (UID: \"4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef\") " pod="openstack/nova-cell0-conductor-0" Feb 26 22:19:03 crc kubenswrapper[4910]: I0226 22:19:03.217711 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef\") " pod="openstack/nova-cell0-conductor-0" Feb 26 22:19:03 crc kubenswrapper[4910]: I0226 22:19:03.320072 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef\") " pod="openstack/nova-cell0-conductor-0" Feb 26 22:19:03 crc kubenswrapper[4910]: I0226 22:19:03.320198 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npgp4\" (UniqueName: \"kubernetes.io/projected/4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef-kube-api-access-npgp4\") pod \"nova-cell0-conductor-0\" (UID: \"4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef\") " pod="openstack/nova-cell0-conductor-0" Feb 26 22:19:03 crc kubenswrapper[4910]: I0226 22:19:03.320245 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef\") " pod="openstack/nova-cell0-conductor-0" Feb 26 22:19:03 crc kubenswrapper[4910]: I0226 22:19:03.326389 4910 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef\") " pod="openstack/nova-cell0-conductor-0" Feb 26 22:19:03 crc kubenswrapper[4910]: I0226 22:19:03.326542 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef\") " pod="openstack/nova-cell0-conductor-0" Feb 26 22:19:03 crc kubenswrapper[4910]: I0226 22:19:03.336811 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npgp4\" (UniqueName: \"kubernetes.io/projected/4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef-kube-api-access-npgp4\") pod \"nova-cell0-conductor-0\" (UID: \"4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef\") " pod="openstack/nova-cell0-conductor-0" Feb 26 22:19:03 crc kubenswrapper[4910]: I0226 22:19:03.371968 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 26 22:19:03 crc kubenswrapper[4910]: I0226 22:19:03.866817 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 26 22:19:04 crc kubenswrapper[4910]: I0226 22:19:04.885874 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef","Type":"ContainerStarted","Data":"86b195322b165f23f8762fba209eda7139abcc93e056d7b41f723b5102f97a8e"} Feb 26 22:19:04 crc kubenswrapper[4910]: I0226 22:19:04.886218 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef","Type":"ContainerStarted","Data":"57bf6fc9bdd25da542d5715585bef74fbdf5b2d7e3757bbc17f572c6ee93d7b4"} Feb 26 22:19:04 crc kubenswrapper[4910]: I0226 22:19:04.888525 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 26 22:19:04 crc kubenswrapper[4910]: I0226 22:19:04.889943 4910 generic.go:334] "Generic (PLEG): container finished" podID="3620c738-9c7b-4b1d-9dd6-b369a50ca2d6" containerID="043d2fa57bbafaddfa27e561b3840545b6f03243cd45c4ff5289f911cfb9f4ec" exitCode=137 Feb 26 22:19:04 crc kubenswrapper[4910]: I0226 22:19:04.889982 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6","Type":"ContainerDied","Data":"043d2fa57bbafaddfa27e561b3840545b6f03243cd45c4ff5289f911cfb9f4ec"} Feb 26 22:19:04 crc kubenswrapper[4910]: I0226 22:19:04.906553 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.906536351 podStartE2EDuration="1.906536351s" podCreationTimestamp="2026-02-26 22:19:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 
22:19:04.904814464 +0000 UTC m=+1429.984305005" watchObservedRunningTime="2026-02-26 22:19:04.906536351 +0000 UTC m=+1429.986026892" Feb 26 22:19:05 crc kubenswrapper[4910]: I0226 22:19:05.003422 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 22:19:05 crc kubenswrapper[4910]: I0226 22:19:05.057369 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-log-httpd\") pod \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\" (UID: \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\") " Feb 26 22:19:05 crc kubenswrapper[4910]: I0226 22:19:05.057573 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-sg-core-conf-yaml\") pod \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\" (UID: \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\") " Feb 26 22:19:05 crc kubenswrapper[4910]: I0226 22:19:05.057651 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-run-httpd\") pod \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\" (UID: \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\") " Feb 26 22:19:05 crc kubenswrapper[4910]: I0226 22:19:05.057673 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-scripts\") pod \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\" (UID: \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\") " Feb 26 22:19:05 crc kubenswrapper[4910]: I0226 22:19:05.057718 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-config-data\") pod \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\" (UID: 
\"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\") " Feb 26 22:19:05 crc kubenswrapper[4910]: I0226 22:19:05.057747 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-combined-ca-bundle\") pod \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\" (UID: \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\") " Feb 26 22:19:05 crc kubenswrapper[4910]: I0226 22:19:05.057805 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3620c738-9c7b-4b1d-9dd6-b369a50ca2d6" (UID: "3620c738-9c7b-4b1d-9dd6-b369a50ca2d6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:19:05 crc kubenswrapper[4910]: I0226 22:19:05.057815 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss252\" (UniqueName: \"kubernetes.io/projected/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-kube-api-access-ss252\") pod \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\" (UID: \"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6\") " Feb 26 22:19:05 crc kubenswrapper[4910]: I0226 22:19:05.058264 4910 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:05 crc kubenswrapper[4910]: I0226 22:19:05.058625 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3620c738-9c7b-4b1d-9dd6-b369a50ca2d6" (UID: "3620c738-9c7b-4b1d-9dd6-b369a50ca2d6"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:19:05 crc kubenswrapper[4910]: I0226 22:19:05.077313 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-kube-api-access-ss252" (OuterVolumeSpecName: "kube-api-access-ss252") pod "3620c738-9c7b-4b1d-9dd6-b369a50ca2d6" (UID: "3620c738-9c7b-4b1d-9dd6-b369a50ca2d6"). InnerVolumeSpecName "kube-api-access-ss252". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:19:05 crc kubenswrapper[4910]: I0226 22:19:05.080315 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-scripts" (OuterVolumeSpecName: "scripts") pod "3620c738-9c7b-4b1d-9dd6-b369a50ca2d6" (UID: "3620c738-9c7b-4b1d-9dd6-b369a50ca2d6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:19:05 crc kubenswrapper[4910]: I0226 22:19:05.102345 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3620c738-9c7b-4b1d-9dd6-b369a50ca2d6" (UID: "3620c738-9c7b-4b1d-9dd6-b369a50ca2d6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:19:05 crc kubenswrapper[4910]: I0226 22:19:05.140139 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3620c738-9c7b-4b1d-9dd6-b369a50ca2d6" (UID: "3620c738-9c7b-4b1d-9dd6-b369a50ca2d6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:19:05 crc kubenswrapper[4910]: I0226 22:19:05.160118 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss252\" (UniqueName: \"kubernetes.io/projected/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-kube-api-access-ss252\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:05 crc kubenswrapper[4910]: I0226 22:19:05.160174 4910 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:05 crc kubenswrapper[4910]: I0226 22:19:05.160189 4910 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:05 crc kubenswrapper[4910]: I0226 22:19:05.160200 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:05 crc kubenswrapper[4910]: I0226 22:19:05.160210 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:05 crc kubenswrapper[4910]: I0226 22:19:05.173704 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-config-data" (OuterVolumeSpecName: "config-data") pod "3620c738-9c7b-4b1d-9dd6-b369a50ca2d6" (UID: "3620c738-9c7b-4b1d-9dd6-b369a50ca2d6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:19:05 crc kubenswrapper[4910]: I0226 22:19:05.263382 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:05 crc kubenswrapper[4910]: I0226 22:19:05.909534 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 22:19:05 crc kubenswrapper[4910]: I0226 22:19:05.921515 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3620c738-9c7b-4b1d-9dd6-b369a50ca2d6","Type":"ContainerDied","Data":"a1b56eb8267a4ecfb8033179ddae69910f0be1c44b1fa56c4ecfa50e56acbd08"} Feb 26 22:19:05 crc kubenswrapper[4910]: I0226 22:19:05.921584 4910 scope.go:117] "RemoveContainer" containerID="043d2fa57bbafaddfa27e561b3840545b6f03243cd45c4ff5289f911cfb9f4ec" Feb 26 22:19:05 crc kubenswrapper[4910]: I0226 22:19:05.964209 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:19:05 crc kubenswrapper[4910]: I0226 22:19:05.985785 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:19:05 crc kubenswrapper[4910]: I0226 22:19:05.993755 4910 scope.go:117] "RemoveContainer" containerID="28e1dcf875498bd55f118d7c729af5431f353baaa95795e3d5d8d01aef589cf7" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.002575 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:19:06 crc kubenswrapper[4910]: E0226 22:19:06.003132 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3620c738-9c7b-4b1d-9dd6-b369a50ca2d6" containerName="sg-core" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.003171 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="3620c738-9c7b-4b1d-9dd6-b369a50ca2d6" containerName="sg-core" Feb 26 22:19:06 crc 
kubenswrapper[4910]: E0226 22:19:06.003192 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3620c738-9c7b-4b1d-9dd6-b369a50ca2d6" containerName="ceilometer-notification-agent" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.003200 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="3620c738-9c7b-4b1d-9dd6-b369a50ca2d6" containerName="ceilometer-notification-agent" Feb 26 22:19:06 crc kubenswrapper[4910]: E0226 22:19:06.003227 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3620c738-9c7b-4b1d-9dd6-b369a50ca2d6" containerName="proxy-httpd" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.003236 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="3620c738-9c7b-4b1d-9dd6-b369a50ca2d6" containerName="proxy-httpd" Feb 26 22:19:06 crc kubenswrapper[4910]: E0226 22:19:06.003252 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3620c738-9c7b-4b1d-9dd6-b369a50ca2d6" containerName="ceilometer-central-agent" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.003262 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="3620c738-9c7b-4b1d-9dd6-b369a50ca2d6" containerName="ceilometer-central-agent" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.003538 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="3620c738-9c7b-4b1d-9dd6-b369a50ca2d6" containerName="ceilometer-central-agent" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.003560 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="3620c738-9c7b-4b1d-9dd6-b369a50ca2d6" containerName="ceilometer-notification-agent" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.003580 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="3620c738-9c7b-4b1d-9dd6-b369a50ca2d6" containerName="sg-core" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.003596 4910 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3620c738-9c7b-4b1d-9dd6-b369a50ca2d6" containerName="proxy-httpd" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.005875 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.008120 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.008271 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.011016 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.029058 4910 scope.go:117] "RemoveContainer" containerID="2e4900c3a4d8b99e1760178107d649dc1838ef94c73942cbc71c32aee771f5f4" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.052507 4910 scope.go:117] "RemoveContainer" containerID="c6200fc65099399c9e2980ee7060b9a92ef868870fa431539c6eeaf09716640b" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.152147 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.152456 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.180802 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-config-data\") pod \"ceilometer-0\" (UID: \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\") " pod="openstack/ceilometer-0" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.180887 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\") " pod="openstack/ceilometer-0" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.181015 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\") " pod="openstack/ceilometer-0" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.181045 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-run-httpd\") pod \"ceilometer-0\" (UID: \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\") " pod="openstack/ceilometer-0" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.181067 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-scripts\") pod \"ceilometer-0\" (UID: \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\") " pod="openstack/ceilometer-0" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.181117 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb2nx\" (UniqueName: \"kubernetes.io/projected/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-kube-api-access-lb2nx\") pod \"ceilometer-0\" (UID: \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\") " pod="openstack/ceilometer-0" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.181410 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-log-httpd\") pod \"ceilometer-0\" (UID: 
\"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\") " pod="openstack/ceilometer-0" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.186944 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.202550 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.283632 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-log-httpd\") pod \"ceilometer-0\" (UID: \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\") " pod="openstack/ceilometer-0" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.283734 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-config-data\") pod \"ceilometer-0\" (UID: \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\") " pod="openstack/ceilometer-0" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.283791 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\") " pod="openstack/ceilometer-0" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.283921 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\") " pod="openstack/ceilometer-0" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.283957 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-run-httpd\") pod \"ceilometer-0\" (UID: \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\") " pod="openstack/ceilometer-0" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.283990 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-scripts\") pod \"ceilometer-0\" (UID: \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\") " pod="openstack/ceilometer-0" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.284031 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb2nx\" (UniqueName: \"kubernetes.io/projected/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-kube-api-access-lb2nx\") pod \"ceilometer-0\" (UID: \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\") " pod="openstack/ceilometer-0" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.285879 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-run-httpd\") pod \"ceilometer-0\" (UID: \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\") " pod="openstack/ceilometer-0" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.286355 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-log-httpd\") pod \"ceilometer-0\" (UID: \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\") " pod="openstack/ceilometer-0" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.289201 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\") " pod="openstack/ceilometer-0" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 
22:19:06.295366 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-scripts\") pod \"ceilometer-0\" (UID: \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\") " pod="openstack/ceilometer-0" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.295661 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-config-data\") pod \"ceilometer-0\" (UID: \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\") " pod="openstack/ceilometer-0" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.300532 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\") " pod="openstack/ceilometer-0" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.314517 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb2nx\" (UniqueName: \"kubernetes.io/projected/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-kube-api-access-lb2nx\") pod \"ceilometer-0\" (UID: \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\") " pod="openstack/ceilometer-0" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.326127 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.811782 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.918469 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28ff39f2-d094-48dc-90d6-4bc6b65be8bd","Type":"ContainerStarted","Data":"a3ca2142c86eee5a861bd04ed671df81f465c10b5f71dbdd27e6b9cdc1a169de"} Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.921625 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 26 22:19:06 crc kubenswrapper[4910]: I0226 22:19:06.921659 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 26 22:19:07 crc kubenswrapper[4910]: I0226 22:19:07.452904 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 26 22:19:07 crc kubenswrapper[4910]: I0226 22:19:07.915581 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3620c738-9c7b-4b1d-9dd6-b369a50ca2d6" path="/var/lib/kubelet/pods/3620c738-9c7b-4b1d-9dd6-b369a50ca2d6/volumes" Feb 26 22:19:07 crc kubenswrapper[4910]: I0226 22:19:07.934560 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef" containerName="nova-cell0-conductor-conductor" containerID="cri-o://86b195322b165f23f8762fba209eda7139abcc93e056d7b41f723b5102f97a8e" gracePeriod=30 Feb 26 22:19:07 crc kubenswrapper[4910]: I0226 22:19:07.934885 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28ff39f2-d094-48dc-90d6-4bc6b65be8bd","Type":"ContainerStarted","Data":"cb3a3fc05fd89fd09cc9f02de46069794d65813bc2a5affd09adcc11e5fb39cc"} Feb 26 22:19:07 crc kubenswrapper[4910]: E0226 22:19:07.998547 
4910 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdfbe459_2ae5_4d85_9d94_7aeb0c845ead.slice\": RecentStats: unable to find data in memory cache]" Feb 26 22:19:08 crc kubenswrapper[4910]: I0226 22:19:08.043961 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:19:08 crc kubenswrapper[4910]: I0226 22:19:08.954798 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28ff39f2-d094-48dc-90d6-4bc6b65be8bd","Type":"ContainerStarted","Data":"25519e20d68ebc4ce09a0bc3541700b9c8016c634dfabfec5e63e09a603e96ff"} Feb 26 22:19:08 crc kubenswrapper[4910]: I0226 22:19:08.954819 4910 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 22:19:08 crc kubenswrapper[4910]: I0226 22:19:08.955150 4910 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 22:19:08 crc kubenswrapper[4910]: I0226 22:19:08.995259 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 26 22:19:08 crc kubenswrapper[4910]: I0226 22:19:08.995702 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 26 22:19:09 crc kubenswrapper[4910]: I0226 22:19:09.965249 4910 generic.go:334] "Generic (PLEG): container finished" podID="4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef" containerID="86b195322b165f23f8762fba209eda7139abcc93e056d7b41f723b5102f97a8e" exitCode=0 Feb 26 22:19:09 crc kubenswrapper[4910]: I0226 22:19:09.965440 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef","Type":"ContainerDied","Data":"86b195322b165f23f8762fba209eda7139abcc93e056d7b41f723b5102f97a8e"} Feb 26 22:19:10 crc kubenswrapper[4910]: I0226 22:19:10.813253 
4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 26 22:19:10 crc kubenswrapper[4910]: I0226 22:19:10.989762 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef-config-data\") pod \"4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef\" (UID: \"4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef\") " Feb 26 22:19:10 crc kubenswrapper[4910]: I0226 22:19:10.989853 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npgp4\" (UniqueName: \"kubernetes.io/projected/4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef-kube-api-access-npgp4\") pod \"4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef\" (UID: \"4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef\") " Feb 26 22:19:10 crc kubenswrapper[4910]: I0226 22:19:10.990275 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef-combined-ca-bundle\") pod \"4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef\" (UID: \"4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef\") " Feb 26 22:19:10 crc kubenswrapper[4910]: I0226 22:19:10.996873 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28ff39f2-d094-48dc-90d6-4bc6b65be8bd","Type":"ContainerStarted","Data":"e640cb9be090bb9c20094db0b18fc2e68b222131fc74c6f77f3f1441cffa7c3b"} Feb 26 22:19:10 crc kubenswrapper[4910]: I0226 22:19:10.997178 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef-kube-api-access-npgp4" (OuterVolumeSpecName: "kube-api-access-npgp4") pod "4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef" (UID: "4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef"). InnerVolumeSpecName "kube-api-access-npgp4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:19:10 crc kubenswrapper[4910]: I0226 22:19:10.999074 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef","Type":"ContainerDied","Data":"57bf6fc9bdd25da542d5715585bef74fbdf5b2d7e3757bbc17f572c6ee93d7b4"} Feb 26 22:19:10 crc kubenswrapper[4910]: I0226 22:19:10.999092 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 26 22:19:10 crc kubenswrapper[4910]: I0226 22:19:10.999212 4910 scope.go:117] "RemoveContainer" containerID="86b195322b165f23f8762fba209eda7139abcc93e056d7b41f723b5102f97a8e" Feb 26 22:19:11 crc kubenswrapper[4910]: I0226 22:19:11.016952 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef-config-data" (OuterVolumeSpecName: "config-data") pod "4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef" (UID: "4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:19:11 crc kubenswrapper[4910]: I0226 22:19:11.028550 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef" (UID: "4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:19:11 crc kubenswrapper[4910]: I0226 22:19:11.092328 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:11 crc kubenswrapper[4910]: I0226 22:19:11.092352 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:11 crc kubenswrapper[4910]: I0226 22:19:11.092363 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npgp4\" (UniqueName: \"kubernetes.io/projected/4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef-kube-api-access-npgp4\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:11 crc kubenswrapper[4910]: I0226 22:19:11.349125 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 26 22:19:11 crc kubenswrapper[4910]: I0226 22:19:11.360141 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 26 22:19:11 crc kubenswrapper[4910]: I0226 22:19:11.372304 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 26 22:19:11 crc kubenswrapper[4910]: E0226 22:19:11.372907 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef" containerName="nova-cell0-conductor-conductor" Feb 26 22:19:11 crc kubenswrapper[4910]: I0226 22:19:11.372932 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef" containerName="nova-cell0-conductor-conductor" Feb 26 22:19:11 crc kubenswrapper[4910]: I0226 22:19:11.373242 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef" containerName="nova-cell0-conductor-conductor" Feb 26 22:19:11 
crc kubenswrapper[4910]: I0226 22:19:11.374146 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 26 22:19:11 crc kubenswrapper[4910]: I0226 22:19:11.385464 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 26 22:19:11 crc kubenswrapper[4910]: I0226 22:19:11.385511 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-gw5l8" Feb 26 22:19:11 crc kubenswrapper[4910]: I0226 22:19:11.394191 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 26 22:19:11 crc kubenswrapper[4910]: I0226 22:19:11.503289 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab1e5b7b-4ac0-4e1d-9a7a-d8312a973e0b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ab1e5b7b-4ac0-4e1d-9a7a-d8312a973e0b\") " pod="openstack/nova-cell0-conductor-0" Feb 26 22:19:11 crc kubenswrapper[4910]: I0226 22:19:11.503355 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab1e5b7b-4ac0-4e1d-9a7a-d8312a973e0b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ab1e5b7b-4ac0-4e1d-9a7a-d8312a973e0b\") " pod="openstack/nova-cell0-conductor-0" Feb 26 22:19:11 crc kubenswrapper[4910]: I0226 22:19:11.503439 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n958\" (UniqueName: \"kubernetes.io/projected/ab1e5b7b-4ac0-4e1d-9a7a-d8312a973e0b-kube-api-access-9n958\") pod \"nova-cell0-conductor-0\" (UID: \"ab1e5b7b-4ac0-4e1d-9a7a-d8312a973e0b\") " pod="openstack/nova-cell0-conductor-0" Feb 26 22:19:11 crc kubenswrapper[4910]: I0226 22:19:11.606285 4910 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab1e5b7b-4ac0-4e1d-9a7a-d8312a973e0b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ab1e5b7b-4ac0-4e1d-9a7a-d8312a973e0b\") " pod="openstack/nova-cell0-conductor-0" Feb 26 22:19:11 crc kubenswrapper[4910]: I0226 22:19:11.606393 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab1e5b7b-4ac0-4e1d-9a7a-d8312a973e0b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ab1e5b7b-4ac0-4e1d-9a7a-d8312a973e0b\") " pod="openstack/nova-cell0-conductor-0" Feb 26 22:19:11 crc kubenswrapper[4910]: I0226 22:19:11.606498 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n958\" (UniqueName: \"kubernetes.io/projected/ab1e5b7b-4ac0-4e1d-9a7a-d8312a973e0b-kube-api-access-9n958\") pod \"nova-cell0-conductor-0\" (UID: \"ab1e5b7b-4ac0-4e1d-9a7a-d8312a973e0b\") " pod="openstack/nova-cell0-conductor-0" Feb 26 22:19:11 crc kubenswrapper[4910]: I0226 22:19:11.611501 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab1e5b7b-4ac0-4e1d-9a7a-d8312a973e0b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ab1e5b7b-4ac0-4e1d-9a7a-d8312a973e0b\") " pod="openstack/nova-cell0-conductor-0" Feb 26 22:19:11 crc kubenswrapper[4910]: I0226 22:19:11.619586 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab1e5b7b-4ac0-4e1d-9a7a-d8312a973e0b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ab1e5b7b-4ac0-4e1d-9a7a-d8312a973e0b\") " pod="openstack/nova-cell0-conductor-0" Feb 26 22:19:11 crc kubenswrapper[4910]: I0226 22:19:11.624470 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n958\" (UniqueName: 
\"kubernetes.io/projected/ab1e5b7b-4ac0-4e1d-9a7a-d8312a973e0b-kube-api-access-9n958\") pod \"nova-cell0-conductor-0\" (UID: \"ab1e5b7b-4ac0-4e1d-9a7a-d8312a973e0b\") " pod="openstack/nova-cell0-conductor-0" Feb 26 22:19:11 crc kubenswrapper[4910]: I0226 22:19:11.711020 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 26 22:19:11 crc kubenswrapper[4910]: I0226 22:19:11.946211 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef" path="/var/lib/kubelet/pods/4c48d54f-d3a5-4c5b-9979-e1a5b5b446ef/volumes" Feb 26 22:19:12 crc kubenswrapper[4910]: W0226 22:19:12.351057 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab1e5b7b_4ac0_4e1d_9a7a_d8312a973e0b.slice/crio-6a259aff54f1ed24816dfd06afb3c0da1142939c38becc8f27f915d5a113f553 WatchSource:0}: Error finding container 6a259aff54f1ed24816dfd06afb3c0da1142939c38becc8f27f915d5a113f553: Status 404 returned error can't find the container with id 6a259aff54f1ed24816dfd06afb3c0da1142939c38becc8f27f915d5a113f553 Feb 26 22:19:12 crc kubenswrapper[4910]: I0226 22:19:12.355877 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 26 22:19:13 crc kubenswrapper[4910]: I0226 22:19:13.086226 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ab1e5b7b-4ac0-4e1d-9a7a-d8312a973e0b","Type":"ContainerStarted","Data":"74164516dd16e26b20d24df8b3112aa2114d01fc8cf8773e201c655a91df457d"} Feb 26 22:19:13 crc kubenswrapper[4910]: I0226 22:19:13.086273 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ab1e5b7b-4ac0-4e1d-9a7a-d8312a973e0b","Type":"ContainerStarted","Data":"6a259aff54f1ed24816dfd06afb3c0da1142939c38becc8f27f915d5a113f553"} Feb 26 22:19:13 crc kubenswrapper[4910]: I0226 
22:19:13.087599 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 26 22:19:13 crc kubenswrapper[4910]: I0226 22:19:13.090137 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28ff39f2-d094-48dc-90d6-4bc6b65be8bd","Type":"ContainerStarted","Data":"8b37f3a59f665b496a9cbe78440cccaaf4da630f15722cfd9c1bcd762a08c431"} Feb 26 22:19:13 crc kubenswrapper[4910]: I0226 22:19:13.090307 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28ff39f2-d094-48dc-90d6-4bc6b65be8bd" containerName="ceilometer-central-agent" containerID="cri-o://cb3a3fc05fd89fd09cc9f02de46069794d65813bc2a5affd09adcc11e5fb39cc" gracePeriod=30 Feb 26 22:19:13 crc kubenswrapper[4910]: I0226 22:19:13.090591 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 22:19:13 crc kubenswrapper[4910]: I0226 22:19:13.090648 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28ff39f2-d094-48dc-90d6-4bc6b65be8bd" containerName="proxy-httpd" containerID="cri-o://8b37f3a59f665b496a9cbe78440cccaaf4da630f15722cfd9c1bcd762a08c431" gracePeriod=30 Feb 26 22:19:13 crc kubenswrapper[4910]: I0226 22:19:13.090702 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28ff39f2-d094-48dc-90d6-4bc6b65be8bd" containerName="sg-core" containerID="cri-o://e640cb9be090bb9c20094db0b18fc2e68b222131fc74c6f77f3f1441cffa7c3b" gracePeriod=30 Feb 26 22:19:13 crc kubenswrapper[4910]: I0226 22:19:13.090744 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28ff39f2-d094-48dc-90d6-4bc6b65be8bd" containerName="ceilometer-notification-agent" containerID="cri-o://25519e20d68ebc4ce09a0bc3541700b9c8016c634dfabfec5e63e09a603e96ff" gracePeriod=30 Feb 26 22:19:13 
crc kubenswrapper[4910]: I0226 22:19:13.114714 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.114693939 podStartE2EDuration="2.114693939s" podCreationTimestamp="2026-02-26 22:19:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:19:13.112788337 +0000 UTC m=+1438.192278888" watchObservedRunningTime="2026-02-26 22:19:13.114693939 +0000 UTC m=+1438.194184490" Feb 26 22:19:13 crc kubenswrapper[4910]: I0226 22:19:13.167482 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.793371441 podStartE2EDuration="8.167460937s" podCreationTimestamp="2026-02-26 22:19:05 +0000 UTC" firstStartedPulling="2026-02-26 22:19:06.810451335 +0000 UTC m=+1431.889941876" lastFinishedPulling="2026-02-26 22:19:12.184540831 +0000 UTC m=+1437.264031372" observedRunningTime="2026-02-26 22:19:13.154890605 +0000 UTC m=+1438.234381226" watchObservedRunningTime="2026-02-26 22:19:13.167460937 +0000 UTC m=+1438.246951478" Feb 26 22:19:14 crc kubenswrapper[4910]: I0226 22:19:14.106841 4910 generic.go:334] "Generic (PLEG): container finished" podID="28ff39f2-d094-48dc-90d6-4bc6b65be8bd" containerID="8b37f3a59f665b496a9cbe78440cccaaf4da630f15722cfd9c1bcd762a08c431" exitCode=0 Feb 26 22:19:14 crc kubenswrapper[4910]: I0226 22:19:14.107086 4910 generic.go:334] "Generic (PLEG): container finished" podID="28ff39f2-d094-48dc-90d6-4bc6b65be8bd" containerID="e640cb9be090bb9c20094db0b18fc2e68b222131fc74c6f77f3f1441cffa7c3b" exitCode=2 Feb 26 22:19:14 crc kubenswrapper[4910]: I0226 22:19:14.107095 4910 generic.go:334] "Generic (PLEG): container finished" podID="28ff39f2-d094-48dc-90d6-4bc6b65be8bd" containerID="25519e20d68ebc4ce09a0bc3541700b9c8016c634dfabfec5e63e09a603e96ff" exitCode=0 Feb 26 22:19:14 crc kubenswrapper[4910]: I0226 22:19:14.107930 4910 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28ff39f2-d094-48dc-90d6-4bc6b65be8bd","Type":"ContainerDied","Data":"8b37f3a59f665b496a9cbe78440cccaaf4da630f15722cfd9c1bcd762a08c431"} Feb 26 22:19:14 crc kubenswrapper[4910]: I0226 22:19:14.107955 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28ff39f2-d094-48dc-90d6-4bc6b65be8bd","Type":"ContainerDied","Data":"e640cb9be090bb9c20094db0b18fc2e68b222131fc74c6f77f3f1441cffa7c3b"} Feb 26 22:19:14 crc kubenswrapper[4910]: I0226 22:19:14.107965 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28ff39f2-d094-48dc-90d6-4bc6b65be8bd","Type":"ContainerDied","Data":"25519e20d68ebc4ce09a0bc3541700b9c8016c634dfabfec5e63e09a603e96ff"} Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.029615 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.152334 4910 generic.go:334] "Generic (PLEG): container finished" podID="28ff39f2-d094-48dc-90d6-4bc6b65be8bd" containerID="cb3a3fc05fd89fd09cc9f02de46069794d65813bc2a5affd09adcc11e5fb39cc" exitCode=0 Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.152395 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28ff39f2-d094-48dc-90d6-4bc6b65be8bd","Type":"ContainerDied","Data":"cb3a3fc05fd89fd09cc9f02de46069794d65813bc2a5affd09adcc11e5fb39cc"} Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.152617 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28ff39f2-d094-48dc-90d6-4bc6b65be8bd","Type":"ContainerDied","Data":"a3ca2142c86eee5a861bd04ed671df81f465c10b5f71dbdd27e6b9cdc1a169de"} Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.152639 4910 scope.go:117] "RemoveContainer" 
containerID="8b37f3a59f665b496a9cbe78440cccaaf4da630f15722cfd9c1bcd762a08c431" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.152432 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.173045 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-log-httpd\") pod \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\" (UID: \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\") " Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.173143 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-run-httpd\") pod \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\" (UID: \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\") " Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.173317 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-combined-ca-bundle\") pod \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\" (UID: \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\") " Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.173538 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lb2nx\" (UniqueName: \"kubernetes.io/projected/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-kube-api-access-lb2nx\") pod \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\" (UID: \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\") " Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.173584 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-config-data\") pod \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\" (UID: 
\"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\") " Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.173612 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-scripts\") pod \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\" (UID: \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\") " Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.173635 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-sg-core-conf-yaml\") pod \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\" (UID: \"28ff39f2-d094-48dc-90d6-4bc6b65be8bd\") " Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.177777 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "28ff39f2-d094-48dc-90d6-4bc6b65be8bd" (UID: "28ff39f2-d094-48dc-90d6-4bc6b65be8bd"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.182384 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "28ff39f2-d094-48dc-90d6-4bc6b65be8bd" (UID: "28ff39f2-d094-48dc-90d6-4bc6b65be8bd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.187658 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-kube-api-access-lb2nx" (OuterVolumeSpecName: "kube-api-access-lb2nx") pod "28ff39f2-d094-48dc-90d6-4bc6b65be8bd" (UID: "28ff39f2-d094-48dc-90d6-4bc6b65be8bd"). InnerVolumeSpecName "kube-api-access-lb2nx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.197010 4910 scope.go:117] "RemoveContainer" containerID="e640cb9be090bb9c20094db0b18fc2e68b222131fc74c6f77f3f1441cffa7c3b" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.198745 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-scripts" (OuterVolumeSpecName: "scripts") pod "28ff39f2-d094-48dc-90d6-4bc6b65be8bd" (UID: "28ff39f2-d094-48dc-90d6-4bc6b65be8bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.212614 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "28ff39f2-d094-48dc-90d6-4bc6b65be8bd" (UID: "28ff39f2-d094-48dc-90d6-4bc6b65be8bd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.270409 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28ff39f2-d094-48dc-90d6-4bc6b65be8bd" (UID: "28ff39f2-d094-48dc-90d6-4bc6b65be8bd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.276352 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lb2nx\" (UniqueName: \"kubernetes.io/projected/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-kube-api-access-lb2nx\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.276375 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.276384 4910 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.276392 4910 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.276402 4910 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.276409 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.287998 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-config-data" (OuterVolumeSpecName: "config-data") pod "28ff39f2-d094-48dc-90d6-4bc6b65be8bd" (UID: "28ff39f2-d094-48dc-90d6-4bc6b65be8bd"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.299564 4910 scope.go:117] "RemoveContainer" containerID="25519e20d68ebc4ce09a0bc3541700b9c8016c634dfabfec5e63e09a603e96ff" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.319849 4910 scope.go:117] "RemoveContainer" containerID="cb3a3fc05fd89fd09cc9f02de46069794d65813bc2a5affd09adcc11e5fb39cc" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.342205 4910 scope.go:117] "RemoveContainer" containerID="8b37f3a59f665b496a9cbe78440cccaaf4da630f15722cfd9c1bcd762a08c431" Feb 26 22:19:18 crc kubenswrapper[4910]: E0226 22:19:18.342585 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b37f3a59f665b496a9cbe78440cccaaf4da630f15722cfd9c1bcd762a08c431\": container with ID starting with 8b37f3a59f665b496a9cbe78440cccaaf4da630f15722cfd9c1bcd762a08c431 not found: ID does not exist" containerID="8b37f3a59f665b496a9cbe78440cccaaf4da630f15722cfd9c1bcd762a08c431" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.342620 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b37f3a59f665b496a9cbe78440cccaaf4da630f15722cfd9c1bcd762a08c431"} err="failed to get container status \"8b37f3a59f665b496a9cbe78440cccaaf4da630f15722cfd9c1bcd762a08c431\": rpc error: code = NotFound desc = could not find container \"8b37f3a59f665b496a9cbe78440cccaaf4da630f15722cfd9c1bcd762a08c431\": container with ID starting with 8b37f3a59f665b496a9cbe78440cccaaf4da630f15722cfd9c1bcd762a08c431 not found: ID does not exist" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.342644 4910 scope.go:117] "RemoveContainer" containerID="e640cb9be090bb9c20094db0b18fc2e68b222131fc74c6f77f3f1441cffa7c3b" Feb 26 22:19:18 crc kubenswrapper[4910]: E0226 22:19:18.342973 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"e640cb9be090bb9c20094db0b18fc2e68b222131fc74c6f77f3f1441cffa7c3b\": container with ID starting with e640cb9be090bb9c20094db0b18fc2e68b222131fc74c6f77f3f1441cffa7c3b not found: ID does not exist" containerID="e640cb9be090bb9c20094db0b18fc2e68b222131fc74c6f77f3f1441cffa7c3b" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.343012 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e640cb9be090bb9c20094db0b18fc2e68b222131fc74c6f77f3f1441cffa7c3b"} err="failed to get container status \"e640cb9be090bb9c20094db0b18fc2e68b222131fc74c6f77f3f1441cffa7c3b\": rpc error: code = NotFound desc = could not find container \"e640cb9be090bb9c20094db0b18fc2e68b222131fc74c6f77f3f1441cffa7c3b\": container with ID starting with e640cb9be090bb9c20094db0b18fc2e68b222131fc74c6f77f3f1441cffa7c3b not found: ID does not exist" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.343036 4910 scope.go:117] "RemoveContainer" containerID="25519e20d68ebc4ce09a0bc3541700b9c8016c634dfabfec5e63e09a603e96ff" Feb 26 22:19:18 crc kubenswrapper[4910]: E0226 22:19:18.343408 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25519e20d68ebc4ce09a0bc3541700b9c8016c634dfabfec5e63e09a603e96ff\": container with ID starting with 25519e20d68ebc4ce09a0bc3541700b9c8016c634dfabfec5e63e09a603e96ff not found: ID does not exist" containerID="25519e20d68ebc4ce09a0bc3541700b9c8016c634dfabfec5e63e09a603e96ff" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.343423 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25519e20d68ebc4ce09a0bc3541700b9c8016c634dfabfec5e63e09a603e96ff"} err="failed to get container status \"25519e20d68ebc4ce09a0bc3541700b9c8016c634dfabfec5e63e09a603e96ff\": rpc error: code = NotFound desc = could not find container 
\"25519e20d68ebc4ce09a0bc3541700b9c8016c634dfabfec5e63e09a603e96ff\": container with ID starting with 25519e20d68ebc4ce09a0bc3541700b9c8016c634dfabfec5e63e09a603e96ff not found: ID does not exist" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.343456 4910 scope.go:117] "RemoveContainer" containerID="cb3a3fc05fd89fd09cc9f02de46069794d65813bc2a5affd09adcc11e5fb39cc" Feb 26 22:19:18 crc kubenswrapper[4910]: E0226 22:19:18.343696 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb3a3fc05fd89fd09cc9f02de46069794d65813bc2a5affd09adcc11e5fb39cc\": container with ID starting with cb3a3fc05fd89fd09cc9f02de46069794d65813bc2a5affd09adcc11e5fb39cc not found: ID does not exist" containerID="cb3a3fc05fd89fd09cc9f02de46069794d65813bc2a5affd09adcc11e5fb39cc" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.343718 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb3a3fc05fd89fd09cc9f02de46069794d65813bc2a5affd09adcc11e5fb39cc"} err="failed to get container status \"cb3a3fc05fd89fd09cc9f02de46069794d65813bc2a5affd09adcc11e5fb39cc\": rpc error: code = NotFound desc = could not find container \"cb3a3fc05fd89fd09cc9f02de46069794d65813bc2a5affd09adcc11e5fb39cc\": container with ID starting with cb3a3fc05fd89fd09cc9f02de46069794d65813bc2a5affd09adcc11e5fb39cc not found: ID does not exist" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.379034 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ff39f2-d094-48dc-90d6-4bc6b65be8bd-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.489371 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.499821 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 
22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.525235 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:19:18 crc kubenswrapper[4910]: E0226 22:19:18.525683 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ff39f2-d094-48dc-90d6-4bc6b65be8bd" containerName="sg-core" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.525701 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ff39f2-d094-48dc-90d6-4bc6b65be8bd" containerName="sg-core" Feb 26 22:19:18 crc kubenswrapper[4910]: E0226 22:19:18.525715 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ff39f2-d094-48dc-90d6-4bc6b65be8bd" containerName="proxy-httpd" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.525723 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ff39f2-d094-48dc-90d6-4bc6b65be8bd" containerName="proxy-httpd" Feb 26 22:19:18 crc kubenswrapper[4910]: E0226 22:19:18.525743 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ff39f2-d094-48dc-90d6-4bc6b65be8bd" containerName="ceilometer-central-agent" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.525751 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ff39f2-d094-48dc-90d6-4bc6b65be8bd" containerName="ceilometer-central-agent" Feb 26 22:19:18 crc kubenswrapper[4910]: E0226 22:19:18.525780 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ff39f2-d094-48dc-90d6-4bc6b65be8bd" containerName="ceilometer-notification-agent" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.525789 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ff39f2-d094-48dc-90d6-4bc6b65be8bd" containerName="ceilometer-notification-agent" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.525974 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="28ff39f2-d094-48dc-90d6-4bc6b65be8bd" containerName="ceilometer-notification-agent" Feb 26 22:19:18 crc 
kubenswrapper[4910]: I0226 22:19:18.525996 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="28ff39f2-d094-48dc-90d6-4bc6b65be8bd" containerName="sg-core" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.526012 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="28ff39f2-d094-48dc-90d6-4bc6b65be8bd" containerName="ceilometer-central-agent" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.526056 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="28ff39f2-d094-48dc-90d6-4bc6b65be8bd" containerName="proxy-httpd" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.527903 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.530228 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.530450 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.535489 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.688494 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-config-data\") pod \"ceilometer-0\" (UID: \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\") " pod="openstack/ceilometer-0" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.689373 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\") " pod="openstack/ceilometer-0" Feb 26 22:19:18 crc 
kubenswrapper[4910]: I0226 22:19:18.689579 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-scripts\") pod \"ceilometer-0\" (UID: \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\") " pod="openstack/ceilometer-0" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.689785 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-log-httpd\") pod \"ceilometer-0\" (UID: \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\") " pod="openstack/ceilometer-0" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.689863 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjmtp\" (UniqueName: \"kubernetes.io/projected/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-kube-api-access-sjmtp\") pod \"ceilometer-0\" (UID: \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\") " pod="openstack/ceilometer-0" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.690026 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\") " pod="openstack/ceilometer-0" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.690061 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-run-httpd\") pod \"ceilometer-0\" (UID: \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\") " pod="openstack/ceilometer-0" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.791808 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-config-data\") pod \"ceilometer-0\" (UID: \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\") " pod="openstack/ceilometer-0" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.792106 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\") " pod="openstack/ceilometer-0" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.792296 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-scripts\") pod \"ceilometer-0\" (UID: \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\") " pod="openstack/ceilometer-0" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.792482 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-log-httpd\") pod \"ceilometer-0\" (UID: \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\") " pod="openstack/ceilometer-0" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.792601 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjmtp\" (UniqueName: \"kubernetes.io/projected/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-kube-api-access-sjmtp\") pod \"ceilometer-0\" (UID: \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\") " pod="openstack/ceilometer-0" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.793295 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-log-httpd\") pod \"ceilometer-0\" (UID: \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\") " pod="openstack/ceilometer-0" Feb 26 22:19:18 crc 
kubenswrapper[4910]: I0226 22:19:18.793567 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\") " pod="openstack/ceilometer-0" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.793680 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-run-httpd\") pod \"ceilometer-0\" (UID: \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\") " pod="openstack/ceilometer-0" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.794306 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-run-httpd\") pod \"ceilometer-0\" (UID: \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\") " pod="openstack/ceilometer-0" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.797093 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-scripts\") pod \"ceilometer-0\" (UID: \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\") " pod="openstack/ceilometer-0" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.799109 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-config-data\") pod \"ceilometer-0\" (UID: \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\") " pod="openstack/ceilometer-0" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.801575 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\") " pod="openstack/ceilometer-0" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.808788 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\") " pod="openstack/ceilometer-0" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.821861 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjmtp\" (UniqueName: \"kubernetes.io/projected/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-kube-api-access-sjmtp\") pod \"ceilometer-0\" (UID: \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\") " pod="openstack/ceilometer-0" Feb 26 22:19:18 crc kubenswrapper[4910]: I0226 22:19:18.879893 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 22:19:19 crc kubenswrapper[4910]: W0226 22:19:19.394237 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fcb104e_6cec_4785_97a5_4afbbc1ff73b.slice/crio-fc15eca9c074ad632f0ca06cc16afc8b4ef0b89307ca823ac4a0cb593677a12a WatchSource:0}: Error finding container fc15eca9c074ad632f0ca06cc16afc8b4ef0b89307ca823ac4a0cb593677a12a: Status 404 returned error can't find the container with id fc15eca9c074ad632f0ca06cc16afc8b4ef0b89307ca823ac4a0cb593677a12a Feb 26 22:19:19 crc kubenswrapper[4910]: I0226 22:19:19.394482 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:19:19 crc kubenswrapper[4910]: I0226 22:19:19.944979 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28ff39f2-d094-48dc-90d6-4bc6b65be8bd" path="/var/lib/kubelet/pods/28ff39f2-d094-48dc-90d6-4bc6b65be8bd/volumes" Feb 26 22:19:20 crc kubenswrapper[4910]: I0226 22:19:20.173714 4910 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"6fcb104e-6cec-4785-97a5-4afbbc1ff73b","Type":"ContainerStarted","Data":"fc15eca9c074ad632f0ca06cc16afc8b4ef0b89307ca823ac4a0cb593677a12a"} Feb 26 22:19:21 crc kubenswrapper[4910]: I0226 22:19:21.186112 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6fcb104e-6cec-4785-97a5-4afbbc1ff73b","Type":"ContainerStarted","Data":"055f3ecd53c3fe72cdcddaff56b6698ca51aaf8b3201aa96c28f0c6037c6e0a3"} Feb 26 22:19:21 crc kubenswrapper[4910]: I0226 22:19:21.186732 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6fcb104e-6cec-4785-97a5-4afbbc1ff73b","Type":"ContainerStarted","Data":"41f93ec9dc76644e7e787e094bcd639370ea663f261abf532fa6fa24f6118f97"} Feb 26 22:19:21 crc kubenswrapper[4910]: I0226 22:19:21.747283 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.196309 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6fcb104e-6cec-4785-97a5-4afbbc1ff73b","Type":"ContainerStarted","Data":"d40a0656d9c5c7974813da63872c8ab899d1df1686de72819f74d777ef282b10"} Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.259521 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-z8wm4"] Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.261008 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-z8wm4" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.262506 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.263561 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.281923 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-z8wm4"] Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.381734 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94xx2\" (UniqueName: \"kubernetes.io/projected/c1a9974e-7def-47e8-b055-fc5412319aca-kube-api-access-94xx2\") pod \"nova-cell0-cell-mapping-z8wm4\" (UID: \"c1a9974e-7def-47e8-b055-fc5412319aca\") " pod="openstack/nova-cell0-cell-mapping-z8wm4" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.381825 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a9974e-7def-47e8-b055-fc5412319aca-config-data\") pod \"nova-cell0-cell-mapping-z8wm4\" (UID: \"c1a9974e-7def-47e8-b055-fc5412319aca\") " pod="openstack/nova-cell0-cell-mapping-z8wm4" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.382130 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a9974e-7def-47e8-b055-fc5412319aca-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-z8wm4\" (UID: \"c1a9974e-7def-47e8-b055-fc5412319aca\") " pod="openstack/nova-cell0-cell-mapping-z8wm4" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.382237 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/c1a9974e-7def-47e8-b055-fc5412319aca-scripts\") pod \"nova-cell0-cell-mapping-z8wm4\" (UID: \"c1a9974e-7def-47e8-b055-fc5412319aca\") " pod="openstack/nova-cell0-cell-mapping-z8wm4" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.462720 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.473046 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.475461 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.480122 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.481931 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.483581 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a9974e-7def-47e8-b055-fc5412319aca-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-z8wm4\" (UID: \"c1a9974e-7def-47e8-b055-fc5412319aca\") " pod="openstack/nova-cell0-cell-mapping-z8wm4" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.483615 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a9974e-7def-47e8-b055-fc5412319aca-scripts\") pod \"nova-cell0-cell-mapping-z8wm4\" (UID: \"c1a9974e-7def-47e8-b055-fc5412319aca\") " pod="openstack/nova-cell0-cell-mapping-z8wm4" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.483661 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94xx2\" (UniqueName: 
\"kubernetes.io/projected/c1a9974e-7def-47e8-b055-fc5412319aca-kube-api-access-94xx2\") pod \"nova-cell0-cell-mapping-z8wm4\" (UID: \"c1a9974e-7def-47e8-b055-fc5412319aca\") " pod="openstack/nova-cell0-cell-mapping-z8wm4" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.483704 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a9974e-7def-47e8-b055-fc5412319aca-config-data\") pod \"nova-cell0-cell-mapping-z8wm4\" (UID: \"c1a9974e-7def-47e8-b055-fc5412319aca\") " pod="openstack/nova-cell0-cell-mapping-z8wm4" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.493414 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.497865 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.513305 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.513559 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a9974e-7def-47e8-b055-fc5412319aca-scripts\") pod \"nova-cell0-cell-mapping-z8wm4\" (UID: \"c1a9974e-7def-47e8-b055-fc5412319aca\") " pod="openstack/nova-cell0-cell-mapping-z8wm4" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.514062 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a9974e-7def-47e8-b055-fc5412319aca-config-data\") pod \"nova-cell0-cell-mapping-z8wm4\" (UID: \"c1a9974e-7def-47e8-b055-fc5412319aca\") " pod="openstack/nova-cell0-cell-mapping-z8wm4" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.519534 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c1a9974e-7def-47e8-b055-fc5412319aca-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-z8wm4\" (UID: \"c1a9974e-7def-47e8-b055-fc5412319aca\") " pod="openstack/nova-cell0-cell-mapping-z8wm4" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.528827 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.530190 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.534633 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.535831 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94xx2\" (UniqueName: \"kubernetes.io/projected/c1a9974e-7def-47e8-b055-fc5412319aca-kube-api-access-94xx2\") pod \"nova-cell0-cell-mapping-z8wm4\" (UID: \"c1a9974e-7def-47e8-b055-fc5412319aca\") " pod="openstack/nova-cell0-cell-mapping-z8wm4" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.571840 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.573148 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.576402 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-z8wm4" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.583945 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.591375 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/853703f3-eaa3-41f9-b316-ca5cd0e9aa6c-config-data\") pod \"nova-metadata-0\" (UID: \"853703f3-eaa3-41f9-b316-ca5cd0e9aa6c\") " pod="openstack/nova-metadata-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.595191 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlzld\" (UniqueName: \"kubernetes.io/projected/853703f3-eaa3-41f9-b316-ca5cd0e9aa6c-kube-api-access-tlzld\") pod \"nova-metadata-0\" (UID: \"853703f3-eaa3-41f9-b316-ca5cd0e9aa6c\") " pod="openstack/nova-metadata-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.595262 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7zcd\" (UniqueName: \"kubernetes.io/projected/57fd7488-e497-4abf-8875-bacde64f7cc3-kube-api-access-l7zcd\") pod \"nova-cell1-novncproxy-0\" (UID: \"57fd7488-e497-4abf-8875-bacde64f7cc3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.595313 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd-config-data\") pod \"nova-api-0\" (UID: \"ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd\") " pod="openstack/nova-api-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.595332 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv6rm\" (UniqueName: 
\"kubernetes.io/projected/ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd-kube-api-access-vv6rm\") pod \"nova-api-0\" (UID: \"ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd\") " pod="openstack/nova-api-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.595360 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57fd7488-e497-4abf-8875-bacde64f7cc3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"57fd7488-e497-4abf-8875-bacde64f7cc3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.595382 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/853703f3-eaa3-41f9-b316-ca5cd0e9aa6c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"853703f3-eaa3-41f9-b316-ca5cd0e9aa6c\") " pod="openstack/nova-metadata-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.595404 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74ce1646-bdb6-4532-ac1b-d6291167c9d6-config-data\") pod \"nova-scheduler-0\" (UID: \"74ce1646-bdb6-4532-ac1b-d6291167c9d6\") " pod="openstack/nova-scheduler-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.595445 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd6n5\" (UniqueName: \"kubernetes.io/projected/74ce1646-bdb6-4532-ac1b-d6291167c9d6-kube-api-access-rd6n5\") pod \"nova-scheduler-0\" (UID: \"74ce1646-bdb6-4532-ac1b-d6291167c9d6\") " pod="openstack/nova-scheduler-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.595471 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/853703f3-eaa3-41f9-b316-ca5cd0e9aa6c-logs\") pod 
\"nova-metadata-0\" (UID: \"853703f3-eaa3-41f9-b316-ca5cd0e9aa6c\") " pod="openstack/nova-metadata-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.595501 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74ce1646-bdb6-4532-ac1b-d6291167c9d6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"74ce1646-bdb6-4532-ac1b-d6291167c9d6\") " pod="openstack/nova-scheduler-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.595615 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57fd7488-e497-4abf-8875-bacde64f7cc3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"57fd7488-e497-4abf-8875-bacde64f7cc3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.595679 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd-logs\") pod \"nova-api-0\" (UID: \"ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd\") " pod="openstack/nova-api-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.595766 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd\") " pod="openstack/nova-api-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.595921 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.686322 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.741053 
4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlzld\" (UniqueName: \"kubernetes.io/projected/853703f3-eaa3-41f9-b316-ca5cd0e9aa6c-kube-api-access-tlzld\") pod \"nova-metadata-0\" (UID: \"853703f3-eaa3-41f9-b316-ca5cd0e9aa6c\") " pod="openstack/nova-metadata-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.742985 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7zcd\" (UniqueName: \"kubernetes.io/projected/57fd7488-e497-4abf-8875-bacde64f7cc3-kube-api-access-l7zcd\") pod \"nova-cell1-novncproxy-0\" (UID: \"57fd7488-e497-4abf-8875-bacde64f7cc3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.743053 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd-config-data\") pod \"nova-api-0\" (UID: \"ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd\") " pod="openstack/nova-api-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.743077 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv6rm\" (UniqueName: \"kubernetes.io/projected/ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd-kube-api-access-vv6rm\") pod \"nova-api-0\" (UID: \"ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd\") " pod="openstack/nova-api-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.743107 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57fd7488-e497-4abf-8875-bacde64f7cc3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"57fd7488-e497-4abf-8875-bacde64f7cc3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.743132 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/853703f3-eaa3-41f9-b316-ca5cd0e9aa6c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"853703f3-eaa3-41f9-b316-ca5cd0e9aa6c\") " pod="openstack/nova-metadata-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.743153 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74ce1646-bdb6-4532-ac1b-d6291167c9d6-config-data\") pod \"nova-scheduler-0\" (UID: \"74ce1646-bdb6-4532-ac1b-d6291167c9d6\") " pod="openstack/nova-scheduler-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.743206 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd6n5\" (UniqueName: \"kubernetes.io/projected/74ce1646-bdb6-4532-ac1b-d6291167c9d6-kube-api-access-rd6n5\") pod \"nova-scheduler-0\" (UID: \"74ce1646-bdb6-4532-ac1b-d6291167c9d6\") " pod="openstack/nova-scheduler-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.743232 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/853703f3-eaa3-41f9-b316-ca5cd0e9aa6c-logs\") pod \"nova-metadata-0\" (UID: \"853703f3-eaa3-41f9-b316-ca5cd0e9aa6c\") " pod="openstack/nova-metadata-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.743262 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74ce1646-bdb6-4532-ac1b-d6291167c9d6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"74ce1646-bdb6-4532-ac1b-d6291167c9d6\") " pod="openstack/nova-scheduler-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.743344 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57fd7488-e497-4abf-8875-bacde64f7cc3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"57fd7488-e497-4abf-8875-bacde64f7cc3\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.743383 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd-logs\") pod \"nova-api-0\" (UID: \"ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd\") " pod="openstack/nova-api-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.743467 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd\") " pod="openstack/nova-api-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.743493 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/853703f3-eaa3-41f9-b316-ca5cd0e9aa6c-config-data\") pod \"nova-metadata-0\" (UID: \"853703f3-eaa3-41f9-b316-ca5cd0e9aa6c\") " pod="openstack/nova-metadata-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.747448 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74ce1646-bdb6-4532-ac1b-d6291167c9d6-config-data\") pod \"nova-scheduler-0\" (UID: \"74ce1646-bdb6-4532-ac1b-d6291167c9d6\") " pod="openstack/nova-scheduler-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.749096 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/853703f3-eaa3-41f9-b316-ca5cd0e9aa6c-config-data\") pod \"nova-metadata-0\" (UID: \"853703f3-eaa3-41f9-b316-ca5cd0e9aa6c\") " pod="openstack/nova-metadata-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.749774 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/853703f3-eaa3-41f9-b316-ca5cd0e9aa6c-logs\") pod \"nova-metadata-0\" (UID: \"853703f3-eaa3-41f9-b316-ca5cd0e9aa6c\") " pod="openstack/nova-metadata-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.752639 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74ce1646-bdb6-4532-ac1b-d6291167c9d6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"74ce1646-bdb6-4532-ac1b-d6291167c9d6\") " pod="openstack/nova-scheduler-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.761076 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd-logs\") pod \"nova-api-0\" (UID: \"ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd\") " pod="openstack/nova-api-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.765885 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/853703f3-eaa3-41f9-b316-ca5cd0e9aa6c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"853703f3-eaa3-41f9-b316-ca5cd0e9aa6c\") " pod="openstack/nova-metadata-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.774613 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7zcd\" (UniqueName: \"kubernetes.io/projected/57fd7488-e497-4abf-8875-bacde64f7cc3-kube-api-access-l7zcd\") pod \"nova-cell1-novncproxy-0\" (UID: \"57fd7488-e497-4abf-8875-bacde64f7cc3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.780153 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlzld\" (UniqueName: \"kubernetes.io/projected/853703f3-eaa3-41f9-b316-ca5cd0e9aa6c-kube-api-access-tlzld\") pod \"nova-metadata-0\" (UID: \"853703f3-eaa3-41f9-b316-ca5cd0e9aa6c\") " pod="openstack/nova-metadata-0" Feb 26 22:19:22 crc 
kubenswrapper[4910]: I0226 22:19:22.780603 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv6rm\" (UniqueName: \"kubernetes.io/projected/ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd-kube-api-access-vv6rm\") pod \"nova-api-0\" (UID: \"ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd\") " pod="openstack/nova-api-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.789759 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57fd7488-e497-4abf-8875-bacde64f7cc3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"57fd7488-e497-4abf-8875-bacde64f7cc3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.791865 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd6n5\" (UniqueName: \"kubernetes.io/projected/74ce1646-bdb6-4532-ac1b-d6291167c9d6-kube-api-access-rd6n5\") pod \"nova-scheduler-0\" (UID: \"74ce1646-bdb6-4532-ac1b-d6291167c9d6\") " pod="openstack/nova-scheduler-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.810800 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57fd7488-e497-4abf-8875-bacde64f7cc3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"57fd7488-e497-4abf-8875-bacde64f7cc3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.812140 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd-config-data\") pod \"nova-api-0\" (UID: \"ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd\") " pod="openstack/nova-api-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.813738 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd\") " pod="openstack/nova-api-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.818498 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-dmjkg"] Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.835723 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.845822 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-ovsdbserver-nb\") pod \"dnsmasq-dns-884c8b8f5-dmjkg\" (UID: \"36f998e9-29f3-4464-9f84-8fa4e0efda10\") " pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.845884 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-config\") pod \"dnsmasq-dns-884c8b8f5-dmjkg\" (UID: \"36f998e9-29f3-4464-9f84-8fa4e0efda10\") " pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.845909 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-dns-swift-storage-0\") pod \"dnsmasq-dns-884c8b8f5-dmjkg\" (UID: \"36f998e9-29f3-4464-9f84-8fa4e0efda10\") " pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.845969 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-dns-svc\") pod 
\"dnsmasq-dns-884c8b8f5-dmjkg\" (UID: \"36f998e9-29f3-4464-9f84-8fa4e0efda10\") " pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.846574 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-ovsdbserver-sb\") pod \"dnsmasq-dns-884c8b8f5-dmjkg\" (UID: \"36f998e9-29f3-4464-9f84-8fa4e0efda10\") " pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.846768 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-676ns\" (UniqueName: \"kubernetes.io/projected/36f998e9-29f3-4464-9f84-8fa4e0efda10-kube-api-access-676ns\") pod \"dnsmasq-dns-884c8b8f5-dmjkg\" (UID: \"36f998e9-29f3-4464-9f84-8fa4e0efda10\") " pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.881875 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-dmjkg"] Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.906877 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.931687 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.940650 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.960894 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-676ns\" (UniqueName: \"kubernetes.io/projected/36f998e9-29f3-4464-9f84-8fa4e0efda10-kube-api-access-676ns\") pod \"dnsmasq-dns-884c8b8f5-dmjkg\" (UID: \"36f998e9-29f3-4464-9f84-8fa4e0efda10\") " pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.961066 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-ovsdbserver-nb\") pod \"dnsmasq-dns-884c8b8f5-dmjkg\" (UID: \"36f998e9-29f3-4464-9f84-8fa4e0efda10\") " pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.961259 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-config\") pod \"dnsmasq-dns-884c8b8f5-dmjkg\" (UID: \"36f998e9-29f3-4464-9f84-8fa4e0efda10\") " pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.961334 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-dns-swift-storage-0\") pod \"dnsmasq-dns-884c8b8f5-dmjkg\" (UID: \"36f998e9-29f3-4464-9f84-8fa4e0efda10\") " pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.961425 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-dns-svc\") pod \"dnsmasq-dns-884c8b8f5-dmjkg\" (UID: \"36f998e9-29f3-4464-9f84-8fa4e0efda10\") " pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" Feb 
26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.961698 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-ovsdbserver-sb\") pod \"dnsmasq-dns-884c8b8f5-dmjkg\" (UID: \"36f998e9-29f3-4464-9f84-8fa4e0efda10\") " pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.962745 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-ovsdbserver-sb\") pod \"dnsmasq-dns-884c8b8f5-dmjkg\" (UID: \"36f998e9-29f3-4464-9f84-8fa4e0efda10\") " pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.964497 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-ovsdbserver-nb\") pod \"dnsmasq-dns-884c8b8f5-dmjkg\" (UID: \"36f998e9-29f3-4464-9f84-8fa4e0efda10\") " pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.966668 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-config\") pod \"dnsmasq-dns-884c8b8f5-dmjkg\" (UID: \"36f998e9-29f3-4464-9f84-8fa4e0efda10\") " pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.966917 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-dns-svc\") pod \"dnsmasq-dns-884c8b8f5-dmjkg\" (UID: \"36f998e9-29f3-4464-9f84-8fa4e0efda10\") " pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.967034 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-dns-swift-storage-0\") pod \"dnsmasq-dns-884c8b8f5-dmjkg\" (UID: \"36f998e9-29f3-4464-9f84-8fa4e0efda10\") " pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" Feb 26 22:19:22 crc kubenswrapper[4910]: I0226 22:19:22.985934 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-676ns\" (UniqueName: \"kubernetes.io/projected/36f998e9-29f3-4464-9f84-8fa4e0efda10-kube-api-access-676ns\") pod \"dnsmasq-dns-884c8b8f5-dmjkg\" (UID: \"36f998e9-29f3-4464-9f84-8fa4e0efda10\") " pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" Feb 26 22:19:23 crc kubenswrapper[4910]: I0226 22:19:23.044791 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 22:19:23 crc kubenswrapper[4910]: I0226 22:19:23.207683 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" Feb 26 22:19:23 crc kubenswrapper[4910]: I0226 22:19:23.342663 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-z8wm4"] Feb 26 22:19:23 crc kubenswrapper[4910]: I0226 22:19:23.702205 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 22:19:24 crc kubenswrapper[4910]: I0226 22:19:24.101261 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 22:19:24 crc kubenswrapper[4910]: I0226 22:19:24.118878 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xmz7f"] Feb 26 22:19:24 crc kubenswrapper[4910]: I0226 22:19:24.120274 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xmz7f" Feb 26 22:19:24 crc kubenswrapper[4910]: I0226 22:19:24.128376 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 26 22:19:24 crc kubenswrapper[4910]: I0226 22:19:24.129611 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 26 22:19:24 crc kubenswrapper[4910]: I0226 22:19:24.135936 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 22:19:24 crc kubenswrapper[4910]: I0226 22:19:24.157841 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xmz7f"] Feb 26 22:19:24 crc kubenswrapper[4910]: I0226 22:19:24.212424 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 22:19:24 crc kubenswrapper[4910]: I0226 22:19:24.215547 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6vml\" (UniqueName: \"kubernetes.io/projected/231fa51c-3886-4460-b26b-029dfe9d2166-kube-api-access-w6vml\") pod \"nova-cell1-conductor-db-sync-xmz7f\" (UID: \"231fa51c-3886-4460-b26b-029dfe9d2166\") " pod="openstack/nova-cell1-conductor-db-sync-xmz7f" Feb 26 22:19:24 crc kubenswrapper[4910]: I0226 22:19:24.216226 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/231fa51c-3886-4460-b26b-029dfe9d2166-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xmz7f\" (UID: \"231fa51c-3886-4460-b26b-029dfe9d2166\") " pod="openstack/nova-cell1-conductor-db-sync-xmz7f" Feb 26 22:19:24 crc kubenswrapper[4910]: I0226 22:19:24.216379 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/231fa51c-3886-4460-b26b-029dfe9d2166-config-data\") pod \"nova-cell1-conductor-db-sync-xmz7f\" (UID: \"231fa51c-3886-4460-b26b-029dfe9d2166\") " pod="openstack/nova-cell1-conductor-db-sync-xmz7f" Feb 26 22:19:24 crc kubenswrapper[4910]: I0226 22:19:24.216407 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/231fa51c-3886-4460-b26b-029dfe9d2166-scripts\") pod \"nova-cell1-conductor-db-sync-xmz7f\" (UID: \"231fa51c-3886-4460-b26b-029dfe9d2166\") " pod="openstack/nova-cell1-conductor-db-sync-xmz7f" Feb 26 22:19:24 crc kubenswrapper[4910]: I0226 22:19:24.264609 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-dmjkg"] Feb 26 22:19:24 crc kubenswrapper[4910]: W0226 22:19:24.273294 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36f998e9_29f3_4464_9f84_8fa4e0efda10.slice/crio-64b257352a380107695ee0e9e1d35751dece94ed4bff50f81d64f57a4a5b1d7a WatchSource:0}: Error finding container 64b257352a380107695ee0e9e1d35751dece94ed4bff50f81d64f57a4a5b1d7a: Status 404 returned error can't find the container with id 64b257352a380107695ee0e9e1d35751dece94ed4bff50f81d64f57a4a5b1d7a Feb 26 22:19:24 crc kubenswrapper[4910]: I0226 22:19:24.316332 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-z8wm4" event={"ID":"c1a9974e-7def-47e8-b055-fc5412319aca","Type":"ContainerStarted","Data":"3d97fe51018179b31bccf7312461218857324cbb63ea1d435237cd969b07b806"} Feb 26 22:19:24 crc kubenswrapper[4910]: I0226 22:19:24.318267 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-z8wm4" event={"ID":"c1a9974e-7def-47e8-b055-fc5412319aca","Type":"ContainerStarted","Data":"af4c242aad053fa3ba4fedce5b6d9bce34bbbcbb9108125ce4149ced535456b2"} Feb 26 22:19:24 crc kubenswrapper[4910]: 
I0226 22:19:24.318660 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/231fa51c-3886-4460-b26b-029dfe9d2166-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xmz7f\" (UID: \"231fa51c-3886-4460-b26b-029dfe9d2166\") " pod="openstack/nova-cell1-conductor-db-sync-xmz7f" Feb 26 22:19:24 crc kubenswrapper[4910]: I0226 22:19:24.318720 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/231fa51c-3886-4460-b26b-029dfe9d2166-config-data\") pod \"nova-cell1-conductor-db-sync-xmz7f\" (UID: \"231fa51c-3886-4460-b26b-029dfe9d2166\") " pod="openstack/nova-cell1-conductor-db-sync-xmz7f" Feb 26 22:19:24 crc kubenswrapper[4910]: I0226 22:19:24.318736 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/231fa51c-3886-4460-b26b-029dfe9d2166-scripts\") pod \"nova-cell1-conductor-db-sync-xmz7f\" (UID: \"231fa51c-3886-4460-b26b-029dfe9d2166\") " pod="openstack/nova-cell1-conductor-db-sync-xmz7f" Feb 26 22:19:24 crc kubenswrapper[4910]: I0226 22:19:24.318764 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6vml\" (UniqueName: \"kubernetes.io/projected/231fa51c-3886-4460-b26b-029dfe9d2166-kube-api-access-w6vml\") pod \"nova-cell1-conductor-db-sync-xmz7f\" (UID: \"231fa51c-3886-4460-b26b-029dfe9d2166\") " pod="openstack/nova-cell1-conductor-db-sync-xmz7f" Feb 26 22:19:24 crc kubenswrapper[4910]: I0226 22:19:24.324579 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/231fa51c-3886-4460-b26b-029dfe9d2166-config-data\") pod \"nova-cell1-conductor-db-sync-xmz7f\" (UID: \"231fa51c-3886-4460-b26b-029dfe9d2166\") " pod="openstack/nova-cell1-conductor-db-sync-xmz7f" Feb 26 22:19:24 crc kubenswrapper[4910]: I0226 
22:19:24.324598 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/231fa51c-3886-4460-b26b-029dfe9d2166-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xmz7f\" (UID: \"231fa51c-3886-4460-b26b-029dfe9d2166\") " pod="openstack/nova-cell1-conductor-db-sync-xmz7f" Feb 26 22:19:24 crc kubenswrapper[4910]: I0226 22:19:24.324757 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd","Type":"ContainerStarted","Data":"a6f55c498cdde5b875f07b1db78a25bea871bfe98586b099d8408b425656e4c1"} Feb 26 22:19:24 crc kubenswrapper[4910]: I0226 22:19:24.330640 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/231fa51c-3886-4460-b26b-029dfe9d2166-scripts\") pod \"nova-cell1-conductor-db-sync-xmz7f\" (UID: \"231fa51c-3886-4460-b26b-029dfe9d2166\") " pod="openstack/nova-cell1-conductor-db-sync-xmz7f" Feb 26 22:19:24 crc kubenswrapper[4910]: I0226 22:19:24.335730 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6vml\" (UniqueName: \"kubernetes.io/projected/231fa51c-3886-4460-b26b-029dfe9d2166-kube-api-access-w6vml\") pod \"nova-cell1-conductor-db-sync-xmz7f\" (UID: \"231fa51c-3886-4460-b26b-029dfe9d2166\") " pod="openstack/nova-cell1-conductor-db-sync-xmz7f" Feb 26 22:19:24 crc kubenswrapper[4910]: I0226 22:19:24.343886 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"57fd7488-e497-4abf-8875-bacde64f7cc3","Type":"ContainerStarted","Data":"be1cfb18c6528729bad1cae3c85da34c5eccb842c356ac3b1e12bcf975820818"} Feb 26 22:19:24 crc kubenswrapper[4910]: I0226 22:19:24.347907 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-z8wm4" podStartSLOduration=2.347889905 podStartE2EDuration="2.347889905s" 
podCreationTimestamp="2026-02-26 22:19:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:19:24.334580542 +0000 UTC m=+1449.414071083" watchObservedRunningTime="2026-02-26 22:19:24.347889905 +0000 UTC m=+1449.427380446" Feb 26 22:19:24 crc kubenswrapper[4910]: I0226 22:19:24.379581 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" event={"ID":"36f998e9-29f3-4464-9f84-8fa4e0efda10","Type":"ContainerStarted","Data":"64b257352a380107695ee0e9e1d35751dece94ed4bff50f81d64f57a4a5b1d7a"} Feb 26 22:19:24 crc kubenswrapper[4910]: I0226 22:19:24.382723 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"74ce1646-bdb6-4532-ac1b-d6291167c9d6","Type":"ContainerStarted","Data":"35a5805b2b7dc8baa779126c791c2d73dc3294ff0213e6fea4e25e3026688bd2"} Feb 26 22:19:24 crc kubenswrapper[4910]: I0226 22:19:24.384320 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"853703f3-eaa3-41f9-b316-ca5cd0e9aa6c","Type":"ContainerStarted","Data":"da492e806f047eec8218ec18e871bdb6146a3acfb27d97ffcf9eecf33e188ae1"} Feb 26 22:19:24 crc kubenswrapper[4910]: I0226 22:19:24.546969 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xmz7f" Feb 26 22:19:25 crc kubenswrapper[4910]: I0226 22:19:25.163381 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xmz7f"] Feb 26 22:19:25 crc kubenswrapper[4910]: W0226 22:19:25.169669 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod231fa51c_3886_4460_b26b_029dfe9d2166.slice/crio-ed04a797e9210f19b0300369b712932ea560a5cb133e32fcfb4c02414a635ea1 WatchSource:0}: Error finding container ed04a797e9210f19b0300369b712932ea560a5cb133e32fcfb4c02414a635ea1: Status 404 returned error can't find the container with id ed04a797e9210f19b0300369b712932ea560a5cb133e32fcfb4c02414a635ea1 Feb 26 22:19:25 crc kubenswrapper[4910]: I0226 22:19:25.401274 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xmz7f" event={"ID":"231fa51c-3886-4460-b26b-029dfe9d2166","Type":"ContainerStarted","Data":"ed04a797e9210f19b0300369b712932ea560a5cb133e32fcfb4c02414a635ea1"} Feb 26 22:19:25 crc kubenswrapper[4910]: I0226 22:19:25.411074 4910 generic.go:334] "Generic (PLEG): container finished" podID="36f998e9-29f3-4464-9f84-8fa4e0efda10" containerID="ca633e175843035ef74e3ce28879fc0df2aefa0dee8de7c7bd579b35d454b1ff" exitCode=0 Feb 26 22:19:25 crc kubenswrapper[4910]: I0226 22:19:25.411149 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" event={"ID":"36f998e9-29f3-4464-9f84-8fa4e0efda10","Type":"ContainerDied","Data":"ca633e175843035ef74e3ce28879fc0df2aefa0dee8de7c7bd579b35d454b1ff"} Feb 26 22:19:25 crc kubenswrapper[4910]: I0226 22:19:25.439955 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6fcb104e-6cec-4785-97a5-4afbbc1ff73b","Type":"ContainerStarted","Data":"d34db7ca3c529928fc1e6c9557a71f4d0ab65834678cdd36b038f6731fe3fd1d"} Feb 26 22:19:25 crc 
kubenswrapper[4910]: I0226 22:19:25.441649 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 22:19:25 crc kubenswrapper[4910]: I0226 22:19:25.499393 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.890556893 podStartE2EDuration="7.499377047s" podCreationTimestamp="2026-02-26 22:19:18 +0000 UTC" firstStartedPulling="2026-02-26 22:19:19.396880292 +0000 UTC m=+1444.476370833" lastFinishedPulling="2026-02-26 22:19:24.005700416 +0000 UTC m=+1449.085190987" observedRunningTime="2026-02-26 22:19:25.473626874 +0000 UTC m=+1450.553117415" watchObservedRunningTime="2026-02-26 22:19:25.499377047 +0000 UTC m=+1450.578867588" Feb 26 22:19:26 crc kubenswrapper[4910]: I0226 22:19:26.450616 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xmz7f" event={"ID":"231fa51c-3886-4460-b26b-029dfe9d2166","Type":"ContainerStarted","Data":"9294495cf28254d4ff512a0e6d5b15d7b0824e374f5984dd0a2819016cf443ab"} Feb 26 22:19:26 crc kubenswrapper[4910]: I0226 22:19:26.454055 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" event={"ID":"36f998e9-29f3-4464-9f84-8fa4e0efda10","Type":"ContainerStarted","Data":"abccd47f285ce015647aa4e941efcc1a22a3e2c34b2a83ef871e215a0888717c"} Feb 26 22:19:26 crc kubenswrapper[4910]: I0226 22:19:26.470414 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-xmz7f" podStartSLOduration=2.470398558 podStartE2EDuration="2.470398558s" podCreationTimestamp="2026-02-26 22:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:19:26.466598484 +0000 UTC m=+1451.546089025" watchObservedRunningTime="2026-02-26 22:19:26.470398558 +0000 UTC m=+1451.549889089" Feb 26 22:19:26 crc 
kubenswrapper[4910]: I0226 22:19:26.505719 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" podStartSLOduration=4.50570103 podStartE2EDuration="4.50570103s" podCreationTimestamp="2026-02-26 22:19:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:19:26.496686665 +0000 UTC m=+1451.576177206" watchObservedRunningTime="2026-02-26 22:19:26.50570103 +0000 UTC m=+1451.585191571" Feb 26 22:19:26 crc kubenswrapper[4910]: I0226 22:19:26.667720 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 22:19:26 crc kubenswrapper[4910]: I0226 22:19:26.682625 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 22:19:27 crc kubenswrapper[4910]: I0226 22:19:27.469334 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" Feb 26 22:19:29 crc kubenswrapper[4910]: I0226 22:19:29.496014 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"74ce1646-bdb6-4532-ac1b-d6291167c9d6","Type":"ContainerStarted","Data":"670ce0521b9598d48e38c3ebba0490603beb0377e096af805db3555beb3465cd"} Feb 26 22:19:29 crc kubenswrapper[4910]: I0226 22:19:29.499911 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"853703f3-eaa3-41f9-b316-ca5cd0e9aa6c","Type":"ContainerStarted","Data":"a9652a43acc88370ef8bf75427b1d151f6d2247d438d8cd41734642533293616"} Feb 26 22:19:29 crc kubenswrapper[4910]: I0226 22:19:29.502129 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd","Type":"ContainerStarted","Data":"c04ae3135f2f30124df1eec273c2d32eda57b0d78a345850b653320b05b0c6ff"} Feb 26 22:19:29 crc kubenswrapper[4910]: I0226 22:19:29.503680 
4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"57fd7488-e497-4abf-8875-bacde64f7cc3","Type":"ContainerStarted","Data":"18d69e7098ec56c72b16ab1ed1eb56be02ee031957db05722e7bfea4291dcc62"} Feb 26 22:19:29 crc kubenswrapper[4910]: I0226 22:19:29.503794 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="57fd7488-e497-4abf-8875-bacde64f7cc3" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://18d69e7098ec56c72b16ab1ed1eb56be02ee031957db05722e7bfea4291dcc62" gracePeriod=30 Feb 26 22:19:29 crc kubenswrapper[4910]: I0226 22:19:29.521123 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.631164567 podStartE2EDuration="7.521104996s" podCreationTimestamp="2026-02-26 22:19:22 +0000 UTC" firstStartedPulling="2026-02-26 22:19:24.184384067 +0000 UTC m=+1449.263874608" lastFinishedPulling="2026-02-26 22:19:29.074324486 +0000 UTC m=+1454.153815037" observedRunningTime="2026-02-26 22:19:29.508724838 +0000 UTC m=+1454.588215379" watchObservedRunningTime="2026-02-26 22:19:29.521104996 +0000 UTC m=+1454.600595537" Feb 26 22:19:29 crc kubenswrapper[4910]: I0226 22:19:29.543574 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.584728921 podStartE2EDuration="7.543556858s" podCreationTimestamp="2026-02-26 22:19:22 +0000 UTC" firstStartedPulling="2026-02-26 22:19:24.123321103 +0000 UTC m=+1449.202811644" lastFinishedPulling="2026-02-26 22:19:29.08214904 +0000 UTC m=+1454.161639581" observedRunningTime="2026-02-26 22:19:29.52638888 +0000 UTC m=+1454.605879431" watchObservedRunningTime="2026-02-26 22:19:29.543556858 +0000 UTC m=+1454.623047399" Feb 26 22:19:30 crc kubenswrapper[4910]: I0226 22:19:30.517664 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"853703f3-eaa3-41f9-b316-ca5cd0e9aa6c","Type":"ContainerStarted","Data":"d144aaac91fdcdb74e515b1b23f26ba19739e99f52d20e634d243cd9f812a9a8"} Feb 26 22:19:30 crc kubenswrapper[4910]: I0226 22:19:30.517847 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="853703f3-eaa3-41f9-b316-ca5cd0e9aa6c" containerName="nova-metadata-log" containerID="cri-o://a9652a43acc88370ef8bf75427b1d151f6d2247d438d8cd41734642533293616" gracePeriod=30 Feb 26 22:19:30 crc kubenswrapper[4910]: I0226 22:19:30.517833 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="853703f3-eaa3-41f9-b316-ca5cd0e9aa6c" containerName="nova-metadata-metadata" containerID="cri-o://d144aaac91fdcdb74e515b1b23f26ba19739e99f52d20e634d243cd9f812a9a8" gracePeriod=30 Feb 26 22:19:30 crc kubenswrapper[4910]: I0226 22:19:30.521263 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd","Type":"ContainerStarted","Data":"e73d553463caeeff6a90a26c65e9e7e15611c80ac8d027510cfefa131262c8e5"} Feb 26 22:19:30 crc kubenswrapper[4910]: I0226 22:19:30.545319 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.182504439 podStartE2EDuration="8.545303458s" podCreationTimestamp="2026-02-26 22:19:22 +0000 UTC" firstStartedPulling="2026-02-26 22:19:23.713418628 +0000 UTC m=+1448.792909169" lastFinishedPulling="2026-02-26 22:19:29.076217647 +0000 UTC m=+1454.155708188" observedRunningTime="2026-02-26 22:19:30.544213438 +0000 UTC m=+1455.623703999" watchObservedRunningTime="2026-02-26 22:19:30.545303458 +0000 UTC m=+1455.624793999" Feb 26 22:19:30 crc kubenswrapper[4910]: I0226 22:19:30.578116 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.654997259 
podStartE2EDuration="8.578098241s" podCreationTimestamp="2026-02-26 22:19:22 +0000 UTC" firstStartedPulling="2026-02-26 22:19:24.147302297 +0000 UTC m=+1449.226792838" lastFinishedPulling="2026-02-26 22:19:29.070403259 +0000 UTC m=+1454.149893820" observedRunningTime="2026-02-26 22:19:30.570734401 +0000 UTC m=+1455.650224932" watchObservedRunningTime="2026-02-26 22:19:30.578098241 +0000 UTC m=+1455.657588782" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.301743 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.394040 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/853703f3-eaa3-41f9-b316-ca5cd0e9aa6c-config-data\") pod \"853703f3-eaa3-41f9-b316-ca5cd0e9aa6c\" (UID: \"853703f3-eaa3-41f9-b316-ca5cd0e9aa6c\") " Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.397247 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlzld\" (UniqueName: \"kubernetes.io/projected/853703f3-eaa3-41f9-b316-ca5cd0e9aa6c-kube-api-access-tlzld\") pod \"853703f3-eaa3-41f9-b316-ca5cd0e9aa6c\" (UID: \"853703f3-eaa3-41f9-b316-ca5cd0e9aa6c\") " Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.397336 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/853703f3-eaa3-41f9-b316-ca5cd0e9aa6c-combined-ca-bundle\") pod \"853703f3-eaa3-41f9-b316-ca5cd0e9aa6c\" (UID: \"853703f3-eaa3-41f9-b316-ca5cd0e9aa6c\") " Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.397464 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/853703f3-eaa3-41f9-b316-ca5cd0e9aa6c-logs\") pod \"853703f3-eaa3-41f9-b316-ca5cd0e9aa6c\" (UID: \"853703f3-eaa3-41f9-b316-ca5cd0e9aa6c\") " Feb 
26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.404695 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/853703f3-eaa3-41f9-b316-ca5cd0e9aa6c-kube-api-access-tlzld" (OuterVolumeSpecName: "kube-api-access-tlzld") pod "853703f3-eaa3-41f9-b316-ca5cd0e9aa6c" (UID: "853703f3-eaa3-41f9-b316-ca5cd0e9aa6c"). InnerVolumeSpecName "kube-api-access-tlzld". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.406404 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/853703f3-eaa3-41f9-b316-ca5cd0e9aa6c-logs" (OuterVolumeSpecName: "logs") pod "853703f3-eaa3-41f9-b316-ca5cd0e9aa6c" (UID: "853703f3-eaa3-41f9-b316-ca5cd0e9aa6c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.410143 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlzld\" (UniqueName: \"kubernetes.io/projected/853703f3-eaa3-41f9-b316-ca5cd0e9aa6c-kube-api-access-tlzld\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.410230 4910 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/853703f3-eaa3-41f9-b316-ca5cd0e9aa6c-logs\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.457451 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/853703f3-eaa3-41f9-b316-ca5cd0e9aa6c-config-data" (OuterVolumeSpecName: "config-data") pod "853703f3-eaa3-41f9-b316-ca5cd0e9aa6c" (UID: "853703f3-eaa3-41f9-b316-ca5cd0e9aa6c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.476271 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/853703f3-eaa3-41f9-b316-ca5cd0e9aa6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "853703f3-eaa3-41f9-b316-ca5cd0e9aa6c" (UID: "853703f3-eaa3-41f9-b316-ca5cd0e9aa6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.515724 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/853703f3-eaa3-41f9-b316-ca5cd0e9aa6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.515982 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/853703f3-eaa3-41f9-b316-ca5cd0e9aa6c-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.545977 4910 generic.go:334] "Generic (PLEG): container finished" podID="853703f3-eaa3-41f9-b316-ca5cd0e9aa6c" containerID="d144aaac91fdcdb74e515b1b23f26ba19739e99f52d20e634d243cd9f812a9a8" exitCode=0 Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.546010 4910 generic.go:334] "Generic (PLEG): container finished" podID="853703f3-eaa3-41f9-b316-ca5cd0e9aa6c" containerID="a9652a43acc88370ef8bf75427b1d151f6d2247d438d8cd41734642533293616" exitCode=143 Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.546909 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.548683 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"853703f3-eaa3-41f9-b316-ca5cd0e9aa6c","Type":"ContainerDied","Data":"d144aaac91fdcdb74e515b1b23f26ba19739e99f52d20e634d243cd9f812a9a8"} Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.548749 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"853703f3-eaa3-41f9-b316-ca5cd0e9aa6c","Type":"ContainerDied","Data":"a9652a43acc88370ef8bf75427b1d151f6d2247d438d8cd41734642533293616"} Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.548766 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"853703f3-eaa3-41f9-b316-ca5cd0e9aa6c","Type":"ContainerDied","Data":"da492e806f047eec8218ec18e871bdb6146a3acfb27d97ffcf9eecf33e188ae1"} Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.548785 4910 scope.go:117] "RemoveContainer" containerID="d144aaac91fdcdb74e515b1b23f26ba19739e99f52d20e634d243cd9f812a9a8" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.596929 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.609312 4910 scope.go:117] "RemoveContainer" containerID="a9652a43acc88370ef8bf75427b1d151f6d2247d438d8cd41734642533293616" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.616510 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.638688 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 26 22:19:31 crc kubenswrapper[4910]: E0226 22:19:31.639102 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="853703f3-eaa3-41f9-b316-ca5cd0e9aa6c" containerName="nova-metadata-log" Feb 26 22:19:31 crc 
kubenswrapper[4910]: I0226 22:19:31.639120 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="853703f3-eaa3-41f9-b316-ca5cd0e9aa6c" containerName="nova-metadata-log" Feb 26 22:19:31 crc kubenswrapper[4910]: E0226 22:19:31.639132 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="853703f3-eaa3-41f9-b316-ca5cd0e9aa6c" containerName="nova-metadata-metadata" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.639138 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="853703f3-eaa3-41f9-b316-ca5cd0e9aa6c" containerName="nova-metadata-metadata" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.639362 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="853703f3-eaa3-41f9-b316-ca5cd0e9aa6c" containerName="nova-metadata-log" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.639382 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="853703f3-eaa3-41f9-b316-ca5cd0e9aa6c" containerName="nova-metadata-metadata" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.656363 4910 scope.go:117] "RemoveContainer" containerID="d144aaac91fdcdb74e515b1b23f26ba19739e99f52d20e634d243cd9f812a9a8" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.657029 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.657124 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 22:19:31 crc kubenswrapper[4910]: E0226 22:19:31.657312 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d144aaac91fdcdb74e515b1b23f26ba19739e99f52d20e634d243cd9f812a9a8\": container with ID starting with d144aaac91fdcdb74e515b1b23f26ba19739e99f52d20e634d243cd9f812a9a8 not found: ID does not exist" containerID="d144aaac91fdcdb74e515b1b23f26ba19739e99f52d20e634d243cd9f812a9a8" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.657419 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d144aaac91fdcdb74e515b1b23f26ba19739e99f52d20e634d243cd9f812a9a8"} err="failed to get container status \"d144aaac91fdcdb74e515b1b23f26ba19739e99f52d20e634d243cd9f812a9a8\": rpc error: code = NotFound desc = could not find container \"d144aaac91fdcdb74e515b1b23f26ba19739e99f52d20e634d243cd9f812a9a8\": container with ID starting with d144aaac91fdcdb74e515b1b23f26ba19739e99f52d20e634d243cd9f812a9a8 not found: ID does not exist" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.657513 4910 scope.go:117] "RemoveContainer" containerID="a9652a43acc88370ef8bf75427b1d151f6d2247d438d8cd41734642533293616" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.661780 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.662174 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 26 22:19:31 crc kubenswrapper[4910]: E0226 22:19:31.662234 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9652a43acc88370ef8bf75427b1d151f6d2247d438d8cd41734642533293616\": container with ID starting with a9652a43acc88370ef8bf75427b1d151f6d2247d438d8cd41734642533293616 not 
found: ID does not exist" containerID="a9652a43acc88370ef8bf75427b1d151f6d2247d438d8cd41734642533293616" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.662425 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9652a43acc88370ef8bf75427b1d151f6d2247d438d8cd41734642533293616"} err="failed to get container status \"a9652a43acc88370ef8bf75427b1d151f6d2247d438d8cd41734642533293616\": rpc error: code = NotFound desc = could not find container \"a9652a43acc88370ef8bf75427b1d151f6d2247d438d8cd41734642533293616\": container with ID starting with a9652a43acc88370ef8bf75427b1d151f6d2247d438d8cd41734642533293616 not found: ID does not exist" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.662505 4910 scope.go:117] "RemoveContainer" containerID="d144aaac91fdcdb74e515b1b23f26ba19739e99f52d20e634d243cd9f812a9a8" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.664885 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d144aaac91fdcdb74e515b1b23f26ba19739e99f52d20e634d243cd9f812a9a8"} err="failed to get container status \"d144aaac91fdcdb74e515b1b23f26ba19739e99f52d20e634d243cd9f812a9a8\": rpc error: code = NotFound desc = could not find container \"d144aaac91fdcdb74e515b1b23f26ba19739e99f52d20e634d243cd9f812a9a8\": container with ID starting with d144aaac91fdcdb74e515b1b23f26ba19739e99f52d20e634d243cd9f812a9a8 not found: ID does not exist" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.664924 4910 scope.go:117] "RemoveContainer" containerID="a9652a43acc88370ef8bf75427b1d151f6d2247d438d8cd41734642533293616" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.665204 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9652a43acc88370ef8bf75427b1d151f6d2247d438d8cd41734642533293616"} err="failed to get container status \"a9652a43acc88370ef8bf75427b1d151f6d2247d438d8cd41734642533293616\": rpc error: 
code = NotFound desc = could not find container \"a9652a43acc88370ef8bf75427b1d151f6d2247d438d8cd41734642533293616\": container with ID starting with a9652a43acc88370ef8bf75427b1d151f6d2247d438d8cd41734642533293616 not found: ID does not exist" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.722178 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88fc2d90-0474-40e3-97fe-dcd107a65523-logs\") pod \"nova-metadata-0\" (UID: \"88fc2d90-0474-40e3-97fe-dcd107a65523\") " pod="openstack/nova-metadata-0" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.722277 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88fc2d90-0474-40e3-97fe-dcd107a65523-config-data\") pod \"nova-metadata-0\" (UID: \"88fc2d90-0474-40e3-97fe-dcd107a65523\") " pod="openstack/nova-metadata-0" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.722309 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88fc2d90-0474-40e3-97fe-dcd107a65523-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"88fc2d90-0474-40e3-97fe-dcd107a65523\") " pod="openstack/nova-metadata-0" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.722326 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/88fc2d90-0474-40e3-97fe-dcd107a65523-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"88fc2d90-0474-40e3-97fe-dcd107a65523\") " pod="openstack/nova-metadata-0" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.722397 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzhbg\" (UniqueName: 
\"kubernetes.io/projected/88fc2d90-0474-40e3-97fe-dcd107a65523-kube-api-access-wzhbg\") pod \"nova-metadata-0\" (UID: \"88fc2d90-0474-40e3-97fe-dcd107a65523\") " pod="openstack/nova-metadata-0" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.824331 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzhbg\" (UniqueName: \"kubernetes.io/projected/88fc2d90-0474-40e3-97fe-dcd107a65523-kube-api-access-wzhbg\") pod \"nova-metadata-0\" (UID: \"88fc2d90-0474-40e3-97fe-dcd107a65523\") " pod="openstack/nova-metadata-0" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.824454 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88fc2d90-0474-40e3-97fe-dcd107a65523-logs\") pod \"nova-metadata-0\" (UID: \"88fc2d90-0474-40e3-97fe-dcd107a65523\") " pod="openstack/nova-metadata-0" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.824531 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88fc2d90-0474-40e3-97fe-dcd107a65523-config-data\") pod \"nova-metadata-0\" (UID: \"88fc2d90-0474-40e3-97fe-dcd107a65523\") " pod="openstack/nova-metadata-0" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.824561 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88fc2d90-0474-40e3-97fe-dcd107a65523-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"88fc2d90-0474-40e3-97fe-dcd107a65523\") " pod="openstack/nova-metadata-0" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.824578 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/88fc2d90-0474-40e3-97fe-dcd107a65523-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"88fc2d90-0474-40e3-97fe-dcd107a65523\") " 
pod="openstack/nova-metadata-0" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.825673 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88fc2d90-0474-40e3-97fe-dcd107a65523-logs\") pod \"nova-metadata-0\" (UID: \"88fc2d90-0474-40e3-97fe-dcd107a65523\") " pod="openstack/nova-metadata-0" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.836015 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88fc2d90-0474-40e3-97fe-dcd107a65523-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"88fc2d90-0474-40e3-97fe-dcd107a65523\") " pod="openstack/nova-metadata-0" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.836145 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88fc2d90-0474-40e3-97fe-dcd107a65523-config-data\") pod \"nova-metadata-0\" (UID: \"88fc2d90-0474-40e3-97fe-dcd107a65523\") " pod="openstack/nova-metadata-0" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.836273 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/88fc2d90-0474-40e3-97fe-dcd107a65523-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"88fc2d90-0474-40e3-97fe-dcd107a65523\") " pod="openstack/nova-metadata-0" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.842955 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzhbg\" (UniqueName: \"kubernetes.io/projected/88fc2d90-0474-40e3-97fe-dcd107a65523-kube-api-access-wzhbg\") pod \"nova-metadata-0\" (UID: \"88fc2d90-0474-40e3-97fe-dcd107a65523\") " pod="openstack/nova-metadata-0" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.914687 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="853703f3-eaa3-41f9-b316-ca5cd0e9aa6c" 
path="/var/lib/kubelet/pods/853703f3-eaa3-41f9-b316-ca5cd0e9aa6c/volumes" Feb 26 22:19:31 crc kubenswrapper[4910]: I0226 22:19:31.977895 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 22:19:32 crc kubenswrapper[4910]: I0226 22:19:32.544761 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 22:19:32 crc kubenswrapper[4910]: I0226 22:19:32.933139 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 22:19:32 crc kubenswrapper[4910]: I0226 22:19:32.933189 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 22:19:32 crc kubenswrapper[4910]: I0226 22:19:32.941179 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:19:33 crc kubenswrapper[4910]: I0226 22:19:33.045972 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 26 22:19:33 crc kubenswrapper[4910]: I0226 22:19:33.047127 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 26 22:19:33 crc kubenswrapper[4910]: I0226 22:19:33.077428 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 26 22:19:33 crc kubenswrapper[4910]: I0226 22:19:33.209409 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" Feb 26 22:19:33 crc kubenswrapper[4910]: I0226 22:19:33.309733 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-jlw2k"] Feb 26 22:19:33 crc kubenswrapper[4910]: I0226 22:19:33.310033 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" podUID="309e28ac-a722-4ea7-98e1-80c3dec84033" 
containerName="dnsmasq-dns" containerID="cri-o://92b49234641c728d38d0cad38dff539431254d4997763af7885ab2d9c5feb2aa" gracePeriod=10 Feb 26 22:19:33 crc kubenswrapper[4910]: I0226 22:19:33.577450 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"88fc2d90-0474-40e3-97fe-dcd107a65523","Type":"ContainerStarted","Data":"40171d9b2c9baab7b1601669ceb7b0157567a7167e9a2d665e1419dbb218a960"} Feb 26 22:19:33 crc kubenswrapper[4910]: I0226 22:19:33.577552 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"88fc2d90-0474-40e3-97fe-dcd107a65523","Type":"ContainerStarted","Data":"458389b7118183bbd689cc951fc75b0dc7a7fd79cca6200795de658f5b5893d9"} Feb 26 22:19:33 crc kubenswrapper[4910]: I0226 22:19:33.577588 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"88fc2d90-0474-40e3-97fe-dcd107a65523","Type":"ContainerStarted","Data":"42f53c76736e54300ad543d9507e67e081c147e36200d2849c31e7c7159beab2"} Feb 26 22:19:33 crc kubenswrapper[4910]: I0226 22:19:33.579547 4910 generic.go:334] "Generic (PLEG): container finished" podID="309e28ac-a722-4ea7-98e1-80c3dec84033" containerID="92b49234641c728d38d0cad38dff539431254d4997763af7885ab2d9c5feb2aa" exitCode=0 Feb 26 22:19:33 crc kubenswrapper[4910]: I0226 22:19:33.580085 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" event={"ID":"309e28ac-a722-4ea7-98e1-80c3dec84033","Type":"ContainerDied","Data":"92b49234641c728d38d0cad38dff539431254d4997763af7885ab2d9c5feb2aa"} Feb 26 22:19:33 crc kubenswrapper[4910]: I0226 22:19:33.610569 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.610544611 podStartE2EDuration="2.610544611s" podCreationTimestamp="2026-02-26 22:19:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-26 22:19:33.599229203 +0000 UTC m=+1458.678719754" watchObservedRunningTime="2026-02-26 22:19:33.610544611 +0000 UTC m=+1458.690035152" Feb 26 22:19:33 crc kubenswrapper[4910]: I0226 22:19:33.631907 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 26 22:19:34 crc kubenswrapper[4910]: I0226 22:19:34.029432 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.222:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 22:19:34 crc kubenswrapper[4910]: I0226 22:19:34.030223 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.222:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 22:19:34 crc kubenswrapper[4910]: I0226 22:19:34.130240 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" Feb 26 22:19:34 crc kubenswrapper[4910]: I0226 22:19:34.190822 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-dns-svc\") pod \"309e28ac-a722-4ea7-98e1-80c3dec84033\" (UID: \"309e28ac-a722-4ea7-98e1-80c3dec84033\") " Feb 26 22:19:34 crc kubenswrapper[4910]: I0226 22:19:34.190880 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-ovsdbserver-nb\") pod \"309e28ac-a722-4ea7-98e1-80c3dec84033\" (UID: \"309e28ac-a722-4ea7-98e1-80c3dec84033\") " Feb 26 22:19:34 crc kubenswrapper[4910]: I0226 22:19:34.190902 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-ovsdbserver-sb\") pod \"309e28ac-a722-4ea7-98e1-80c3dec84033\" (UID: \"309e28ac-a722-4ea7-98e1-80c3dec84033\") " Feb 26 22:19:34 crc kubenswrapper[4910]: I0226 22:19:34.191038 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-dns-swift-storage-0\") pod \"309e28ac-a722-4ea7-98e1-80c3dec84033\" (UID: \"309e28ac-a722-4ea7-98e1-80c3dec84033\") " Feb 26 22:19:34 crc kubenswrapper[4910]: I0226 22:19:34.191066 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv8ls\" (UniqueName: \"kubernetes.io/projected/309e28ac-a722-4ea7-98e1-80c3dec84033-kube-api-access-hv8ls\") pod \"309e28ac-a722-4ea7-98e1-80c3dec84033\" (UID: \"309e28ac-a722-4ea7-98e1-80c3dec84033\") " Feb 26 22:19:34 crc kubenswrapper[4910]: I0226 22:19:34.191137 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-config\") pod \"309e28ac-a722-4ea7-98e1-80c3dec84033\" (UID: \"309e28ac-a722-4ea7-98e1-80c3dec84033\") " Feb 26 22:19:34 crc kubenswrapper[4910]: I0226 22:19:34.204609 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/309e28ac-a722-4ea7-98e1-80c3dec84033-kube-api-access-hv8ls" (OuterVolumeSpecName: "kube-api-access-hv8ls") pod "309e28ac-a722-4ea7-98e1-80c3dec84033" (UID: "309e28ac-a722-4ea7-98e1-80c3dec84033"). InnerVolumeSpecName "kube-api-access-hv8ls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:19:34 crc kubenswrapper[4910]: I0226 22:19:34.273600 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "309e28ac-a722-4ea7-98e1-80c3dec84033" (UID: "309e28ac-a722-4ea7-98e1-80c3dec84033"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:19:34 crc kubenswrapper[4910]: I0226 22:19:34.276133 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "309e28ac-a722-4ea7-98e1-80c3dec84033" (UID: "309e28ac-a722-4ea7-98e1-80c3dec84033"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:19:34 crc kubenswrapper[4910]: I0226 22:19:34.290022 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "309e28ac-a722-4ea7-98e1-80c3dec84033" (UID: "309e28ac-a722-4ea7-98e1-80c3dec84033"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:19:34 crc kubenswrapper[4910]: I0226 22:19:34.293114 4910 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:34 crc kubenswrapper[4910]: I0226 22:19:34.293147 4910 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:34 crc kubenswrapper[4910]: I0226 22:19:34.293215 4910 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:34 crc kubenswrapper[4910]: I0226 22:19:34.293227 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hv8ls\" (UniqueName: \"kubernetes.io/projected/309e28ac-a722-4ea7-98e1-80c3dec84033-kube-api-access-hv8ls\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:34 crc kubenswrapper[4910]: I0226 22:19:34.294140 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "309e28ac-a722-4ea7-98e1-80c3dec84033" (UID: "309e28ac-a722-4ea7-98e1-80c3dec84033"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:19:34 crc kubenswrapper[4910]: I0226 22:19:34.309984 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-config" (OuterVolumeSpecName: "config") pod "309e28ac-a722-4ea7-98e1-80c3dec84033" (UID: "309e28ac-a722-4ea7-98e1-80c3dec84033"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:19:34 crc kubenswrapper[4910]: I0226 22:19:34.395538 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:34 crc kubenswrapper[4910]: I0226 22:19:34.395564 4910 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/309e28ac-a722-4ea7-98e1-80c3dec84033-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:34 crc kubenswrapper[4910]: I0226 22:19:34.591202 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" event={"ID":"309e28ac-a722-4ea7-98e1-80c3dec84033","Type":"ContainerDied","Data":"a1331aa32dd5248ec66c66dc3f08a0bd62eeba29a17c1df4a1ee819b71e8abfe"} Feb 26 22:19:34 crc kubenswrapper[4910]: I0226 22:19:34.591251 4910 scope.go:117] "RemoveContainer" containerID="92b49234641c728d38d0cad38dff539431254d4997763af7885ab2d9c5feb2aa" Feb 26 22:19:34 crc kubenswrapper[4910]: I0226 22:19:34.591360 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58bd69657f-jlw2k" Feb 26 22:19:34 crc kubenswrapper[4910]: I0226 22:19:34.617805 4910 generic.go:334] "Generic (PLEG): container finished" podID="c1a9974e-7def-47e8-b055-fc5412319aca" containerID="3d97fe51018179b31bccf7312461218857324cbb63ea1d435237cd969b07b806" exitCode=0 Feb 26 22:19:34 crc kubenswrapper[4910]: I0226 22:19:34.617883 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-z8wm4" event={"ID":"c1a9974e-7def-47e8-b055-fc5412319aca","Type":"ContainerDied","Data":"3d97fe51018179b31bccf7312461218857324cbb63ea1d435237cd969b07b806"} Feb 26 22:19:34 crc kubenswrapper[4910]: I0226 22:19:34.649892 4910 scope.go:117] "RemoveContainer" containerID="3fd24aea6772d68cd5c89c240237371f9b93420d881dac202acc6e5317e20b15" Feb 26 22:19:34 crc kubenswrapper[4910]: I0226 22:19:34.656098 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-jlw2k"] Feb 26 22:19:34 crc kubenswrapper[4910]: I0226 22:19:34.668244 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-jlw2k"] Feb 26 22:19:35 crc kubenswrapper[4910]: I0226 22:19:35.632415 4910 generic.go:334] "Generic (PLEG): container finished" podID="231fa51c-3886-4460-b26b-029dfe9d2166" containerID="9294495cf28254d4ff512a0e6d5b15d7b0824e374f5984dd0a2819016cf443ab" exitCode=0 Feb 26 22:19:35 crc kubenswrapper[4910]: I0226 22:19:35.632504 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xmz7f" event={"ID":"231fa51c-3886-4460-b26b-029dfe9d2166","Type":"ContainerDied","Data":"9294495cf28254d4ff512a0e6d5b15d7b0824e374f5984dd0a2819016cf443ab"} Feb 26 22:19:35 crc kubenswrapper[4910]: I0226 22:19:35.922348 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="309e28ac-a722-4ea7-98e1-80c3dec84033" path="/var/lib/kubelet/pods/309e28ac-a722-4ea7-98e1-80c3dec84033/volumes" Feb 26 22:19:36 crc 
kubenswrapper[4910]: I0226 22:19:36.064629 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-z8wm4" Feb 26 22:19:36 crc kubenswrapper[4910]: I0226 22:19:36.135900 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94xx2\" (UniqueName: \"kubernetes.io/projected/c1a9974e-7def-47e8-b055-fc5412319aca-kube-api-access-94xx2\") pod \"c1a9974e-7def-47e8-b055-fc5412319aca\" (UID: \"c1a9974e-7def-47e8-b055-fc5412319aca\") " Feb 26 22:19:36 crc kubenswrapper[4910]: I0226 22:19:36.136025 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a9974e-7def-47e8-b055-fc5412319aca-combined-ca-bundle\") pod \"c1a9974e-7def-47e8-b055-fc5412319aca\" (UID: \"c1a9974e-7def-47e8-b055-fc5412319aca\") " Feb 26 22:19:36 crc kubenswrapper[4910]: I0226 22:19:36.136085 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a9974e-7def-47e8-b055-fc5412319aca-config-data\") pod \"c1a9974e-7def-47e8-b055-fc5412319aca\" (UID: \"c1a9974e-7def-47e8-b055-fc5412319aca\") " Feb 26 22:19:36 crc kubenswrapper[4910]: I0226 22:19:36.136128 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a9974e-7def-47e8-b055-fc5412319aca-scripts\") pod \"c1a9974e-7def-47e8-b055-fc5412319aca\" (UID: \"c1a9974e-7def-47e8-b055-fc5412319aca\") " Feb 26 22:19:36 crc kubenswrapper[4910]: I0226 22:19:36.158453 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1a9974e-7def-47e8-b055-fc5412319aca-scripts" (OuterVolumeSpecName: "scripts") pod "c1a9974e-7def-47e8-b055-fc5412319aca" (UID: "c1a9974e-7def-47e8-b055-fc5412319aca"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:19:36 crc kubenswrapper[4910]: I0226 22:19:36.161040 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1a9974e-7def-47e8-b055-fc5412319aca-kube-api-access-94xx2" (OuterVolumeSpecName: "kube-api-access-94xx2") pod "c1a9974e-7def-47e8-b055-fc5412319aca" (UID: "c1a9974e-7def-47e8-b055-fc5412319aca"). InnerVolumeSpecName "kube-api-access-94xx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:19:36 crc kubenswrapper[4910]: I0226 22:19:36.171334 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1a9974e-7def-47e8-b055-fc5412319aca-config-data" (OuterVolumeSpecName: "config-data") pod "c1a9974e-7def-47e8-b055-fc5412319aca" (UID: "c1a9974e-7def-47e8-b055-fc5412319aca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:19:36 crc kubenswrapper[4910]: I0226 22:19:36.175328 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1a9974e-7def-47e8-b055-fc5412319aca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1a9974e-7def-47e8-b055-fc5412319aca" (UID: "c1a9974e-7def-47e8-b055-fc5412319aca"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:19:36 crc kubenswrapper[4910]: I0226 22:19:36.239937 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94xx2\" (UniqueName: \"kubernetes.io/projected/c1a9974e-7def-47e8-b055-fc5412319aca-kube-api-access-94xx2\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:36 crc kubenswrapper[4910]: I0226 22:19:36.240551 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a9974e-7def-47e8-b055-fc5412319aca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:36 crc kubenswrapper[4910]: I0226 22:19:36.240637 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a9974e-7def-47e8-b055-fc5412319aca-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:36 crc kubenswrapper[4910]: I0226 22:19:36.240741 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a9974e-7def-47e8-b055-fc5412319aca-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:36 crc kubenswrapper[4910]: I0226 22:19:36.648011 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-z8wm4" event={"ID":"c1a9974e-7def-47e8-b055-fc5412319aca","Type":"ContainerDied","Data":"af4c242aad053fa3ba4fedce5b6d9bce34bbbcbb9108125ce4149ced535456b2"} Feb 26 22:19:36 crc kubenswrapper[4910]: I0226 22:19:36.648088 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af4c242aad053fa3ba4fedce5b6d9bce34bbbcbb9108125ce4149ced535456b2" Feb 26 22:19:36 crc kubenswrapper[4910]: I0226 22:19:36.648209 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-z8wm4" Feb 26 22:19:36 crc kubenswrapper[4910]: I0226 22:19:36.967703 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 22:19:36 crc kubenswrapper[4910]: I0226 22:19:36.967928 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd" containerName="nova-api-log" containerID="cri-o://c04ae3135f2f30124df1eec273c2d32eda57b0d78a345850b653320b05b0c6ff" gracePeriod=30 Feb 26 22:19:36 crc kubenswrapper[4910]: I0226 22:19:36.968365 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd" containerName="nova-api-api" containerID="cri-o://e73d553463caeeff6a90a26c65e9e7e15611c80ac8d027510cfefa131262c8e5" gracePeriod=30 Feb 26 22:19:36 crc kubenswrapper[4910]: I0226 22:19:36.979352 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 22:19:36 crc kubenswrapper[4910]: I0226 22:19:36.980538 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 22:19:36 crc kubenswrapper[4910]: I0226 22:19:36.991608 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 22:19:36 crc kubenswrapper[4910]: I0226 22:19:36.991830 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="74ce1646-bdb6-4532-ac1b-d6291167c9d6" containerName="nova-scheduler-scheduler" containerID="cri-o://670ce0521b9598d48e38c3ebba0490603beb0377e096af805db3555beb3465cd" gracePeriod=30 Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.007659 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.165982 4910 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xmz7f" Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.262915 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/231fa51c-3886-4460-b26b-029dfe9d2166-combined-ca-bundle\") pod \"231fa51c-3886-4460-b26b-029dfe9d2166\" (UID: \"231fa51c-3886-4460-b26b-029dfe9d2166\") " Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.263033 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/231fa51c-3886-4460-b26b-029dfe9d2166-config-data\") pod \"231fa51c-3886-4460-b26b-029dfe9d2166\" (UID: \"231fa51c-3886-4460-b26b-029dfe9d2166\") " Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.263114 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6vml\" (UniqueName: \"kubernetes.io/projected/231fa51c-3886-4460-b26b-029dfe9d2166-kube-api-access-w6vml\") pod \"231fa51c-3886-4460-b26b-029dfe9d2166\" (UID: \"231fa51c-3886-4460-b26b-029dfe9d2166\") " Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.263143 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/231fa51c-3886-4460-b26b-029dfe9d2166-scripts\") pod \"231fa51c-3886-4460-b26b-029dfe9d2166\" (UID: \"231fa51c-3886-4460-b26b-029dfe9d2166\") " Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.274919 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/231fa51c-3886-4460-b26b-029dfe9d2166-kube-api-access-w6vml" (OuterVolumeSpecName: "kube-api-access-w6vml") pod "231fa51c-3886-4460-b26b-029dfe9d2166" (UID: "231fa51c-3886-4460-b26b-029dfe9d2166"). InnerVolumeSpecName "kube-api-access-w6vml". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.284096 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/231fa51c-3886-4460-b26b-029dfe9d2166-scripts" (OuterVolumeSpecName: "scripts") pod "231fa51c-3886-4460-b26b-029dfe9d2166" (UID: "231fa51c-3886-4460-b26b-029dfe9d2166"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.289482 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/231fa51c-3886-4460-b26b-029dfe9d2166-config-data" (OuterVolumeSpecName: "config-data") pod "231fa51c-3886-4460-b26b-029dfe9d2166" (UID: "231fa51c-3886-4460-b26b-029dfe9d2166"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.304342 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/231fa51c-3886-4460-b26b-029dfe9d2166-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "231fa51c-3886-4460-b26b-029dfe9d2166" (UID: "231fa51c-3886-4460-b26b-029dfe9d2166"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.366941 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6vml\" (UniqueName: \"kubernetes.io/projected/231fa51c-3886-4460-b26b-029dfe9d2166-kube-api-access-w6vml\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.366979 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/231fa51c-3886-4460-b26b-029dfe9d2166-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.366991 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/231fa51c-3886-4460-b26b-029dfe9d2166-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.366999 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/231fa51c-3886-4460-b26b-029dfe9d2166-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.658140 4910 generic.go:334] "Generic (PLEG): container finished" podID="ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd" containerID="c04ae3135f2f30124df1eec273c2d32eda57b0d78a345850b653320b05b0c6ff" exitCode=143 Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.658226 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd","Type":"ContainerDied","Data":"c04ae3135f2f30124df1eec273c2d32eda57b0d78a345850b653320b05b0c6ff"} Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.660284 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xmz7f" Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.665353 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xmz7f" event={"ID":"231fa51c-3886-4460-b26b-029dfe9d2166","Type":"ContainerDied","Data":"ed04a797e9210f19b0300369b712932ea560a5cb133e32fcfb4c02414a635ea1"} Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.665388 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed04a797e9210f19b0300369b712932ea560a5cb133e32fcfb4c02414a635ea1" Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.808638 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 26 22:19:37 crc kubenswrapper[4910]: E0226 22:19:37.809038 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="309e28ac-a722-4ea7-98e1-80c3dec84033" containerName="dnsmasq-dns" Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.809054 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="309e28ac-a722-4ea7-98e1-80c3dec84033" containerName="dnsmasq-dns" Feb 26 22:19:37 crc kubenswrapper[4910]: E0226 22:19:37.809068 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="309e28ac-a722-4ea7-98e1-80c3dec84033" containerName="init" Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.809074 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="309e28ac-a722-4ea7-98e1-80c3dec84033" containerName="init" Feb 26 22:19:37 crc kubenswrapper[4910]: E0226 22:19:37.809084 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="231fa51c-3886-4460-b26b-029dfe9d2166" containerName="nova-cell1-conductor-db-sync" Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.809090 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="231fa51c-3886-4460-b26b-029dfe9d2166" containerName="nova-cell1-conductor-db-sync" Feb 26 22:19:37 crc kubenswrapper[4910]: 
E0226 22:19:37.809114 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a9974e-7def-47e8-b055-fc5412319aca" containerName="nova-manage" Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.809120 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a9974e-7def-47e8-b055-fc5412319aca" containerName="nova-manage" Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.809320 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="309e28ac-a722-4ea7-98e1-80c3dec84033" containerName="dnsmasq-dns" Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.809338 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="231fa51c-3886-4460-b26b-029dfe9d2166" containerName="nova-cell1-conductor-db-sync" Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.809354 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1a9974e-7def-47e8-b055-fc5412319aca" containerName="nova-manage" Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.810253 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.824385 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.827994 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.882298 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64kqh\" (UniqueName: \"kubernetes.io/projected/05c4c120-7406-46d0-a5e8-157f624d4d13-kube-api-access-64kqh\") pod \"nova-cell1-conductor-0\" (UID: \"05c4c120-7406-46d0-a5e8-157f624d4d13\") " pod="openstack/nova-cell1-conductor-0" Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.882388 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c4c120-7406-46d0-a5e8-157f624d4d13-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"05c4c120-7406-46d0-a5e8-157f624d4d13\") " pod="openstack/nova-cell1-conductor-0" Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.882412 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c4c120-7406-46d0-a5e8-157f624d4d13-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"05c4c120-7406-46d0-a5e8-157f624d4d13\") " pod="openstack/nova-cell1-conductor-0" Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.984626 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64kqh\" (UniqueName: \"kubernetes.io/projected/05c4c120-7406-46d0-a5e8-157f624d4d13-kube-api-access-64kqh\") pod \"nova-cell1-conductor-0\" (UID: \"05c4c120-7406-46d0-a5e8-157f624d4d13\") " pod="openstack/nova-cell1-conductor-0" Feb 26 
22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.985040 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c4c120-7406-46d0-a5e8-157f624d4d13-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"05c4c120-7406-46d0-a5e8-157f624d4d13\") " pod="openstack/nova-cell1-conductor-0" Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.985069 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c4c120-7406-46d0-a5e8-157f624d4d13-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"05c4c120-7406-46d0-a5e8-157f624d4d13\") " pod="openstack/nova-cell1-conductor-0" Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.993883 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c4c120-7406-46d0-a5e8-157f624d4d13-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"05c4c120-7406-46d0-a5e8-157f624d4d13\") " pod="openstack/nova-cell1-conductor-0" Feb 26 22:19:37 crc kubenswrapper[4910]: I0226 22:19:37.994755 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c4c120-7406-46d0-a5e8-157f624d4d13-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"05c4c120-7406-46d0-a5e8-157f624d4d13\") " pod="openstack/nova-cell1-conductor-0" Feb 26 22:19:38 crc kubenswrapper[4910]: I0226 22:19:38.007990 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64kqh\" (UniqueName: \"kubernetes.io/projected/05c4c120-7406-46d0-a5e8-157f624d4d13-kube-api-access-64kqh\") pod \"nova-cell1-conductor-0\" (UID: \"05c4c120-7406-46d0-a5e8-157f624d4d13\") " pod="openstack/nova-cell1-conductor-0" Feb 26 22:19:38 crc kubenswrapper[4910]: E0226 22:19:38.046948 4910 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: 
code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="670ce0521b9598d48e38c3ebba0490603beb0377e096af805db3555beb3465cd" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 22:19:38 crc kubenswrapper[4910]: E0226 22:19:38.047940 4910 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="670ce0521b9598d48e38c3ebba0490603beb0377e096af805db3555beb3465cd" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 22:19:38 crc kubenswrapper[4910]: E0226 22:19:38.049396 4910 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="670ce0521b9598d48e38c3ebba0490603beb0377e096af805db3555beb3465cd" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 22:19:38 crc kubenswrapper[4910]: E0226 22:19:38.049429 4910 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="74ce1646-bdb6-4532-ac1b-d6291167c9d6" containerName="nova-scheduler-scheduler" Feb 26 22:19:38 crc kubenswrapper[4910]: I0226 22:19:38.139809 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 26 22:19:38 crc kubenswrapper[4910]: I0226 22:19:38.649490 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 26 22:19:38 crc kubenswrapper[4910]: I0226 22:19:38.680426 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"05c4c120-7406-46d0-a5e8-157f624d4d13","Type":"ContainerStarted","Data":"a969113e7a0a5566f86486af5bffa7ac666583c5aa3f0dce852b4bc8d410dc7b"} Feb 26 22:19:38 crc kubenswrapper[4910]: I0226 22:19:38.680664 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="88fc2d90-0474-40e3-97fe-dcd107a65523" containerName="nova-metadata-metadata" containerID="cri-o://40171d9b2c9baab7b1601669ceb7b0157567a7167e9a2d665e1419dbb218a960" gracePeriod=30 Feb 26 22:19:38 crc kubenswrapper[4910]: I0226 22:19:38.680696 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="88fc2d90-0474-40e3-97fe-dcd107a65523" containerName="nova-metadata-log" containerID="cri-o://458389b7118183bbd689cc951fc75b0dc7a7fd79cca6200795de658f5b5893d9" gracePeriod=30 Feb 26 22:19:38 crc kubenswrapper[4910]: E0226 22:19:38.805431 4910 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88fc2d90_0474_40e3_97fe_dcd107a65523.slice/crio-458389b7118183bbd689cc951fc75b0dc7a7fd79cca6200795de658f5b5893d9.scope\": RecentStats: unable to find data in memory cache]" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.359448 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.413181 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzhbg\" (UniqueName: \"kubernetes.io/projected/88fc2d90-0474-40e3-97fe-dcd107a65523-kube-api-access-wzhbg\") pod \"88fc2d90-0474-40e3-97fe-dcd107a65523\" (UID: \"88fc2d90-0474-40e3-97fe-dcd107a65523\") " Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.413322 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88fc2d90-0474-40e3-97fe-dcd107a65523-config-data\") pod \"88fc2d90-0474-40e3-97fe-dcd107a65523\" (UID: \"88fc2d90-0474-40e3-97fe-dcd107a65523\") " Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.413353 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88fc2d90-0474-40e3-97fe-dcd107a65523-combined-ca-bundle\") pod \"88fc2d90-0474-40e3-97fe-dcd107a65523\" (UID: \"88fc2d90-0474-40e3-97fe-dcd107a65523\") " Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.413405 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88fc2d90-0474-40e3-97fe-dcd107a65523-logs\") pod \"88fc2d90-0474-40e3-97fe-dcd107a65523\" (UID: \"88fc2d90-0474-40e3-97fe-dcd107a65523\") " Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.413548 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/88fc2d90-0474-40e3-97fe-dcd107a65523-nova-metadata-tls-certs\") pod \"88fc2d90-0474-40e3-97fe-dcd107a65523\" (UID: \"88fc2d90-0474-40e3-97fe-dcd107a65523\") " Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.420770 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/88fc2d90-0474-40e3-97fe-dcd107a65523-logs" (OuterVolumeSpecName: "logs") pod "88fc2d90-0474-40e3-97fe-dcd107a65523" (UID: "88fc2d90-0474-40e3-97fe-dcd107a65523"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.423701 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88fc2d90-0474-40e3-97fe-dcd107a65523-kube-api-access-wzhbg" (OuterVolumeSpecName: "kube-api-access-wzhbg") pod "88fc2d90-0474-40e3-97fe-dcd107a65523" (UID: "88fc2d90-0474-40e3-97fe-dcd107a65523"). InnerVolumeSpecName "kube-api-access-wzhbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.452342 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88fc2d90-0474-40e3-97fe-dcd107a65523-config-data" (OuterVolumeSpecName: "config-data") pod "88fc2d90-0474-40e3-97fe-dcd107a65523" (UID: "88fc2d90-0474-40e3-97fe-dcd107a65523"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.460923 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88fc2d90-0474-40e3-97fe-dcd107a65523-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88fc2d90-0474-40e3-97fe-dcd107a65523" (UID: "88fc2d90-0474-40e3-97fe-dcd107a65523"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.490300 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88fc2d90-0474-40e3-97fe-dcd107a65523-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "88fc2d90-0474-40e3-97fe-dcd107a65523" (UID: "88fc2d90-0474-40e3-97fe-dcd107a65523"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.516382 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzhbg\" (UniqueName: \"kubernetes.io/projected/88fc2d90-0474-40e3-97fe-dcd107a65523-kube-api-access-wzhbg\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.516425 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88fc2d90-0474-40e3-97fe-dcd107a65523-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.516440 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88fc2d90-0474-40e3-97fe-dcd107a65523-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.516454 4910 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88fc2d90-0474-40e3-97fe-dcd107a65523-logs\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.516466 4910 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/88fc2d90-0474-40e3-97fe-dcd107a65523-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.693433 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"05c4c120-7406-46d0-a5e8-157f624d4d13","Type":"ContainerStarted","Data":"92385b99034dfa262411a97869744fc84f21aeaa61d31afc05e7d3f95802a20b"} Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.694255 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.695851 4910 
generic.go:334] "Generic (PLEG): container finished" podID="88fc2d90-0474-40e3-97fe-dcd107a65523" containerID="40171d9b2c9baab7b1601669ceb7b0157567a7167e9a2d665e1419dbb218a960" exitCode=0 Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.695898 4910 generic.go:334] "Generic (PLEG): container finished" podID="88fc2d90-0474-40e3-97fe-dcd107a65523" containerID="458389b7118183bbd689cc951fc75b0dc7a7fd79cca6200795de658f5b5893d9" exitCode=143 Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.695927 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.695920 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"88fc2d90-0474-40e3-97fe-dcd107a65523","Type":"ContainerDied","Data":"40171d9b2c9baab7b1601669ceb7b0157567a7167e9a2d665e1419dbb218a960"} Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.696065 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"88fc2d90-0474-40e3-97fe-dcd107a65523","Type":"ContainerDied","Data":"458389b7118183bbd689cc951fc75b0dc7a7fd79cca6200795de658f5b5893d9"} Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.696078 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"88fc2d90-0474-40e3-97fe-dcd107a65523","Type":"ContainerDied","Data":"42f53c76736e54300ad543d9507e67e081c147e36200d2849c31e7c7159beab2"} Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.696098 4910 scope.go:117] "RemoveContainer" containerID="40171d9b2c9baab7b1601669ceb7b0157567a7167e9a2d665e1419dbb218a960" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.730672 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.730645994 podStartE2EDuration="2.730645994s" podCreationTimestamp="2026-02-26 22:19:37 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:19:39.712656844 +0000 UTC m=+1464.792147385" watchObservedRunningTime="2026-02-26 22:19:39.730645994 +0000 UTC m=+1464.810136545" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.735001 4910 scope.go:117] "RemoveContainer" containerID="458389b7118183bbd689cc951fc75b0dc7a7fd79cca6200795de658f5b5893d9" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.758862 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.760745 4910 scope.go:117] "RemoveContainer" containerID="40171d9b2c9baab7b1601669ceb7b0157567a7167e9a2d665e1419dbb218a960" Feb 26 22:19:39 crc kubenswrapper[4910]: E0226 22:19:39.761844 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40171d9b2c9baab7b1601669ceb7b0157567a7167e9a2d665e1419dbb218a960\": container with ID starting with 40171d9b2c9baab7b1601669ceb7b0157567a7167e9a2d665e1419dbb218a960 not found: ID does not exist" containerID="40171d9b2c9baab7b1601669ceb7b0157567a7167e9a2d665e1419dbb218a960" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.761878 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40171d9b2c9baab7b1601669ceb7b0157567a7167e9a2d665e1419dbb218a960"} err="failed to get container status \"40171d9b2c9baab7b1601669ceb7b0157567a7167e9a2d665e1419dbb218a960\": rpc error: code = NotFound desc = could not find container \"40171d9b2c9baab7b1601669ceb7b0157567a7167e9a2d665e1419dbb218a960\": container with ID starting with 40171d9b2c9baab7b1601669ceb7b0157567a7167e9a2d665e1419dbb218a960 not found: ID does not exist" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.761902 4910 scope.go:117] "RemoveContainer" 
containerID="458389b7118183bbd689cc951fc75b0dc7a7fd79cca6200795de658f5b5893d9" Feb 26 22:19:39 crc kubenswrapper[4910]: E0226 22:19:39.762368 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"458389b7118183bbd689cc951fc75b0dc7a7fd79cca6200795de658f5b5893d9\": container with ID starting with 458389b7118183bbd689cc951fc75b0dc7a7fd79cca6200795de658f5b5893d9 not found: ID does not exist" containerID="458389b7118183bbd689cc951fc75b0dc7a7fd79cca6200795de658f5b5893d9" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.762437 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"458389b7118183bbd689cc951fc75b0dc7a7fd79cca6200795de658f5b5893d9"} err="failed to get container status \"458389b7118183bbd689cc951fc75b0dc7a7fd79cca6200795de658f5b5893d9\": rpc error: code = NotFound desc = could not find container \"458389b7118183bbd689cc951fc75b0dc7a7fd79cca6200795de658f5b5893d9\": container with ID starting with 458389b7118183bbd689cc951fc75b0dc7a7fd79cca6200795de658f5b5893d9 not found: ID does not exist" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.762471 4910 scope.go:117] "RemoveContainer" containerID="40171d9b2c9baab7b1601669ceb7b0157567a7167e9a2d665e1419dbb218a960" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.762866 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40171d9b2c9baab7b1601669ceb7b0157567a7167e9a2d665e1419dbb218a960"} err="failed to get container status \"40171d9b2c9baab7b1601669ceb7b0157567a7167e9a2d665e1419dbb218a960\": rpc error: code = NotFound desc = could not find container \"40171d9b2c9baab7b1601669ceb7b0157567a7167e9a2d665e1419dbb218a960\": container with ID starting with 40171d9b2c9baab7b1601669ceb7b0157567a7167e9a2d665e1419dbb218a960 not found: ID does not exist" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.762889 4910 scope.go:117] 
"RemoveContainer" containerID="458389b7118183bbd689cc951fc75b0dc7a7fd79cca6200795de658f5b5893d9" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.763217 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"458389b7118183bbd689cc951fc75b0dc7a7fd79cca6200795de658f5b5893d9"} err="failed to get container status \"458389b7118183bbd689cc951fc75b0dc7a7fd79cca6200795de658f5b5893d9\": rpc error: code = NotFound desc = could not find container \"458389b7118183bbd689cc951fc75b0dc7a7fd79cca6200795de658f5b5893d9\": container with ID starting with 458389b7118183bbd689cc951fc75b0dc7a7fd79cca6200795de658f5b5893d9 not found: ID does not exist" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.770513 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.788268 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 26 22:19:39 crc kubenswrapper[4910]: E0226 22:19:39.788783 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88fc2d90-0474-40e3-97fe-dcd107a65523" containerName="nova-metadata-log" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.788805 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="88fc2d90-0474-40e3-97fe-dcd107a65523" containerName="nova-metadata-log" Feb 26 22:19:39 crc kubenswrapper[4910]: E0226 22:19:39.788818 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88fc2d90-0474-40e3-97fe-dcd107a65523" containerName="nova-metadata-metadata" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.788825 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="88fc2d90-0474-40e3-97fe-dcd107a65523" containerName="nova-metadata-metadata" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.789026 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="88fc2d90-0474-40e3-97fe-dcd107a65523" 
containerName="nova-metadata-metadata" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.789058 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="88fc2d90-0474-40e3-97fe-dcd107a65523" containerName="nova-metadata-log" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.795006 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.798828 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.799114 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.822192 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.822955 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23eb0be8-742b-4fa8-acea-74668f976e0c-logs\") pod \"nova-metadata-0\" (UID: \"23eb0be8-742b-4fa8-acea-74668f976e0c\") " pod="openstack/nova-metadata-0" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.823068 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23eb0be8-742b-4fa8-acea-74668f976e0c-config-data\") pod \"nova-metadata-0\" (UID: \"23eb0be8-742b-4fa8-acea-74668f976e0c\") " pod="openstack/nova-metadata-0" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.824973 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzl6s\" (UniqueName: \"kubernetes.io/projected/23eb0be8-742b-4fa8-acea-74668f976e0c-kube-api-access-tzl6s\") pod \"nova-metadata-0\" (UID: 
\"23eb0be8-742b-4fa8-acea-74668f976e0c\") " pod="openstack/nova-metadata-0" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.825210 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/23eb0be8-742b-4fa8-acea-74668f976e0c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"23eb0be8-742b-4fa8-acea-74668f976e0c\") " pod="openstack/nova-metadata-0" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.825315 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23eb0be8-742b-4fa8-acea-74668f976e0c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"23eb0be8-742b-4fa8-acea-74668f976e0c\") " pod="openstack/nova-metadata-0" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.912607 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88fc2d90-0474-40e3-97fe-dcd107a65523" path="/var/lib/kubelet/pods/88fc2d90-0474-40e3-97fe-dcd107a65523/volumes" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.927064 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23eb0be8-742b-4fa8-acea-74668f976e0c-config-data\") pod \"nova-metadata-0\" (UID: \"23eb0be8-742b-4fa8-acea-74668f976e0c\") " pod="openstack/nova-metadata-0" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.927196 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzl6s\" (UniqueName: \"kubernetes.io/projected/23eb0be8-742b-4fa8-acea-74668f976e0c-kube-api-access-tzl6s\") pod \"nova-metadata-0\" (UID: \"23eb0be8-742b-4fa8-acea-74668f976e0c\") " pod="openstack/nova-metadata-0" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.927250 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/23eb0be8-742b-4fa8-acea-74668f976e0c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"23eb0be8-742b-4fa8-acea-74668f976e0c\") " pod="openstack/nova-metadata-0" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.927295 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23eb0be8-742b-4fa8-acea-74668f976e0c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"23eb0be8-742b-4fa8-acea-74668f976e0c\") " pod="openstack/nova-metadata-0" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.927362 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23eb0be8-742b-4fa8-acea-74668f976e0c-logs\") pod \"nova-metadata-0\" (UID: \"23eb0be8-742b-4fa8-acea-74668f976e0c\") " pod="openstack/nova-metadata-0" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.931494 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23eb0be8-742b-4fa8-acea-74668f976e0c-logs\") pod \"nova-metadata-0\" (UID: \"23eb0be8-742b-4fa8-acea-74668f976e0c\") " pod="openstack/nova-metadata-0" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.931582 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23eb0be8-742b-4fa8-acea-74668f976e0c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"23eb0be8-742b-4fa8-acea-74668f976e0c\") " pod="openstack/nova-metadata-0" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.931673 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/23eb0be8-742b-4fa8-acea-74668f976e0c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"23eb0be8-742b-4fa8-acea-74668f976e0c\") " 
pod="openstack/nova-metadata-0" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.932228 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23eb0be8-742b-4fa8-acea-74668f976e0c-config-data\") pod \"nova-metadata-0\" (UID: \"23eb0be8-742b-4fa8-acea-74668f976e0c\") " pod="openstack/nova-metadata-0" Feb 26 22:19:39 crc kubenswrapper[4910]: I0226 22:19:39.947828 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzl6s\" (UniqueName: \"kubernetes.io/projected/23eb0be8-742b-4fa8-acea-74668f976e0c-kube-api-access-tzl6s\") pod \"nova-metadata-0\" (UID: \"23eb0be8-742b-4fa8-acea-74668f976e0c\") " pod="openstack/nova-metadata-0" Feb 26 22:19:40 crc kubenswrapper[4910]: I0226 22:19:40.124622 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 22:19:40 crc kubenswrapper[4910]: I0226 22:19:40.715322 4910 generic.go:334] "Generic (PLEG): container finished" podID="ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd" containerID="e73d553463caeeff6a90a26c65e9e7e15611c80ac8d027510cfefa131262c8e5" exitCode=0 Feb 26 22:19:40 crc kubenswrapper[4910]: I0226 22:19:40.715584 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd","Type":"ContainerDied","Data":"e73d553463caeeff6a90a26c65e9e7e15611c80ac8d027510cfefa131262c8e5"} Feb 26 22:19:40 crc kubenswrapper[4910]: I0226 22:19:40.822997 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 22:19:40 crc kubenswrapper[4910]: W0226 22:19:40.831569 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23eb0be8_742b_4fa8_acea_74668f976e0c.slice/crio-db7173fa0be8cf00ca887aaf564b28e89232c17782d2422cb5cbacc8b3cc4787 WatchSource:0}: Error finding container 
db7173fa0be8cf00ca887aaf564b28e89232c17782d2422cb5cbacc8b3cc4787: Status 404 returned error can't find the container with id db7173fa0be8cf00ca887aaf564b28e89232c17782d2422cb5cbacc8b3cc4787 Feb 26 22:19:40 crc kubenswrapper[4910]: I0226 22:19:40.890964 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 22:19:40 crc kubenswrapper[4910]: I0226 22:19:40.956027 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd-config-data\") pod \"ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd\" (UID: \"ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd\") " Feb 26 22:19:40 crc kubenswrapper[4910]: I0226 22:19:40.956124 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd-combined-ca-bundle\") pod \"ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd\" (UID: \"ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd\") " Feb 26 22:19:40 crc kubenswrapper[4910]: I0226 22:19:40.956218 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vv6rm\" (UniqueName: \"kubernetes.io/projected/ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd-kube-api-access-vv6rm\") pod \"ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd\" (UID: \"ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd\") " Feb 26 22:19:40 crc kubenswrapper[4910]: I0226 22:19:40.956265 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd-logs\") pod \"ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd\" (UID: \"ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd\") " Feb 26 22:19:40 crc kubenswrapper[4910]: I0226 22:19:40.956874 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd-logs" (OuterVolumeSpecName: 
"logs") pod "ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd" (UID: "ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:19:40 crc kubenswrapper[4910]: I0226 22:19:40.964366 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd-kube-api-access-vv6rm" (OuterVolumeSpecName: "kube-api-access-vv6rm") pod "ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd" (UID: "ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd"). InnerVolumeSpecName "kube-api-access-vv6rm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:19:40 crc kubenswrapper[4910]: I0226 22:19:40.988257 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd" (UID: "ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:19:40 crc kubenswrapper[4910]: I0226 22:19:40.990345 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd-config-data" (OuterVolumeSpecName: "config-data") pod "ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd" (UID: "ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.058413 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.058449 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.058464 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vv6rm\" (UniqueName: \"kubernetes.io/projected/ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd-kube-api-access-vv6rm\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.058476 4910 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd-logs\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.736751 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd","Type":"ContainerDied","Data":"a6f55c498cdde5b875f07b1db78a25bea871bfe98586b099d8408b425656e4c1"} Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.737274 4910 scope.go:117] "RemoveContainer" containerID="e73d553463caeeff6a90a26c65e9e7e15611c80ac8d027510cfefa131262c8e5" Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.737414 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.748675 4910 generic.go:334] "Generic (PLEG): container finished" podID="74ce1646-bdb6-4532-ac1b-d6291167c9d6" containerID="670ce0521b9598d48e38c3ebba0490603beb0377e096af805db3555beb3465cd" exitCode=0 Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.748775 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"74ce1646-bdb6-4532-ac1b-d6291167c9d6","Type":"ContainerDied","Data":"670ce0521b9598d48e38c3ebba0490603beb0377e096af805db3555beb3465cd"} Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.753502 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"23eb0be8-742b-4fa8-acea-74668f976e0c","Type":"ContainerStarted","Data":"ecb3c162daee7fdc2a5576102cae07381f25c8c96181ac4dd9cdcf5b9dc9428c"} Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.753539 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"23eb0be8-742b-4fa8-acea-74668f976e0c","Type":"ContainerStarted","Data":"8944f6b02c5eab45baed548cd3695a36dbd1b3e0d3748882194817fa585e3405"} Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.753552 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"23eb0be8-742b-4fa8-acea-74668f976e0c","Type":"ContainerStarted","Data":"db7173fa0be8cf00ca887aaf564b28e89232c17782d2422cb5cbacc8b3cc4787"} Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.798099 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.808452 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.8084287679999997 podStartE2EDuration="2.808428768s" podCreationTimestamp="2026-02-26 22:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:19:41.789569524 +0000 UTC m=+1466.869060135" watchObservedRunningTime="2026-02-26 22:19:41.808428768 +0000 UTC m=+1466.887919309" Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.830565 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.835323 4910 scope.go:117] "RemoveContainer" containerID="c04ae3135f2f30124df1eec273c2d32eda57b0d78a345850b653320b05b0c6ff" Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.869287 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.877223 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74ce1646-bdb6-4532-ac1b-d6291167c9d6-combined-ca-bundle\") pod \"74ce1646-bdb6-4532-ac1b-d6291167c9d6\" (UID: \"74ce1646-bdb6-4532-ac1b-d6291167c9d6\") " Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.877981 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd6n5\" (UniqueName: \"kubernetes.io/projected/74ce1646-bdb6-4532-ac1b-d6291167c9d6-kube-api-access-rd6n5\") pod \"74ce1646-bdb6-4532-ac1b-d6291167c9d6\" (UID: \"74ce1646-bdb6-4532-ac1b-d6291167c9d6\") " Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.878118 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/74ce1646-bdb6-4532-ac1b-d6291167c9d6-config-data\") pod \"74ce1646-bdb6-4532-ac1b-d6291167c9d6\" (UID: \"74ce1646-bdb6-4532-ac1b-d6291167c9d6\") " Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.889758 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74ce1646-bdb6-4532-ac1b-d6291167c9d6-kube-api-access-rd6n5" (OuterVolumeSpecName: "kube-api-access-rd6n5") pod "74ce1646-bdb6-4532-ac1b-d6291167c9d6" (UID: "74ce1646-bdb6-4532-ac1b-d6291167c9d6"). InnerVolumeSpecName "kube-api-access-rd6n5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.896094 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 26 22:19:41 crc kubenswrapper[4910]: E0226 22:19:41.896716 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd" containerName="nova-api-log" Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.896736 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd" containerName="nova-api-log" Feb 26 22:19:41 crc kubenswrapper[4910]: E0226 22:19:41.896754 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74ce1646-bdb6-4532-ac1b-d6291167c9d6" containerName="nova-scheduler-scheduler" Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.896763 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="74ce1646-bdb6-4532-ac1b-d6291167c9d6" containerName="nova-scheduler-scheduler" Feb 26 22:19:41 crc kubenswrapper[4910]: E0226 22:19:41.896790 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd" containerName="nova-api-api" Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.896801 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd" containerName="nova-api-api" Feb 26 22:19:41 crc 
kubenswrapper[4910]: I0226 22:19:41.897073 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd" containerName="nova-api-api" Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.897110 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="74ce1646-bdb6-4532-ac1b-d6291167c9d6" containerName="nova-scheduler-scheduler" Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.897124 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd" containerName="nova-api-log" Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.898694 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.901690 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.917664 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74ce1646-bdb6-4532-ac1b-d6291167c9d6-config-data" (OuterVolumeSpecName: "config-data") pod "74ce1646-bdb6-4532-ac1b-d6291167c9d6" (UID: "74ce1646-bdb6-4532-ac1b-d6291167c9d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.920608 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd" path="/var/lib/kubelet/pods/ad62ab89-d6b7-4c1f-8e4e-cfb2de5799cd/volumes" Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.924148 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74ce1646-bdb6-4532-ac1b-d6291167c9d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74ce1646-bdb6-4532-ac1b-d6291167c9d6" (UID: "74ce1646-bdb6-4532-ac1b-d6291167c9d6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.927598 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.980655 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34522f42-d10f-4ccb-b40a-ba21f7dcbe56-logs\") pod \"nova-api-0\" (UID: \"34522f42-d10f-4ccb-b40a-ba21f7dcbe56\") " pod="openstack/nova-api-0" Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.980834 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34522f42-d10f-4ccb-b40a-ba21f7dcbe56-config-data\") pod \"nova-api-0\" (UID: \"34522f42-d10f-4ccb-b40a-ba21f7dcbe56\") " pod="openstack/nova-api-0" Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.980857 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv4hn\" (UniqueName: \"kubernetes.io/projected/34522f42-d10f-4ccb-b40a-ba21f7dcbe56-kube-api-access-vv4hn\") pod \"nova-api-0\" (UID: \"34522f42-d10f-4ccb-b40a-ba21f7dcbe56\") " pod="openstack/nova-api-0" Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.980885 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34522f42-d10f-4ccb-b40a-ba21f7dcbe56-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"34522f42-d10f-4ccb-b40a-ba21f7dcbe56\") " pod="openstack/nova-api-0" Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.980984 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74ce1646-bdb6-4532-ac1b-d6291167c9d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 
22:19:41.981000 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd6n5\" (UniqueName: \"kubernetes.io/projected/74ce1646-bdb6-4532-ac1b-d6291167c9d6-kube-api-access-rd6n5\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:41 crc kubenswrapper[4910]: I0226 22:19:41.981010 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74ce1646-bdb6-4532-ac1b-d6291167c9d6-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:19:42 crc kubenswrapper[4910]: I0226 22:19:42.082391 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34522f42-d10f-4ccb-b40a-ba21f7dcbe56-logs\") pod \"nova-api-0\" (UID: \"34522f42-d10f-4ccb-b40a-ba21f7dcbe56\") " pod="openstack/nova-api-0" Feb 26 22:19:42 crc kubenswrapper[4910]: I0226 22:19:42.082573 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34522f42-d10f-4ccb-b40a-ba21f7dcbe56-config-data\") pod \"nova-api-0\" (UID: \"34522f42-d10f-4ccb-b40a-ba21f7dcbe56\") " pod="openstack/nova-api-0" Feb 26 22:19:42 crc kubenswrapper[4910]: I0226 22:19:42.082602 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv4hn\" (UniqueName: \"kubernetes.io/projected/34522f42-d10f-4ccb-b40a-ba21f7dcbe56-kube-api-access-vv4hn\") pod \"nova-api-0\" (UID: \"34522f42-d10f-4ccb-b40a-ba21f7dcbe56\") " pod="openstack/nova-api-0" Feb 26 22:19:42 crc kubenswrapper[4910]: I0226 22:19:42.082638 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34522f42-d10f-4ccb-b40a-ba21f7dcbe56-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"34522f42-d10f-4ccb-b40a-ba21f7dcbe56\") " pod="openstack/nova-api-0" Feb 26 22:19:42 crc kubenswrapper[4910]: I0226 22:19:42.083089 4910 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34522f42-d10f-4ccb-b40a-ba21f7dcbe56-logs\") pod \"nova-api-0\" (UID: \"34522f42-d10f-4ccb-b40a-ba21f7dcbe56\") " pod="openstack/nova-api-0" Feb 26 22:19:42 crc kubenswrapper[4910]: I0226 22:19:42.087219 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34522f42-d10f-4ccb-b40a-ba21f7dcbe56-config-data\") pod \"nova-api-0\" (UID: \"34522f42-d10f-4ccb-b40a-ba21f7dcbe56\") " pod="openstack/nova-api-0" Feb 26 22:19:42 crc kubenswrapper[4910]: I0226 22:19:42.087737 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34522f42-d10f-4ccb-b40a-ba21f7dcbe56-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"34522f42-d10f-4ccb-b40a-ba21f7dcbe56\") " pod="openstack/nova-api-0" Feb 26 22:19:42 crc kubenswrapper[4910]: I0226 22:19:42.099122 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv4hn\" (UniqueName: \"kubernetes.io/projected/34522f42-d10f-4ccb-b40a-ba21f7dcbe56-kube-api-access-vv4hn\") pod \"nova-api-0\" (UID: \"34522f42-d10f-4ccb-b40a-ba21f7dcbe56\") " pod="openstack/nova-api-0" Feb 26 22:19:42 crc kubenswrapper[4910]: I0226 22:19:42.221675 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 26 22:19:42 crc kubenswrapper[4910]: I0226 22:19:42.726785 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 22:19:42 crc kubenswrapper[4910]: I0226 22:19:42.764684 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"34522f42-d10f-4ccb-b40a-ba21f7dcbe56","Type":"ContainerStarted","Data":"610d9cd13b22c3c33cfad7a465855dda163c36603eedffdbcc854c16af01d411"} Feb 26 22:19:42 crc kubenswrapper[4910]: I0226 22:19:42.767806 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"74ce1646-bdb6-4532-ac1b-d6291167c9d6","Type":"ContainerDied","Data":"35a5805b2b7dc8baa779126c791c2d73dc3294ff0213e6fea4e25e3026688bd2"} Feb 26 22:19:42 crc kubenswrapper[4910]: I0226 22:19:42.767851 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 22:19:42 crc kubenswrapper[4910]: I0226 22:19:42.767868 4910 scope.go:117] "RemoveContainer" containerID="670ce0521b9598d48e38c3ebba0490603beb0377e096af805db3555beb3465cd" Feb 26 22:19:42 crc kubenswrapper[4910]: I0226 22:19:42.858581 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 22:19:42 crc kubenswrapper[4910]: I0226 22:19:42.909512 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 22:19:42 crc kubenswrapper[4910]: I0226 22:19:42.920554 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 22:19:42 crc kubenswrapper[4910]: I0226 22:19:42.922270 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 22:19:42 crc kubenswrapper[4910]: I0226 22:19:42.923914 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 26 22:19:42 crc kubenswrapper[4910]: I0226 22:19:42.928722 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 22:19:43 crc kubenswrapper[4910]: I0226 22:19:43.004546 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b754ac5-70ff-4b7c-9385-61ca219f9bb8-config-data\") pod \"nova-scheduler-0\" (UID: \"1b754ac5-70ff-4b7c-9385-61ca219f9bb8\") " pod="openstack/nova-scheduler-0" Feb 26 22:19:43 crc kubenswrapper[4910]: I0226 22:19:43.004617 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28xcr\" (UniqueName: \"kubernetes.io/projected/1b754ac5-70ff-4b7c-9385-61ca219f9bb8-kube-api-access-28xcr\") pod \"nova-scheduler-0\" (UID: \"1b754ac5-70ff-4b7c-9385-61ca219f9bb8\") " pod="openstack/nova-scheduler-0" Feb 26 22:19:43 crc kubenswrapper[4910]: I0226 22:19:43.004655 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b754ac5-70ff-4b7c-9385-61ca219f9bb8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1b754ac5-70ff-4b7c-9385-61ca219f9bb8\") " pod="openstack/nova-scheduler-0" Feb 26 22:19:43 crc kubenswrapper[4910]: I0226 22:19:43.106227 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b754ac5-70ff-4b7c-9385-61ca219f9bb8-config-data\") pod \"nova-scheduler-0\" (UID: \"1b754ac5-70ff-4b7c-9385-61ca219f9bb8\") " pod="openstack/nova-scheduler-0" Feb 26 22:19:43 crc kubenswrapper[4910]: I0226 22:19:43.106296 4910 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-28xcr\" (UniqueName: \"kubernetes.io/projected/1b754ac5-70ff-4b7c-9385-61ca219f9bb8-kube-api-access-28xcr\") pod \"nova-scheduler-0\" (UID: \"1b754ac5-70ff-4b7c-9385-61ca219f9bb8\") " pod="openstack/nova-scheduler-0" Feb 26 22:19:43 crc kubenswrapper[4910]: I0226 22:19:43.106321 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b754ac5-70ff-4b7c-9385-61ca219f9bb8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1b754ac5-70ff-4b7c-9385-61ca219f9bb8\") " pod="openstack/nova-scheduler-0" Feb 26 22:19:43 crc kubenswrapper[4910]: I0226 22:19:43.109851 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b754ac5-70ff-4b7c-9385-61ca219f9bb8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1b754ac5-70ff-4b7c-9385-61ca219f9bb8\") " pod="openstack/nova-scheduler-0" Feb 26 22:19:43 crc kubenswrapper[4910]: I0226 22:19:43.111562 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b754ac5-70ff-4b7c-9385-61ca219f9bb8-config-data\") pod \"nova-scheduler-0\" (UID: \"1b754ac5-70ff-4b7c-9385-61ca219f9bb8\") " pod="openstack/nova-scheduler-0" Feb 26 22:19:43 crc kubenswrapper[4910]: I0226 22:19:43.122782 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28xcr\" (UniqueName: \"kubernetes.io/projected/1b754ac5-70ff-4b7c-9385-61ca219f9bb8-kube-api-access-28xcr\") pod \"nova-scheduler-0\" (UID: \"1b754ac5-70ff-4b7c-9385-61ca219f9bb8\") " pod="openstack/nova-scheduler-0" Feb 26 22:19:43 crc kubenswrapper[4910]: I0226 22:19:43.185388 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 26 22:19:43 crc kubenswrapper[4910]: I0226 22:19:43.240832 4910 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 22:19:43 crc kubenswrapper[4910]: I0226 22:19:43.708548 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 22:19:43 crc kubenswrapper[4910]: W0226 22:19:43.716712 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b754ac5_70ff_4b7c_9385_61ca219f9bb8.slice/crio-08d9ffe93405f1f37f23eccb0f0abf1c578a6475f2c2199dd9f71a245f6715fd WatchSource:0}: Error finding container 08d9ffe93405f1f37f23eccb0f0abf1c578a6475f2c2199dd9f71a245f6715fd: Status 404 returned error can't find the container with id 08d9ffe93405f1f37f23eccb0f0abf1c578a6475f2c2199dd9f71a245f6715fd Feb 26 22:19:43 crc kubenswrapper[4910]: I0226 22:19:43.801884 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"34522f42-d10f-4ccb-b40a-ba21f7dcbe56","Type":"ContainerStarted","Data":"69ad1fdc354bb92780a94dc0299288bc83a02ad597f17e4ce2bbb925a1357c08"} Feb 26 22:19:43 crc kubenswrapper[4910]: I0226 22:19:43.802200 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"34522f42-d10f-4ccb-b40a-ba21f7dcbe56","Type":"ContainerStarted","Data":"ea8134f6a99d5e39df1150c2f59047e80775fc042f92f6605601a3560faf0667"} Feb 26 22:19:43 crc kubenswrapper[4910]: I0226 22:19:43.806125 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b754ac5-70ff-4b7c-9385-61ca219f9bb8","Type":"ContainerStarted","Data":"08d9ffe93405f1f37f23eccb0f0abf1c578a6475f2c2199dd9f71a245f6715fd"} Feb 26 22:19:43 crc kubenswrapper[4910]: I0226 22:19:43.928200 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74ce1646-bdb6-4532-ac1b-d6291167c9d6" path="/var/lib/kubelet/pods/74ce1646-bdb6-4532-ac1b-d6291167c9d6/volumes" Feb 26 22:19:44 crc kubenswrapper[4910]: I0226 22:19:44.834909 4910 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b754ac5-70ff-4b7c-9385-61ca219f9bb8","Type":"ContainerStarted","Data":"27656c966a3db140a0eb8b0ad7691768feddbd0dc80f49a86b74f961ab9d6f0b"}
Feb 26 22:19:44 crc kubenswrapper[4910]: I0226 22:19:44.861304 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.861285794 podStartE2EDuration="2.861285794s" podCreationTimestamp="2026-02-26 22:19:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:19:44.856627907 +0000 UTC m=+1469.936118438" watchObservedRunningTime="2026-02-26 22:19:44.861285794 +0000 UTC m=+1469.940776345"
Feb 26 22:19:44 crc kubenswrapper[4910]: I0226 22:19:44.867031 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.86701979 podStartE2EDuration="3.86701979s" podCreationTimestamp="2026-02-26 22:19:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:19:43.834269936 +0000 UTC m=+1468.913760507" watchObservedRunningTime="2026-02-26 22:19:44.86701979 +0000 UTC m=+1469.946510341"
Feb 26 22:19:45 crc kubenswrapper[4910]: I0226 22:19:45.125244 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 26 22:19:45 crc kubenswrapper[4910]: I0226 22:19:45.125489 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 26 22:19:48 crc kubenswrapper[4910]: I0226 22:19:48.241362 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 26 22:19:48 crc kubenswrapper[4910]: I0226 22:19:48.901376 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 26 22:19:50 crc kubenswrapper[4910]: I0226 22:19:50.124923 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 26 22:19:50 crc kubenswrapper[4910]: I0226 22:19:50.125362 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 26 22:19:51 crc kubenswrapper[4910]: I0226 22:19:51.143467 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="23eb0be8-742b-4fa8-acea-74668f976e0c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.229:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 26 22:19:51 crc kubenswrapper[4910]: I0226 22:19:51.143524 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="23eb0be8-742b-4fa8-acea-74668f976e0c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.229:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 26 22:19:52 crc kubenswrapper[4910]: I0226 22:19:52.222308 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 26 22:19:52 crc kubenswrapper[4910]: I0226 22:19:52.222353 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 26 22:19:52 crc kubenswrapper[4910]: I0226 22:19:52.973577 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 26 22:19:52 crc kubenswrapper[4910]: I0226 22:19:52.974116 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="5ef2ec31-3ae1-42ea-aaa5-5fec166df179" containerName="kube-state-metrics" containerID="cri-o://5ae44ac0590ba5529626774e4f267fd4f32cd7209bce33bc0b5c02ed8b76eec4" gracePeriod=30
Feb 26 22:19:53 crc kubenswrapper[4910]: I0226 22:19:53.242178 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 26 22:19:53 crc kubenswrapper[4910]: I0226 22:19:53.285022 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 26 22:19:53 crc kubenswrapper[4910]: I0226 22:19:53.307516 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="34522f42-d10f-4ccb-b40a-ba21f7dcbe56" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.230:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 22:19:53 crc kubenswrapper[4910]: I0226 22:19:53.307793 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="34522f42-d10f-4ccb-b40a-ba21f7dcbe56" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.230:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 22:19:53 crc kubenswrapper[4910]: I0226 22:19:53.610245 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 26 22:19:53 crc kubenswrapper[4910]: I0226 22:19:53.635638 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jppcm\" (UniqueName: \"kubernetes.io/projected/5ef2ec31-3ae1-42ea-aaa5-5fec166df179-kube-api-access-jppcm\") pod \"5ef2ec31-3ae1-42ea-aaa5-5fec166df179\" (UID: \"5ef2ec31-3ae1-42ea-aaa5-5fec166df179\") "
Feb 26 22:19:53 crc kubenswrapper[4910]: I0226 22:19:53.656751 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ef2ec31-3ae1-42ea-aaa5-5fec166df179-kube-api-access-jppcm" (OuterVolumeSpecName: "kube-api-access-jppcm") pod "5ef2ec31-3ae1-42ea-aaa5-5fec166df179" (UID: "5ef2ec31-3ae1-42ea-aaa5-5fec166df179"). InnerVolumeSpecName "kube-api-access-jppcm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 22:19:53 crc kubenswrapper[4910]: I0226 22:19:53.737314 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jppcm\" (UniqueName: \"kubernetes.io/projected/5ef2ec31-3ae1-42ea-aaa5-5fec166df179-kube-api-access-jppcm\") on node \"crc\" DevicePath \"\""
Feb 26 22:19:53 crc kubenswrapper[4910]: I0226 22:19:53.945586 4910 generic.go:334] "Generic (PLEG): container finished" podID="5ef2ec31-3ae1-42ea-aaa5-5fec166df179" containerID="5ae44ac0590ba5529626774e4f267fd4f32cd7209bce33bc0b5c02ed8b76eec4" exitCode=2
Feb 26 22:19:53 crc kubenswrapper[4910]: I0226 22:19:53.946813 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 26 22:19:53 crc kubenswrapper[4910]: I0226 22:19:53.947274 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5ef2ec31-3ae1-42ea-aaa5-5fec166df179","Type":"ContainerDied","Data":"5ae44ac0590ba5529626774e4f267fd4f32cd7209bce33bc0b5c02ed8b76eec4"}
Feb 26 22:19:53 crc kubenswrapper[4910]: I0226 22:19:53.947304 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5ef2ec31-3ae1-42ea-aaa5-5fec166df179","Type":"ContainerDied","Data":"fe54533c4c9725881f318acab1d14215d07cc52afd3806d1e042d8fa5fd6b4d1"}
Feb 26 22:19:53 crc kubenswrapper[4910]: I0226 22:19:53.947322 4910 scope.go:117] "RemoveContainer" containerID="5ae44ac0590ba5529626774e4f267fd4f32cd7209bce33bc0b5c02ed8b76eec4"
Feb 26 22:19:53 crc kubenswrapper[4910]: I0226 22:19:53.975625 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 26 22:19:53 crc kubenswrapper[4910]: I0226 22:19:53.987336 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 26 22:19:53 crc kubenswrapper[4910]: I0226 22:19:53.989961 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 26 22:19:53 crc kubenswrapper[4910]: I0226 22:19:53.996799 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 26 22:19:53 crc kubenswrapper[4910]: E0226 22:19:53.997289 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ef2ec31-3ae1-42ea-aaa5-5fec166df179" containerName="kube-state-metrics"
Feb 26 22:19:53 crc kubenswrapper[4910]: I0226 22:19:53.997304 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ef2ec31-3ae1-42ea-aaa5-5fec166df179" containerName="kube-state-metrics"
Feb 26 22:19:53 crc kubenswrapper[4910]: I0226 22:19:53.997491 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ef2ec31-3ae1-42ea-aaa5-5fec166df179" containerName="kube-state-metrics"
Feb 26 22:19:53 crc kubenswrapper[4910]: I0226 22:19:53.998244 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 26 22:19:54 crc kubenswrapper[4910]: I0226 22:19:54.001913 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Feb 26 22:19:54 crc kubenswrapper[4910]: I0226 22:19:54.002042 4910 scope.go:117] "RemoveContainer" containerID="5ae44ac0590ba5529626774e4f267fd4f32cd7209bce33bc0b5c02ed8b76eec4"
Feb 26 22:19:54 crc kubenswrapper[4910]: I0226 22:19:54.002067 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Feb 26 22:19:54 crc kubenswrapper[4910]: E0226 22:19:54.002544 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ae44ac0590ba5529626774e4f267fd4f32cd7209bce33bc0b5c02ed8b76eec4\": container with ID starting with 5ae44ac0590ba5529626774e4f267fd4f32cd7209bce33bc0b5c02ed8b76eec4 not found: ID does not exist" containerID="5ae44ac0590ba5529626774e4f267fd4f32cd7209bce33bc0b5c02ed8b76eec4"
Feb 26 22:19:54 crc kubenswrapper[4910]: I0226 22:19:54.002579 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ae44ac0590ba5529626774e4f267fd4f32cd7209bce33bc0b5c02ed8b76eec4"} err="failed to get container status \"5ae44ac0590ba5529626774e4f267fd4f32cd7209bce33bc0b5c02ed8b76eec4\": rpc error: code = NotFound desc = could not find container \"5ae44ac0590ba5529626774e4f267fd4f32cd7209bce33bc0b5c02ed8b76eec4\": container with ID starting with 5ae44ac0590ba5529626774e4f267fd4f32cd7209bce33bc0b5c02ed8b76eec4 not found: ID does not exist"
Feb 26 22:19:54 crc kubenswrapper[4910]: I0226 22:19:54.014090 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 26 22:19:54 crc kubenswrapper[4910]: I0226 22:19:54.147791 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a97f20f6-39c0-49bb-9f07-559e1d2b5c7f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a97f20f6-39c0-49bb-9f07-559e1d2b5c7f\") " pod="openstack/kube-state-metrics-0"
Feb 26 22:19:54 crc kubenswrapper[4910]: I0226 22:19:54.147925 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjnvr\" (UniqueName: \"kubernetes.io/projected/a97f20f6-39c0-49bb-9f07-559e1d2b5c7f-kube-api-access-kjnvr\") pod \"kube-state-metrics-0\" (UID: \"a97f20f6-39c0-49bb-9f07-559e1d2b5c7f\") " pod="openstack/kube-state-metrics-0"
Feb 26 22:19:54 crc kubenswrapper[4910]: I0226 22:19:54.148008 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a97f20f6-39c0-49bb-9f07-559e1d2b5c7f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a97f20f6-39c0-49bb-9f07-559e1d2b5c7f\") " pod="openstack/kube-state-metrics-0"
Feb 26 22:19:54 crc kubenswrapper[4910]: I0226 22:19:54.148122 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a97f20f6-39c0-49bb-9f07-559e1d2b5c7f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a97f20f6-39c0-49bb-9f07-559e1d2b5c7f\") " pod="openstack/kube-state-metrics-0"
Feb 26 22:19:54 crc kubenswrapper[4910]: I0226 22:19:54.250431 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a97f20f6-39c0-49bb-9f07-559e1d2b5c7f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a97f20f6-39c0-49bb-9f07-559e1d2b5c7f\") " pod="openstack/kube-state-metrics-0"
Feb 26 22:19:54 crc kubenswrapper[4910]: I0226 22:19:54.250538 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjnvr\" (UniqueName: \"kubernetes.io/projected/a97f20f6-39c0-49bb-9f07-559e1d2b5c7f-kube-api-access-kjnvr\") pod \"kube-state-metrics-0\" (UID: \"a97f20f6-39c0-49bb-9f07-559e1d2b5c7f\") " pod="openstack/kube-state-metrics-0"
Feb 26 22:19:54 crc kubenswrapper[4910]: I0226 22:19:54.250581 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a97f20f6-39c0-49bb-9f07-559e1d2b5c7f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a97f20f6-39c0-49bb-9f07-559e1d2b5c7f\") " pod="openstack/kube-state-metrics-0"
Feb 26 22:19:54 crc kubenswrapper[4910]: I0226 22:19:54.250613 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a97f20f6-39c0-49bb-9f07-559e1d2b5c7f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a97f20f6-39c0-49bb-9f07-559e1d2b5c7f\") " pod="openstack/kube-state-metrics-0"
Feb 26 22:19:54 crc kubenswrapper[4910]: I0226 22:19:54.258928 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a97f20f6-39c0-49bb-9f07-559e1d2b5c7f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a97f20f6-39c0-49bb-9f07-559e1d2b5c7f\") " pod="openstack/kube-state-metrics-0"
Feb 26 22:19:54 crc kubenswrapper[4910]: I0226 22:19:54.265284 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a97f20f6-39c0-49bb-9f07-559e1d2b5c7f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a97f20f6-39c0-49bb-9f07-559e1d2b5c7f\") " pod="openstack/kube-state-metrics-0"
Feb 26 22:19:54 crc kubenswrapper[4910]: I0226 22:19:54.266457 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a97f20f6-39c0-49bb-9f07-559e1d2b5c7f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a97f20f6-39c0-49bb-9f07-559e1d2b5c7f\") " pod="openstack/kube-state-metrics-0"
Feb 26 22:19:54 crc kubenswrapper[4910]: I0226 22:19:54.275785 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjnvr\" (UniqueName: \"kubernetes.io/projected/a97f20f6-39c0-49bb-9f07-559e1d2b5c7f-kube-api-access-kjnvr\") pod \"kube-state-metrics-0\" (UID: \"a97f20f6-39c0-49bb-9f07-559e1d2b5c7f\") " pod="openstack/kube-state-metrics-0"
Feb 26 22:19:54 crc kubenswrapper[4910]: I0226 22:19:54.317939 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 26 22:19:54 crc kubenswrapper[4910]: I0226 22:19:54.895753 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 26 22:19:54 crc kubenswrapper[4910]: I0226 22:19:54.955317 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a97f20f6-39c0-49bb-9f07-559e1d2b5c7f","Type":"ContainerStarted","Data":"d52339e74dded7889ae038f387956abaed967fbb6ef2fb14c54da12b9197bb01"}
Feb 26 22:19:55 crc kubenswrapper[4910]: I0226 22:19:55.057963 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 26 22:19:55 crc kubenswrapper[4910]: I0226 22:19:55.058226 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6fcb104e-6cec-4785-97a5-4afbbc1ff73b" containerName="ceilometer-central-agent" containerID="cri-o://41f93ec9dc76644e7e787e094bcd639370ea663f261abf532fa6fa24f6118f97" gracePeriod=30
Feb 26 22:19:55 crc kubenswrapper[4910]: I0226 22:19:55.058303 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6fcb104e-6cec-4785-97a5-4afbbc1ff73b" containerName="proxy-httpd" containerID="cri-o://d34db7ca3c529928fc1e6c9557a71f4d0ab65834678cdd36b038f6731fe3fd1d" gracePeriod=30
Feb 26 22:19:55 crc kubenswrapper[4910]: I0226 22:19:55.058324 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6fcb104e-6cec-4785-97a5-4afbbc1ff73b" containerName="sg-core" containerID="cri-o://d40a0656d9c5c7974813da63872c8ab899d1df1686de72819f74d777ef282b10" gracePeriod=30
Feb 26 22:19:55 crc kubenswrapper[4910]: I0226 22:19:55.058374 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6fcb104e-6cec-4785-97a5-4afbbc1ff73b" containerName="ceilometer-notification-agent" containerID="cri-o://055f3ecd53c3fe72cdcddaff56b6698ca51aaf8b3201aa96c28f0c6037c6e0a3" gracePeriod=30
Feb 26 22:19:55 crc kubenswrapper[4910]: I0226 22:19:55.914349 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ef2ec31-3ae1-42ea-aaa5-5fec166df179" path="/var/lib/kubelet/pods/5ef2ec31-3ae1-42ea-aaa5-5fec166df179/volumes"
Feb 26 22:19:55 crc kubenswrapper[4910]: I0226 22:19:55.966770 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a97f20f6-39c0-49bb-9f07-559e1d2b5c7f","Type":"ContainerStarted","Data":"76697eec77c41cde6f23b6a9cf3cbb148dcebd8e25e9c628754be63a19cd387c"}
Feb 26 22:19:55 crc kubenswrapper[4910]: I0226 22:19:55.967554 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 26 22:19:55 crc kubenswrapper[4910]: I0226 22:19:55.975010 4910 generic.go:334] "Generic (PLEG): container finished" podID="6fcb104e-6cec-4785-97a5-4afbbc1ff73b" containerID="d34db7ca3c529928fc1e6c9557a71f4d0ab65834678cdd36b038f6731fe3fd1d" exitCode=0
Feb 26 22:19:55 crc kubenswrapper[4910]: I0226 22:19:55.975049 4910 generic.go:334] "Generic (PLEG): container finished" podID="6fcb104e-6cec-4785-97a5-4afbbc1ff73b" containerID="d40a0656d9c5c7974813da63872c8ab899d1df1686de72819f74d777ef282b10" exitCode=2
Feb 26 22:19:55 crc kubenswrapper[4910]: I0226 22:19:55.975059 4910 generic.go:334] "Generic (PLEG): container finished" podID="6fcb104e-6cec-4785-97a5-4afbbc1ff73b" containerID="41f93ec9dc76644e7e787e094bcd639370ea663f261abf532fa6fa24f6118f97" exitCode=0
Feb 26 22:19:55 crc kubenswrapper[4910]: I0226 22:19:55.975104 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6fcb104e-6cec-4785-97a5-4afbbc1ff73b","Type":"ContainerDied","Data":"d34db7ca3c529928fc1e6c9557a71f4d0ab65834678cdd36b038f6731fe3fd1d"}
Feb 26 22:19:55 crc kubenswrapper[4910]: I0226 22:19:55.975242 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6fcb104e-6cec-4785-97a5-4afbbc1ff73b","Type":"ContainerDied","Data":"d40a0656d9c5c7974813da63872c8ab899d1df1686de72819f74d777ef282b10"}
Feb 26 22:19:55 crc kubenswrapper[4910]: I0226 22:19:55.975269 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6fcb104e-6cec-4785-97a5-4afbbc1ff73b","Type":"ContainerDied","Data":"41f93ec9dc76644e7e787e094bcd639370ea663f261abf532fa6fa24f6118f97"}
Feb 26 22:19:55 crc kubenswrapper[4910]: I0226 22:19:55.997083 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.587883299 podStartE2EDuration="2.997056274s" podCreationTimestamp="2026-02-26 22:19:53 +0000 UTC" firstStartedPulling="2026-02-26 22:19:54.897845768 +0000 UTC m=+1479.977336309" lastFinishedPulling="2026-02-26 22:19:55.307018743 +0000 UTC m=+1480.386509284" observedRunningTime="2026-02-26 22:19:55.984847492 +0000 UTC m=+1481.064338033" watchObservedRunningTime="2026-02-26 22:19:55.997056274 +0000 UTC m=+1481.076546835"
Feb 26 22:19:58 crc kubenswrapper[4910]: I0226 22:19:58.568076 4910 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="5ef2ec31-3ae1-42ea-aaa5-5fec166df179" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 22:19:59 crc kubenswrapper[4910]: I0226 22:19:59.967241 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.022331 4910 generic.go:334] "Generic (PLEG): container finished" podID="57fd7488-e497-4abf-8875-bacde64f7cc3" containerID="18d69e7098ec56c72b16ab1ed1eb56be02ee031957db05722e7bfea4291dcc62" exitCode=137
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.022388 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"57fd7488-e497-4abf-8875-bacde64f7cc3","Type":"ContainerDied","Data":"18d69e7098ec56c72b16ab1ed1eb56be02ee031957db05722e7bfea4291dcc62"}
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.022414 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"57fd7488-e497-4abf-8875-bacde64f7cc3","Type":"ContainerDied","Data":"be1cfb18c6528729bad1cae3c85da34c5eccb842c356ac3b1e12bcf975820818"}
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.022423 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be1cfb18c6528729bad1cae3c85da34c5eccb842c356ac3b1e12bcf975820818"
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.024416 4910 generic.go:334] "Generic (PLEG): container finished" podID="6fcb104e-6cec-4785-97a5-4afbbc1ff73b" containerID="055f3ecd53c3fe72cdcddaff56b6698ca51aaf8b3201aa96c28f0c6037c6e0a3" exitCode=0
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.024445 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6fcb104e-6cec-4785-97a5-4afbbc1ff73b","Type":"ContainerDied","Data":"055f3ecd53c3fe72cdcddaff56b6698ca51aaf8b3201aa96c28f0c6037c6e0a3"}
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.024466 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6fcb104e-6cec-4785-97a5-4afbbc1ff73b","Type":"ContainerDied","Data":"fc15eca9c074ad632f0ca06cc16afc8b4ef0b89307ca823ac4a0cb593677a12a"}
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.024483 4910 scope.go:117] "RemoveContainer" containerID="d34db7ca3c529928fc1e6c9557a71f4d0ab65834678cdd36b038f6731fe3fd1d"
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.024827 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.042629 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.091112 4910 scope.go:117] "RemoveContainer" containerID="d40a0656d9c5c7974813da63872c8ab899d1df1686de72819f74d777ef282b10"
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.099316 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-run-httpd\") pod \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\" (UID: \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\") "
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.099460 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-sg-core-conf-yaml\") pod \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\" (UID: \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\") "
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.099521 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjmtp\" (UniqueName: \"kubernetes.io/projected/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-kube-api-access-sjmtp\") pod \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\" (UID: \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\") "
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.099545 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-combined-ca-bundle\") pod \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\" (UID: \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\") "
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.099571 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-scripts\") pod \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\" (UID: \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\") "
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.099635 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-log-httpd\") pod \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\" (UID: \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\") "
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.099709 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-config-data\") pod \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\" (UID: \"6fcb104e-6cec-4785-97a5-4afbbc1ff73b\") "
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.104839 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-kube-api-access-sjmtp" (OuterVolumeSpecName: "kube-api-access-sjmtp") pod "6fcb104e-6cec-4785-97a5-4afbbc1ff73b" (UID: "6fcb104e-6cec-4785-97a5-4afbbc1ff73b"). InnerVolumeSpecName "kube-api-access-sjmtp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.109343 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6fcb104e-6cec-4785-97a5-4afbbc1ff73b" (UID: "6fcb104e-6cec-4785-97a5-4afbbc1ff73b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.137015 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-scripts" (OuterVolumeSpecName: "scripts") pod "6fcb104e-6cec-4785-97a5-4afbbc1ff73b" (UID: "6fcb104e-6cec-4785-97a5-4afbbc1ff73b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.142511 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6fcb104e-6cec-4785-97a5-4afbbc1ff73b" (UID: "6fcb104e-6cec-4785-97a5-4afbbc1ff73b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.143218 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.145807 4910 scope.go:117] "RemoveContainer" containerID="055f3ecd53c3fe72cdcddaff56b6698ca51aaf8b3201aa96c28f0c6037c6e0a3"
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.149666 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535740-7mkh8"]
Feb 26 22:20:00 crc kubenswrapper[4910]: E0226 22:20:00.150124 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fcb104e-6cec-4785-97a5-4afbbc1ff73b" containerName="sg-core"
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.150139 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fcb104e-6cec-4785-97a5-4afbbc1ff73b" containerName="sg-core"
Feb 26 22:20:00 crc kubenswrapper[4910]: E0226 22:20:00.155264 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fcb104e-6cec-4785-97a5-4afbbc1ff73b" containerName="proxy-httpd"
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.155290 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fcb104e-6cec-4785-97a5-4afbbc1ff73b" containerName="proxy-httpd"
Feb 26 22:20:00 crc kubenswrapper[4910]: E0226 22:20:00.155344 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fcb104e-6cec-4785-97a5-4afbbc1ff73b" containerName="ceilometer-central-agent"
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.155350 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fcb104e-6cec-4785-97a5-4afbbc1ff73b" containerName="ceilometer-central-agent"
Feb 26 22:20:00 crc kubenswrapper[4910]: E0226 22:20:00.155368 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fcb104e-6cec-4785-97a5-4afbbc1ff73b" containerName="ceilometer-notification-agent"
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.155374 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fcb104e-6cec-4785-97a5-4afbbc1ff73b" containerName="ceilometer-notification-agent"
Feb 26 22:20:00 crc kubenswrapper[4910]: E0226 22:20:00.155391 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57fd7488-e497-4abf-8875-bacde64f7cc3" containerName="nova-cell1-novncproxy-novncproxy"
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.155397 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="57fd7488-e497-4abf-8875-bacde64f7cc3" containerName="nova-cell1-novncproxy-novncproxy"
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.155760 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="57fd7488-e497-4abf-8875-bacde64f7cc3" containerName="nova-cell1-novncproxy-novncproxy"
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.155783 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fcb104e-6cec-4785-97a5-4afbbc1ff73b" containerName="ceilometer-notification-agent"
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.155793 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fcb104e-6cec-4785-97a5-4afbbc1ff73b" containerName="proxy-httpd"
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.155807 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fcb104e-6cec-4785-97a5-4afbbc1ff73b" containerName="ceilometer-central-agent"
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.155816 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fcb104e-6cec-4785-97a5-4afbbc1ff73b" containerName="sg-core"
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.156511 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.156560 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.156626 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535740-7mkh8"
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.158105 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s"
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.158315 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.158414 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.177851 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535740-7mkh8"]
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.196910 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fcb104e-6cec-4785-97a5-4afbbc1ff73b" (UID: "6fcb104e-6cec-4785-97a5-4afbbc1ff73b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.203330 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7zcd\" (UniqueName: \"kubernetes.io/projected/57fd7488-e497-4abf-8875-bacde64f7cc3-kube-api-access-l7zcd\") pod \"57fd7488-e497-4abf-8875-bacde64f7cc3\" (UID: \"57fd7488-e497-4abf-8875-bacde64f7cc3\") "
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.203880 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57fd7488-e497-4abf-8875-bacde64f7cc3-combined-ca-bundle\") pod \"57fd7488-e497-4abf-8875-bacde64f7cc3\" (UID: \"57fd7488-e497-4abf-8875-bacde64f7cc3\") "
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.203954 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57fd7488-e497-4abf-8875-bacde64f7cc3-config-data\") pod \"57fd7488-e497-4abf-8875-bacde64f7cc3\" (UID: \"57fd7488-e497-4abf-8875-bacde64f7cc3\") "
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.204467 4910 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.204486 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjmtp\" (UniqueName: \"kubernetes.io/projected/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-kube-api-access-sjmtp\") on node \"crc\" DevicePath \"\""
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.204496 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.204504 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.204513 4910 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.208500 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6fcb104e-6cec-4785-97a5-4afbbc1ff73b" (UID: "6fcb104e-6cec-4785-97a5-4afbbc1ff73b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.208855 4910 scope.go:117] "RemoveContainer" containerID="41f93ec9dc76644e7e787e094bcd639370ea663f261abf532fa6fa24f6118f97"
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.213725 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57fd7488-e497-4abf-8875-bacde64f7cc3-kube-api-access-l7zcd" (OuterVolumeSpecName: "kube-api-access-l7zcd") pod "57fd7488-e497-4abf-8875-bacde64f7cc3" (UID: "57fd7488-e497-4abf-8875-bacde64f7cc3"). InnerVolumeSpecName "kube-api-access-l7zcd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.240641 4910 scope.go:117] "RemoveContainer" containerID="d34db7ca3c529928fc1e6c9557a71f4d0ab65834678cdd36b038f6731fe3fd1d"
Feb 26 22:20:00 crc kubenswrapper[4910]: E0226 22:20:00.243067 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d34db7ca3c529928fc1e6c9557a71f4d0ab65834678cdd36b038f6731fe3fd1d\": container with ID starting with d34db7ca3c529928fc1e6c9557a71f4d0ab65834678cdd36b038f6731fe3fd1d not found: ID does not exist" containerID="d34db7ca3c529928fc1e6c9557a71f4d0ab65834678cdd36b038f6731fe3fd1d"
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.243102 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d34db7ca3c529928fc1e6c9557a71f4d0ab65834678cdd36b038f6731fe3fd1d"} err="failed to get container status \"d34db7ca3c529928fc1e6c9557a71f4d0ab65834678cdd36b038f6731fe3fd1d\": rpc error: code = NotFound desc = could not find container \"d34db7ca3c529928fc1e6c9557a71f4d0ab65834678cdd36b038f6731fe3fd1d\": container with ID starting with d34db7ca3c529928fc1e6c9557a71f4d0ab65834678cdd36b038f6731fe3fd1d not found: ID does not exist"
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.243124 4910 scope.go:117] "RemoveContainer" containerID="d40a0656d9c5c7974813da63872c8ab899d1df1686de72819f74d777ef282b10"
Feb 26 22:20:00 crc kubenswrapper[4910]: E0226 22:20:00.243619 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d40a0656d9c5c7974813da63872c8ab899d1df1686de72819f74d777ef282b10\": container with ID starting with d40a0656d9c5c7974813da63872c8ab899d1df1686de72819f74d777ef282b10 not found: ID does not exist" containerID="d40a0656d9c5c7974813da63872c8ab899d1df1686de72819f74d777ef282b10"
Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.243733 
4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d40a0656d9c5c7974813da63872c8ab899d1df1686de72819f74d777ef282b10"} err="failed to get container status \"d40a0656d9c5c7974813da63872c8ab899d1df1686de72819f74d777ef282b10\": rpc error: code = NotFound desc = could not find container \"d40a0656d9c5c7974813da63872c8ab899d1df1686de72819f74d777ef282b10\": container with ID starting with d40a0656d9c5c7974813da63872c8ab899d1df1686de72819f74d777ef282b10 not found: ID does not exist" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.244139 4910 scope.go:117] "RemoveContainer" containerID="055f3ecd53c3fe72cdcddaff56b6698ca51aaf8b3201aa96c28f0c6037c6e0a3" Feb 26 22:20:00 crc kubenswrapper[4910]: E0226 22:20:00.244811 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"055f3ecd53c3fe72cdcddaff56b6698ca51aaf8b3201aa96c28f0c6037c6e0a3\": container with ID starting with 055f3ecd53c3fe72cdcddaff56b6698ca51aaf8b3201aa96c28f0c6037c6e0a3 not found: ID does not exist" containerID="055f3ecd53c3fe72cdcddaff56b6698ca51aaf8b3201aa96c28f0c6037c6e0a3" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.244842 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"055f3ecd53c3fe72cdcddaff56b6698ca51aaf8b3201aa96c28f0c6037c6e0a3"} err="failed to get container status \"055f3ecd53c3fe72cdcddaff56b6698ca51aaf8b3201aa96c28f0c6037c6e0a3\": rpc error: code = NotFound desc = could not find container \"055f3ecd53c3fe72cdcddaff56b6698ca51aaf8b3201aa96c28f0c6037c6e0a3\": container with ID starting with 055f3ecd53c3fe72cdcddaff56b6698ca51aaf8b3201aa96c28f0c6037c6e0a3 not found: ID does not exist" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.244861 4910 scope.go:117] "RemoveContainer" containerID="41f93ec9dc76644e7e787e094bcd639370ea663f261abf532fa6fa24f6118f97" Feb 26 22:20:00 crc kubenswrapper[4910]: E0226 
22:20:00.245416 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41f93ec9dc76644e7e787e094bcd639370ea663f261abf532fa6fa24f6118f97\": container with ID starting with 41f93ec9dc76644e7e787e094bcd639370ea663f261abf532fa6fa24f6118f97 not found: ID does not exist" containerID="41f93ec9dc76644e7e787e094bcd639370ea663f261abf532fa6fa24f6118f97" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.245441 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41f93ec9dc76644e7e787e094bcd639370ea663f261abf532fa6fa24f6118f97"} err="failed to get container status \"41f93ec9dc76644e7e787e094bcd639370ea663f261abf532fa6fa24f6118f97\": rpc error: code = NotFound desc = could not find container \"41f93ec9dc76644e7e787e094bcd639370ea663f261abf532fa6fa24f6118f97\": container with ID starting with 41f93ec9dc76644e7e787e094bcd639370ea663f261abf532fa6fa24f6118f97 not found: ID does not exist" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.246724 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57fd7488-e497-4abf-8875-bacde64f7cc3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57fd7488-e497-4abf-8875-bacde64f7cc3" (UID: "57fd7488-e497-4abf-8875-bacde64f7cc3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.254590 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57fd7488-e497-4abf-8875-bacde64f7cc3-config-data" (OuterVolumeSpecName: "config-data") pod "57fd7488-e497-4abf-8875-bacde64f7cc3" (UID: "57fd7488-e497-4abf-8875-bacde64f7cc3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.283065 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-config-data" (OuterVolumeSpecName: "config-data") pod "6fcb104e-6cec-4785-97a5-4afbbc1ff73b" (UID: "6fcb104e-6cec-4785-97a5-4afbbc1ff73b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.306444 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn6sj\" (UniqueName: \"kubernetes.io/projected/01a76a37-081d-4322-afc4-a3ba75ebfabe-kube-api-access-wn6sj\") pod \"auto-csr-approver-29535740-7mkh8\" (UID: \"01a76a37-081d-4322-afc4-a3ba75ebfabe\") " pod="openshift-infra/auto-csr-approver-29535740-7mkh8" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.307084 4910 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.307103 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57fd7488-e497-4abf-8875-bacde64f7cc3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.307113 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57fd7488-e497-4abf-8875-bacde64f7cc3-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.307123 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7zcd\" (UniqueName: \"kubernetes.io/projected/57fd7488-e497-4abf-8875-bacde64f7cc3-kube-api-access-l7zcd\") on node \"crc\" DevicePath 
\"\"" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.307175 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fcb104e-6cec-4785-97a5-4afbbc1ff73b-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.356614 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.366373 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.378243 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.381764 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.389898 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.390118 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.390384 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.392239 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.408371 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn6sj\" (UniqueName: \"kubernetes.io/projected/01a76a37-081d-4322-afc4-a3ba75ebfabe-kube-api-access-wn6sj\") pod \"auto-csr-approver-29535740-7mkh8\" (UID: \"01a76a37-081d-4322-afc4-a3ba75ebfabe\") " pod="openshift-infra/auto-csr-approver-29535740-7mkh8" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 
22:20:00.425946 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn6sj\" (UniqueName: \"kubernetes.io/projected/01a76a37-081d-4322-afc4-a3ba75ebfabe-kube-api-access-wn6sj\") pod \"auto-csr-approver-29535740-7mkh8\" (UID: \"01a76a37-081d-4322-afc4-a3ba75ebfabe\") " pod="openshift-infra/auto-csr-approver-29535740-7mkh8" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.497997 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535740-7mkh8" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.510639 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2eae4569-bd3d-4d00-ae67-75446c319be0-run-httpd\") pod \"ceilometer-0\" (UID: \"2eae4569-bd3d-4d00-ae67-75446c319be0\") " pod="openstack/ceilometer-0" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.510762 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mwvn\" (UniqueName: \"kubernetes.io/projected/2eae4569-bd3d-4d00-ae67-75446c319be0-kube-api-access-8mwvn\") pod \"ceilometer-0\" (UID: \"2eae4569-bd3d-4d00-ae67-75446c319be0\") " pod="openstack/ceilometer-0" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.510805 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2eae4569-bd3d-4d00-ae67-75446c319be0\") " pod="openstack/ceilometer-0" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.510840 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2eae4569-bd3d-4d00-ae67-75446c319be0-log-httpd\") pod \"ceilometer-0\" (UID: 
\"2eae4569-bd3d-4d00-ae67-75446c319be0\") " pod="openstack/ceilometer-0" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.511196 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-config-data\") pod \"ceilometer-0\" (UID: \"2eae4569-bd3d-4d00-ae67-75446c319be0\") " pod="openstack/ceilometer-0" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.511312 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2eae4569-bd3d-4d00-ae67-75446c319be0\") " pod="openstack/ceilometer-0" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.511369 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2eae4569-bd3d-4d00-ae67-75446c319be0\") " pod="openstack/ceilometer-0" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.511613 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-scripts\") pod \"ceilometer-0\" (UID: \"2eae4569-bd3d-4d00-ae67-75446c319be0\") " pod="openstack/ceilometer-0" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.613453 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mwvn\" (UniqueName: \"kubernetes.io/projected/2eae4569-bd3d-4d00-ae67-75446c319be0-kube-api-access-8mwvn\") pod \"ceilometer-0\" (UID: \"2eae4569-bd3d-4d00-ae67-75446c319be0\") " pod="openstack/ceilometer-0" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.614029 
4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2eae4569-bd3d-4d00-ae67-75446c319be0\") " pod="openstack/ceilometer-0" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.614052 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2eae4569-bd3d-4d00-ae67-75446c319be0-log-httpd\") pod \"ceilometer-0\" (UID: \"2eae4569-bd3d-4d00-ae67-75446c319be0\") " pod="openstack/ceilometer-0" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.614508 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-config-data\") pod \"ceilometer-0\" (UID: \"2eae4569-bd3d-4d00-ae67-75446c319be0\") " pod="openstack/ceilometer-0" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.614631 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2eae4569-bd3d-4d00-ae67-75446c319be0\") " pod="openstack/ceilometer-0" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.614791 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2eae4569-bd3d-4d00-ae67-75446c319be0\") " pod="openstack/ceilometer-0" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.616650 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2eae4569-bd3d-4d00-ae67-75446c319be0-log-httpd\") pod \"ceilometer-0\" (UID: 
\"2eae4569-bd3d-4d00-ae67-75446c319be0\") " pod="openstack/ceilometer-0" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.619674 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-config-data\") pod \"ceilometer-0\" (UID: \"2eae4569-bd3d-4d00-ae67-75446c319be0\") " pod="openstack/ceilometer-0" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.619714 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-scripts\") pod \"ceilometer-0\" (UID: \"2eae4569-bd3d-4d00-ae67-75446c319be0\") " pod="openstack/ceilometer-0" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.620039 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2eae4569-bd3d-4d00-ae67-75446c319be0\") " pod="openstack/ceilometer-0" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.620292 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2eae4569-bd3d-4d00-ae67-75446c319be0-run-httpd\") pod \"ceilometer-0\" (UID: \"2eae4569-bd3d-4d00-ae67-75446c319be0\") " pod="openstack/ceilometer-0" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.620810 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2eae4569-bd3d-4d00-ae67-75446c319be0-run-httpd\") pod \"ceilometer-0\" (UID: \"2eae4569-bd3d-4d00-ae67-75446c319be0\") " pod="openstack/ceilometer-0" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.624926 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-scripts\") pod \"ceilometer-0\" (UID: \"2eae4569-bd3d-4d00-ae67-75446c319be0\") " pod="openstack/ceilometer-0" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.627626 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2eae4569-bd3d-4d00-ae67-75446c319be0\") " pod="openstack/ceilometer-0" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.631319 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2eae4569-bd3d-4d00-ae67-75446c319be0\") " pod="openstack/ceilometer-0" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.652295 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mwvn\" (UniqueName: \"kubernetes.io/projected/2eae4569-bd3d-4d00-ae67-75446c319be0-kube-api-access-8mwvn\") pod \"ceilometer-0\" (UID: \"2eae4569-bd3d-4d00-ae67-75446c319be0\") " pod="openstack/ceilometer-0" Feb 26 22:20:00 crc kubenswrapper[4910]: I0226 22:20:00.704227 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.045349 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.084738 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535740-7mkh8"] Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.115315 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.134106 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.154121 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.165969 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.180330 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.180439 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.189404 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.193004 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.207185 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 26 22:20:01 crc kubenswrapper[4910]: W0226 22:20:01.215793 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2eae4569_bd3d_4d00_ae67_75446c319be0.slice/crio-6bb04de811179fb64abd0ebdef1c7f400ade71de611638aeeb82d03b8f9e36c9 WatchSource:0}: Error finding container 6bb04de811179fb64abd0ebdef1c7f400ade71de611638aeeb82d03b8f9e36c9: Status 404 returned error can't find the container with id 6bb04de811179fb64abd0ebdef1c7f400ade71de611638aeeb82d03b8f9e36c9 Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.220151 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.337479 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfhcw\" (UniqueName: \"kubernetes.io/projected/26e818e1-e33c-4bfe-b133-36ca523fd741-kube-api-access-lfhcw\") pod \"nova-cell1-novncproxy-0\" (UID: \"26e818e1-e33c-4bfe-b133-36ca523fd741\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.337747 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/26e818e1-e33c-4bfe-b133-36ca523fd741-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"26e818e1-e33c-4bfe-b133-36ca523fd741\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.337796 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26e818e1-e33c-4bfe-b133-36ca523fd741-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"26e818e1-e33c-4bfe-b133-36ca523fd741\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.337867 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e818e1-e33c-4bfe-b133-36ca523fd741-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"26e818e1-e33c-4bfe-b133-36ca523fd741\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.337899 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e818e1-e33c-4bfe-b133-36ca523fd741-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"26e818e1-e33c-4bfe-b133-36ca523fd741\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.439256 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e818e1-e33c-4bfe-b133-36ca523fd741-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"26e818e1-e33c-4bfe-b133-36ca523fd741\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.439403 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfhcw\" (UniqueName: 
\"kubernetes.io/projected/26e818e1-e33c-4bfe-b133-36ca523fd741-kube-api-access-lfhcw\") pod \"nova-cell1-novncproxy-0\" (UID: \"26e818e1-e33c-4bfe-b133-36ca523fd741\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.439437 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e818e1-e33c-4bfe-b133-36ca523fd741-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"26e818e1-e33c-4bfe-b133-36ca523fd741\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.439476 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26e818e1-e33c-4bfe-b133-36ca523fd741-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"26e818e1-e33c-4bfe-b133-36ca523fd741\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.440260 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e818e1-e33c-4bfe-b133-36ca523fd741-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"26e818e1-e33c-4bfe-b133-36ca523fd741\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.444736 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26e818e1-e33c-4bfe-b133-36ca523fd741-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"26e818e1-e33c-4bfe-b133-36ca523fd741\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.444738 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e818e1-e33c-4bfe-b133-36ca523fd741-combined-ca-bundle\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"26e818e1-e33c-4bfe-b133-36ca523fd741\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.446545 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e818e1-e33c-4bfe-b133-36ca523fd741-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"26e818e1-e33c-4bfe-b133-36ca523fd741\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.446886 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e818e1-e33c-4bfe-b133-36ca523fd741-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"26e818e1-e33c-4bfe-b133-36ca523fd741\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.454472 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfhcw\" (UniqueName: \"kubernetes.io/projected/26e818e1-e33c-4bfe-b133-36ca523fd741-kube-api-access-lfhcw\") pod \"nova-cell1-novncproxy-0\" (UID: \"26e818e1-e33c-4bfe-b133-36ca523fd741\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.519876 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.916786 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57fd7488-e497-4abf-8875-bacde64f7cc3" path="/var/lib/kubelet/pods/57fd7488-e497-4abf-8875-bacde64f7cc3/volumes" Feb 26 22:20:01 crc kubenswrapper[4910]: I0226 22:20:01.918077 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fcb104e-6cec-4785-97a5-4afbbc1ff73b" path="/var/lib/kubelet/pods/6fcb104e-6cec-4785-97a5-4afbbc1ff73b/volumes" Feb 26 22:20:02 crc kubenswrapper[4910]: I0226 22:20:02.015601 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 22:20:02 crc kubenswrapper[4910]: W0226 22:20:02.017153 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26e818e1_e33c_4bfe_b133_36ca523fd741.slice/crio-97131c186c02ef2be0831fb79e67749d33b62f5196ee231524db30c5ad754cd5 WatchSource:0}: Error finding container 97131c186c02ef2be0831fb79e67749d33b62f5196ee231524db30c5ad754cd5: Status 404 returned error can't find the container with id 97131c186c02ef2be0831fb79e67749d33b62f5196ee231524db30c5ad754cd5 Feb 26 22:20:02 crc kubenswrapper[4910]: I0226 22:20:02.056519 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535740-7mkh8" event={"ID":"01a76a37-081d-4322-afc4-a3ba75ebfabe","Type":"ContainerStarted","Data":"61532a2fc620ac388f6eb538db125b4fd033168265b3edfc6a1e7df881551ef5"} Feb 26 22:20:02 crc kubenswrapper[4910]: I0226 22:20:02.059255 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2eae4569-bd3d-4d00-ae67-75446c319be0","Type":"ContainerStarted","Data":"6125af00ed7afae6b9dd6f33e648f2663a127718e201a6959793b0a218770ef2"} Feb 26 22:20:02 crc kubenswrapper[4910]: I0226 22:20:02.059286 4910 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"2eae4569-bd3d-4d00-ae67-75446c319be0","Type":"ContainerStarted","Data":"6bb04de811179fb64abd0ebdef1c7f400ade71de611638aeeb82d03b8f9e36c9"} Feb 26 22:20:02 crc kubenswrapper[4910]: I0226 22:20:02.062117 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"26e818e1-e33c-4bfe-b133-36ca523fd741","Type":"ContainerStarted","Data":"97131c186c02ef2be0831fb79e67749d33b62f5196ee231524db30c5ad754cd5"} Feb 26 22:20:02 crc kubenswrapper[4910]: I0226 22:20:02.233749 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 26 22:20:02 crc kubenswrapper[4910]: I0226 22:20:02.234318 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 26 22:20:02 crc kubenswrapper[4910]: I0226 22:20:02.234850 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 26 22:20:02 crc kubenswrapper[4910]: I0226 22:20:02.247418 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 26 22:20:03 crc kubenswrapper[4910]: I0226 22:20:03.070465 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"26e818e1-e33c-4bfe-b133-36ca523fd741","Type":"ContainerStarted","Data":"dab8faea2c70684576e164ce8cb8b0570d27cf79d39d875ab384e827014adef6"} Feb 26 22:20:03 crc kubenswrapper[4910]: I0226 22:20:03.073784 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535740-7mkh8" event={"ID":"01a76a37-081d-4322-afc4-a3ba75ebfabe","Type":"ContainerStarted","Data":"5e23c150362cd9486c9c8fcf05005bcc3c8f2600335c5f082bd1598985a6cfac"} Feb 26 22:20:03 crc kubenswrapper[4910]: I0226 22:20:03.077044 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2eae4569-bd3d-4d00-ae67-75446c319be0","Type":"ContainerStarted","Data":"ba8a172de67f24cc1f883c4e324da1e6540c184306a7784fbfc7d3d72630d21d"} Feb 26 22:20:03 crc kubenswrapper[4910]: I0226 22:20:03.077437 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 26 22:20:03 crc kubenswrapper[4910]: I0226 22:20:03.081247 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 26 22:20:03 crc kubenswrapper[4910]: I0226 22:20:03.105021 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.105000779 podStartE2EDuration="2.105000779s" podCreationTimestamp="2026-02-26 22:20:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:20:03.101043331 +0000 UTC m=+1488.180533872" watchObservedRunningTime="2026-02-26 22:20:03.105000779 +0000 UTC m=+1488.184491330" Feb 26 22:20:03 crc kubenswrapper[4910]: I0226 22:20:03.154966 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535740-7mkh8" podStartSLOduration=2.013525935 podStartE2EDuration="3.154943531s" podCreationTimestamp="2026-02-26 22:20:00 +0000 UTC" firstStartedPulling="2026-02-26 22:20:01.084341613 +0000 UTC m=+1486.163832154" lastFinishedPulling="2026-02-26 22:20:02.225759209 +0000 UTC m=+1487.305249750" observedRunningTime="2026-02-26 22:20:03.143728205 +0000 UTC m=+1488.223218786" watchObservedRunningTime="2026-02-26 22:20:03.154943531 +0000 UTC m=+1488.234434072" Feb 26 22:20:03 crc kubenswrapper[4910]: I0226 22:20:03.315186 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54dd998c-krxqd"] Feb 26 22:20:03 crc kubenswrapper[4910]: I0226 22:20:03.317143 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54dd998c-krxqd" Feb 26 22:20:03 crc kubenswrapper[4910]: I0226 22:20:03.332122 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-krxqd"] Feb 26 22:20:03 crc kubenswrapper[4910]: I0226 22:20:03.383251 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-ovsdbserver-sb\") pod \"dnsmasq-dns-54dd998c-krxqd\" (UID: \"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\") " pod="openstack/dnsmasq-dns-54dd998c-krxqd" Feb 26 22:20:03 crc kubenswrapper[4910]: I0226 22:20:03.383297 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-dns-svc\") pod \"dnsmasq-dns-54dd998c-krxqd\" (UID: \"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\") " pod="openstack/dnsmasq-dns-54dd998c-krxqd" Feb 26 22:20:03 crc kubenswrapper[4910]: I0226 22:20:03.383318 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-config\") pod \"dnsmasq-dns-54dd998c-krxqd\" (UID: \"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\") " pod="openstack/dnsmasq-dns-54dd998c-krxqd" Feb 26 22:20:03 crc kubenswrapper[4910]: I0226 22:20:03.383791 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsmmh\" (UniqueName: \"kubernetes.io/projected/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-kube-api-access-vsmmh\") pod \"dnsmasq-dns-54dd998c-krxqd\" (UID: \"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\") " pod="openstack/dnsmasq-dns-54dd998c-krxqd" Feb 26 22:20:03 crc kubenswrapper[4910]: I0226 22:20:03.384020 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-ovsdbserver-nb\") pod \"dnsmasq-dns-54dd998c-krxqd\" (UID: \"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\") " pod="openstack/dnsmasq-dns-54dd998c-krxqd" Feb 26 22:20:03 crc kubenswrapper[4910]: I0226 22:20:03.384047 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-dns-swift-storage-0\") pod \"dnsmasq-dns-54dd998c-krxqd\" (UID: \"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\") " pod="openstack/dnsmasq-dns-54dd998c-krxqd" Feb 26 22:20:03 crc kubenswrapper[4910]: I0226 22:20:03.486228 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-ovsdbserver-sb\") pod \"dnsmasq-dns-54dd998c-krxqd\" (UID: \"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\") " pod="openstack/dnsmasq-dns-54dd998c-krxqd" Feb 26 22:20:03 crc kubenswrapper[4910]: I0226 22:20:03.486290 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-dns-svc\") pod \"dnsmasq-dns-54dd998c-krxqd\" (UID: \"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\") " pod="openstack/dnsmasq-dns-54dd998c-krxqd" Feb 26 22:20:03 crc kubenswrapper[4910]: I0226 22:20:03.486326 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-config\") pod \"dnsmasq-dns-54dd998c-krxqd\" (UID: \"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\") " pod="openstack/dnsmasq-dns-54dd998c-krxqd" Feb 26 22:20:03 crc kubenswrapper[4910]: I0226 22:20:03.486354 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsmmh\" (UniqueName: 
\"kubernetes.io/projected/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-kube-api-access-vsmmh\") pod \"dnsmasq-dns-54dd998c-krxqd\" (UID: \"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\") " pod="openstack/dnsmasq-dns-54dd998c-krxqd" Feb 26 22:20:03 crc kubenswrapper[4910]: I0226 22:20:03.486474 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-ovsdbserver-nb\") pod \"dnsmasq-dns-54dd998c-krxqd\" (UID: \"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\") " pod="openstack/dnsmasq-dns-54dd998c-krxqd" Feb 26 22:20:03 crc kubenswrapper[4910]: I0226 22:20:03.486500 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-dns-swift-storage-0\") pod \"dnsmasq-dns-54dd998c-krxqd\" (UID: \"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\") " pod="openstack/dnsmasq-dns-54dd998c-krxqd" Feb 26 22:20:03 crc kubenswrapper[4910]: I0226 22:20:03.487198 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-ovsdbserver-sb\") pod \"dnsmasq-dns-54dd998c-krxqd\" (UID: \"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\") " pod="openstack/dnsmasq-dns-54dd998c-krxqd" Feb 26 22:20:03 crc kubenswrapper[4910]: I0226 22:20:03.487260 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-dns-swift-storage-0\") pod \"dnsmasq-dns-54dd998c-krxqd\" (UID: \"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\") " pod="openstack/dnsmasq-dns-54dd998c-krxqd" Feb 26 22:20:03 crc kubenswrapper[4910]: I0226 22:20:03.487278 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-dns-svc\") pod 
\"dnsmasq-dns-54dd998c-krxqd\" (UID: \"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\") " pod="openstack/dnsmasq-dns-54dd998c-krxqd" Feb 26 22:20:03 crc kubenswrapper[4910]: I0226 22:20:03.487737 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-config\") pod \"dnsmasq-dns-54dd998c-krxqd\" (UID: \"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\") " pod="openstack/dnsmasq-dns-54dd998c-krxqd" Feb 26 22:20:03 crc kubenswrapper[4910]: I0226 22:20:03.487746 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-ovsdbserver-nb\") pod \"dnsmasq-dns-54dd998c-krxqd\" (UID: \"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\") " pod="openstack/dnsmasq-dns-54dd998c-krxqd" Feb 26 22:20:03 crc kubenswrapper[4910]: I0226 22:20:03.516004 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsmmh\" (UniqueName: \"kubernetes.io/projected/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-kube-api-access-vsmmh\") pod \"dnsmasq-dns-54dd998c-krxqd\" (UID: \"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\") " pod="openstack/dnsmasq-dns-54dd998c-krxqd" Feb 26 22:20:03 crc kubenswrapper[4910]: I0226 22:20:03.648071 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54dd998c-krxqd" Feb 26 22:20:04 crc kubenswrapper[4910]: I0226 22:20:04.087319 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2eae4569-bd3d-4d00-ae67-75446c319be0","Type":"ContainerStarted","Data":"9639da002add8747bd2e05c4fd6da993a6f25fa9fd3595ef2cd17d6eabc86af0"} Feb 26 22:20:04 crc kubenswrapper[4910]: I0226 22:20:04.089096 4910 generic.go:334] "Generic (PLEG): container finished" podID="01a76a37-081d-4322-afc4-a3ba75ebfabe" containerID="5e23c150362cd9486c9c8fcf05005bcc3c8f2600335c5f082bd1598985a6cfac" exitCode=0 Feb 26 22:20:04 crc kubenswrapper[4910]: I0226 22:20:04.089201 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535740-7mkh8" event={"ID":"01a76a37-081d-4322-afc4-a3ba75ebfabe","Type":"ContainerDied","Data":"5e23c150362cd9486c9c8fcf05005bcc3c8f2600335c5f082bd1598985a6cfac"} Feb 26 22:20:04 crc kubenswrapper[4910]: W0226 22:20:04.209823 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0da5d2b0_ad22_4e31_8b86_50314d9a58e5.slice/crio-05b7d915a1f192c47e9764736356ca699d837fa8eb5496b7094bf07bcb482bfb WatchSource:0}: Error finding container 05b7d915a1f192c47e9764736356ca699d837fa8eb5496b7094bf07bcb482bfb: Status 404 returned error can't find the container with id 05b7d915a1f192c47e9764736356ca699d837fa8eb5496b7094bf07bcb482bfb Feb 26 22:20:04 crc kubenswrapper[4910]: I0226 22:20:04.221004 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-krxqd"] Feb 26 22:20:04 crc kubenswrapper[4910]: I0226 22:20:04.351884 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 26 22:20:05 crc kubenswrapper[4910]: I0226 22:20:05.099584 4910 generic.go:334] "Generic (PLEG): container finished" podID="0da5d2b0-ad22-4e31-8b86-50314d9a58e5" 
containerID="3c7cb016847185639a54181751c9db96c494b458d740cc801895794da62c7eb4" exitCode=0 Feb 26 22:20:05 crc kubenswrapper[4910]: I0226 22:20:05.099663 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-krxqd" event={"ID":"0da5d2b0-ad22-4e31-8b86-50314d9a58e5","Type":"ContainerDied","Data":"3c7cb016847185639a54181751c9db96c494b458d740cc801895794da62c7eb4"} Feb 26 22:20:05 crc kubenswrapper[4910]: I0226 22:20:05.100300 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-krxqd" event={"ID":"0da5d2b0-ad22-4e31-8b86-50314d9a58e5","Type":"ContainerStarted","Data":"05b7d915a1f192c47e9764736356ca699d837fa8eb5496b7094bf07bcb482bfb"} Feb 26 22:20:05 crc kubenswrapper[4910]: I0226 22:20:05.582865 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535740-7mkh8" Feb 26 22:20:05 crc kubenswrapper[4910]: I0226 22:20:05.645302 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn6sj\" (UniqueName: \"kubernetes.io/projected/01a76a37-081d-4322-afc4-a3ba75ebfabe-kube-api-access-wn6sj\") pod \"01a76a37-081d-4322-afc4-a3ba75ebfabe\" (UID: \"01a76a37-081d-4322-afc4-a3ba75ebfabe\") " Feb 26 22:20:05 crc kubenswrapper[4910]: I0226 22:20:05.658326 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01a76a37-081d-4322-afc4-a3ba75ebfabe-kube-api-access-wn6sj" (OuterVolumeSpecName: "kube-api-access-wn6sj") pod "01a76a37-081d-4322-afc4-a3ba75ebfabe" (UID: "01a76a37-081d-4322-afc4-a3ba75ebfabe"). InnerVolumeSpecName "kube-api-access-wn6sj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:20:05 crc kubenswrapper[4910]: I0226 22:20:05.747733 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn6sj\" (UniqueName: \"kubernetes.io/projected/01a76a37-081d-4322-afc4-a3ba75ebfabe-kube-api-access-wn6sj\") on node \"crc\" DevicePath \"\"" Feb 26 22:20:05 crc kubenswrapper[4910]: I0226 22:20:05.775713 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 22:20:06 crc kubenswrapper[4910]: I0226 22:20:06.113128 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-krxqd" event={"ID":"0da5d2b0-ad22-4e31-8b86-50314d9a58e5","Type":"ContainerStarted","Data":"7d6cf3902342d809dd31797bac884bf38c3e64e5ea5d2a63f804de4388e997f9"} Feb 26 22:20:06 crc kubenswrapper[4910]: I0226 22:20:06.113232 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54dd998c-krxqd" Feb 26 22:20:06 crc kubenswrapper[4910]: I0226 22:20:06.115464 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535740-7mkh8" event={"ID":"01a76a37-081d-4322-afc4-a3ba75ebfabe","Type":"ContainerDied","Data":"61532a2fc620ac388f6eb538db125b4fd033168265b3edfc6a1e7df881551ef5"} Feb 26 22:20:06 crc kubenswrapper[4910]: I0226 22:20:06.115753 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61532a2fc620ac388f6eb538db125b4fd033168265b3edfc6a1e7df881551ef5" Feb 26 22:20:06 crc kubenswrapper[4910]: I0226 22:20:06.115491 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535740-7mkh8" Feb 26 22:20:06 crc kubenswrapper[4910]: I0226 22:20:06.118432 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2eae4569-bd3d-4d00-ae67-75446c319be0","Type":"ContainerStarted","Data":"6f9f1a379d0ce61bd96d4fd494ab2a748daa494c28114dde6b3be8cf54550758"} Feb 26 22:20:06 crc kubenswrapper[4910]: I0226 22:20:06.118560 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="34522f42-d10f-4ccb-b40a-ba21f7dcbe56" containerName="nova-api-log" containerID="cri-o://ea8134f6a99d5e39df1150c2f59047e80775fc042f92f6605601a3560faf0667" gracePeriod=30 Feb 26 22:20:06 crc kubenswrapper[4910]: I0226 22:20:06.118606 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="34522f42-d10f-4ccb-b40a-ba21f7dcbe56" containerName="nova-api-api" containerID="cri-o://69ad1fdc354bb92780a94dc0299288bc83a02ad597f17e4ce2bbb925a1357c08" gracePeriod=30 Feb 26 22:20:06 crc kubenswrapper[4910]: I0226 22:20:06.142707 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54dd998c-krxqd" podStartSLOduration=3.142689382 podStartE2EDuration="3.142689382s" podCreationTimestamp="2026-02-26 22:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:20:06.136241496 +0000 UTC m=+1491.215732037" watchObservedRunningTime="2026-02-26 22:20:06.142689382 +0000 UTC m=+1491.222179923" Feb 26 22:20:06 crc kubenswrapper[4910]: I0226 22:20:06.161199 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.462353229 podStartE2EDuration="6.161183666s" podCreationTimestamp="2026-02-26 22:20:00 +0000 UTC" firstStartedPulling="2026-02-26 22:20:01.218091338 +0000 UTC m=+1486.297581879" 
lastFinishedPulling="2026-02-26 22:20:04.916921775 +0000 UTC m=+1489.996412316" observedRunningTime="2026-02-26 22:20:06.15950291 +0000 UTC m=+1491.238993451" watchObservedRunningTime="2026-02-26 22:20:06.161183666 +0000 UTC m=+1491.240674207" Feb 26 22:20:06 crc kubenswrapper[4910]: I0226 22:20:06.521099 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:20:06 crc kubenswrapper[4910]: I0226 22:20:06.668878 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535734-smxt4"] Feb 26 22:20:06 crc kubenswrapper[4910]: I0226 22:20:06.681290 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535734-smxt4"] Feb 26 22:20:07 crc kubenswrapper[4910]: I0226 22:20:07.130293 4910 generic.go:334] "Generic (PLEG): container finished" podID="34522f42-d10f-4ccb-b40a-ba21f7dcbe56" containerID="ea8134f6a99d5e39df1150c2f59047e80775fc042f92f6605601a3560faf0667" exitCode=143 Feb 26 22:20:07 crc kubenswrapper[4910]: I0226 22:20:07.130368 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"34522f42-d10f-4ccb-b40a-ba21f7dcbe56","Type":"ContainerDied","Data":"ea8134f6a99d5e39df1150c2f59047e80775fc042f92f6605601a3560faf0667"} Feb 26 22:20:07 crc kubenswrapper[4910]: I0226 22:20:07.130838 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 22:20:07 crc kubenswrapper[4910]: I0226 22:20:07.710103 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:20:07 crc kubenswrapper[4910]: I0226 22:20:07.917337 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e52b70ce-11c2-4ff2-a4da-18f7f20d6712" path="/var/lib/kubelet/pods/e52b70ce-11c2-4ff2-a4da-18f7f20d6712/volumes" Feb 26 22:20:09 crc kubenswrapper[4910]: I0226 22:20:09.161917 4910 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/ceilometer-0" podUID="2eae4569-bd3d-4d00-ae67-75446c319be0" containerName="sg-core" containerID="cri-o://9639da002add8747bd2e05c4fd6da993a6f25fa9fd3595ef2cd17d6eabc86af0" gracePeriod=30 Feb 26 22:20:09 crc kubenswrapper[4910]: I0226 22:20:09.161962 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2eae4569-bd3d-4d00-ae67-75446c319be0" containerName="ceilometer-notification-agent" containerID="cri-o://ba8a172de67f24cc1f883c4e324da1e6540c184306a7784fbfc7d3d72630d21d" gracePeriod=30 Feb 26 22:20:09 crc kubenswrapper[4910]: I0226 22:20:09.162066 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2eae4569-bd3d-4d00-ae67-75446c319be0" containerName="proxy-httpd" containerID="cri-o://6f9f1a379d0ce61bd96d4fd494ab2a748daa494c28114dde6b3be8cf54550758" gracePeriod=30 Feb 26 22:20:09 crc kubenswrapper[4910]: I0226 22:20:09.162141 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2eae4569-bd3d-4d00-ae67-75446c319be0" containerName="ceilometer-central-agent" containerID="cri-o://6125af00ed7afae6b9dd6f33e648f2663a127718e201a6959793b0a218770ef2" gracePeriod=30 Feb 26 22:20:09 crc kubenswrapper[4910]: E0226 22:20:09.672642 4910 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2eae4569_bd3d_4d00_ae67_75446c319be0.slice/crio-conmon-6f9f1a379d0ce61bd96d4fd494ab2a748daa494c28114dde6b3be8cf54550758.scope\": RecentStats: unable to find data in memory cache]" Feb 26 22:20:09 crc kubenswrapper[4910]: I0226 22:20:09.821597 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 26 22:20:09 crc kubenswrapper[4910]: I0226 22:20:09.968949 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34522f42-d10f-4ccb-b40a-ba21f7dcbe56-config-data\") pod \"34522f42-d10f-4ccb-b40a-ba21f7dcbe56\" (UID: \"34522f42-d10f-4ccb-b40a-ba21f7dcbe56\") " Feb 26 22:20:09 crc kubenswrapper[4910]: I0226 22:20:09.969338 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vv4hn\" (UniqueName: \"kubernetes.io/projected/34522f42-d10f-4ccb-b40a-ba21f7dcbe56-kube-api-access-vv4hn\") pod \"34522f42-d10f-4ccb-b40a-ba21f7dcbe56\" (UID: \"34522f42-d10f-4ccb-b40a-ba21f7dcbe56\") " Feb 26 22:20:09 crc kubenswrapper[4910]: I0226 22:20:09.969464 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34522f42-d10f-4ccb-b40a-ba21f7dcbe56-logs\") pod \"34522f42-d10f-4ccb-b40a-ba21f7dcbe56\" (UID: \"34522f42-d10f-4ccb-b40a-ba21f7dcbe56\") " Feb 26 22:20:09 crc kubenswrapper[4910]: I0226 22:20:09.969590 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34522f42-d10f-4ccb-b40a-ba21f7dcbe56-combined-ca-bundle\") pod \"34522f42-d10f-4ccb-b40a-ba21f7dcbe56\" (UID: \"34522f42-d10f-4ccb-b40a-ba21f7dcbe56\") " Feb 26 22:20:09 crc kubenswrapper[4910]: I0226 22:20:09.972883 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34522f42-d10f-4ccb-b40a-ba21f7dcbe56-logs" (OuterVolumeSpecName: "logs") pod "34522f42-d10f-4ccb-b40a-ba21f7dcbe56" (UID: "34522f42-d10f-4ccb-b40a-ba21f7dcbe56"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:20:09 crc kubenswrapper[4910]: I0226 22:20:09.986394 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34522f42-d10f-4ccb-b40a-ba21f7dcbe56-kube-api-access-vv4hn" (OuterVolumeSpecName: "kube-api-access-vv4hn") pod "34522f42-d10f-4ccb-b40a-ba21f7dcbe56" (UID: "34522f42-d10f-4ccb-b40a-ba21f7dcbe56"). InnerVolumeSpecName "kube-api-access-vv4hn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.000086 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34522f42-d10f-4ccb-b40a-ba21f7dcbe56-config-data" (OuterVolumeSpecName: "config-data") pod "34522f42-d10f-4ccb-b40a-ba21f7dcbe56" (UID: "34522f42-d10f-4ccb-b40a-ba21f7dcbe56"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.018000 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34522f42-d10f-4ccb-b40a-ba21f7dcbe56-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34522f42-d10f-4ccb-b40a-ba21f7dcbe56" (UID: "34522f42-d10f-4ccb-b40a-ba21f7dcbe56"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.071983 4910 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34522f42-d10f-4ccb-b40a-ba21f7dcbe56-logs\") on node \"crc\" DevicePath \"\"" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.072048 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34522f42-d10f-4ccb-b40a-ba21f7dcbe56-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.072061 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34522f42-d10f-4ccb-b40a-ba21f7dcbe56-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.072072 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vv4hn\" (UniqueName: \"kubernetes.io/projected/34522f42-d10f-4ccb-b40a-ba21f7dcbe56-kube-api-access-vv4hn\") on node \"crc\" DevicePath \"\"" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.172208 4910 generic.go:334] "Generic (PLEG): container finished" podID="34522f42-d10f-4ccb-b40a-ba21f7dcbe56" containerID="69ad1fdc354bb92780a94dc0299288bc83a02ad597f17e4ce2bbb925a1357c08" exitCode=0 Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.172264 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"34522f42-d10f-4ccb-b40a-ba21f7dcbe56","Type":"ContainerDied","Data":"69ad1fdc354bb92780a94dc0299288bc83a02ad597f17e4ce2bbb925a1357c08"} Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.172287 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.172317 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"34522f42-d10f-4ccb-b40a-ba21f7dcbe56","Type":"ContainerDied","Data":"610d9cd13b22c3c33cfad7a465855dda163c36603eedffdbcc854c16af01d411"} Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.172334 4910 scope.go:117] "RemoveContainer" containerID="69ad1fdc354bb92780a94dc0299288bc83a02ad597f17e4ce2bbb925a1357c08" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.174823 4910 generic.go:334] "Generic (PLEG): container finished" podID="2eae4569-bd3d-4d00-ae67-75446c319be0" containerID="6f9f1a379d0ce61bd96d4fd494ab2a748daa494c28114dde6b3be8cf54550758" exitCode=0 Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.174849 4910 generic.go:334] "Generic (PLEG): container finished" podID="2eae4569-bd3d-4d00-ae67-75446c319be0" containerID="9639da002add8747bd2e05c4fd6da993a6f25fa9fd3595ef2cd17d6eabc86af0" exitCode=2 Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.174860 4910 generic.go:334] "Generic (PLEG): container finished" podID="2eae4569-bd3d-4d00-ae67-75446c319be0" containerID="ba8a172de67f24cc1f883c4e324da1e6540c184306a7784fbfc7d3d72630d21d" exitCode=0 Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.174853 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2eae4569-bd3d-4d00-ae67-75446c319be0","Type":"ContainerDied","Data":"6f9f1a379d0ce61bd96d4fd494ab2a748daa494c28114dde6b3be8cf54550758"} Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.174893 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2eae4569-bd3d-4d00-ae67-75446c319be0","Type":"ContainerDied","Data":"9639da002add8747bd2e05c4fd6da993a6f25fa9fd3595ef2cd17d6eabc86af0"} Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.174937 4910 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"2eae4569-bd3d-4d00-ae67-75446c319be0","Type":"ContainerDied","Data":"ba8a172de67f24cc1f883c4e324da1e6540c184306a7784fbfc7d3d72630d21d"} Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.210478 4910 scope.go:117] "RemoveContainer" containerID="ea8134f6a99d5e39df1150c2f59047e80775fc042f92f6605601a3560faf0667" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.214793 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.234238 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.251593 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 26 22:20:10 crc kubenswrapper[4910]: E0226 22:20:10.252075 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34522f42-d10f-4ccb-b40a-ba21f7dcbe56" containerName="nova-api-api" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.252093 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="34522f42-d10f-4ccb-b40a-ba21f7dcbe56" containerName="nova-api-api" Feb 26 22:20:10 crc kubenswrapper[4910]: E0226 22:20:10.252122 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34522f42-d10f-4ccb-b40a-ba21f7dcbe56" containerName="nova-api-log" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.252131 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="34522f42-d10f-4ccb-b40a-ba21f7dcbe56" containerName="nova-api-log" Feb 26 22:20:10 crc kubenswrapper[4910]: E0226 22:20:10.252152 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a76a37-081d-4322-afc4-a3ba75ebfabe" containerName="oc" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.252172 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a76a37-081d-4322-afc4-a3ba75ebfabe" containerName="oc" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 
22:20:10.252365 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="01a76a37-081d-4322-afc4-a3ba75ebfabe" containerName="oc" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.252376 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="34522f42-d10f-4ccb-b40a-ba21f7dcbe56" containerName="nova-api-log" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.252389 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="34522f42-d10f-4ccb-b40a-ba21f7dcbe56" containerName="nova-api-api" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.253434 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.254896 4910 scope.go:117] "RemoveContainer" containerID="69ad1fdc354bb92780a94dc0299288bc83a02ad597f17e4ce2bbb925a1357c08" Feb 26 22:20:10 crc kubenswrapper[4910]: E0226 22:20:10.258478 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69ad1fdc354bb92780a94dc0299288bc83a02ad597f17e4ce2bbb925a1357c08\": container with ID starting with 69ad1fdc354bb92780a94dc0299288bc83a02ad597f17e4ce2bbb925a1357c08 not found: ID does not exist" containerID="69ad1fdc354bb92780a94dc0299288bc83a02ad597f17e4ce2bbb925a1357c08" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.258598 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69ad1fdc354bb92780a94dc0299288bc83a02ad597f17e4ce2bbb925a1357c08"} err="failed to get container status \"69ad1fdc354bb92780a94dc0299288bc83a02ad597f17e4ce2bbb925a1357c08\": rpc error: code = NotFound desc = could not find container \"69ad1fdc354bb92780a94dc0299288bc83a02ad597f17e4ce2bbb925a1357c08\": container with ID starting with 69ad1fdc354bb92780a94dc0299288bc83a02ad597f17e4ce2bbb925a1357c08 not found: ID does not exist" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 
22:20:10.258678 4910 scope.go:117] "RemoveContainer" containerID="ea8134f6a99d5e39df1150c2f59047e80775fc042f92f6605601a3560faf0667" Feb 26 22:20:10 crc kubenswrapper[4910]: E0226 22:20:10.259039 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea8134f6a99d5e39df1150c2f59047e80775fc042f92f6605601a3560faf0667\": container with ID starting with ea8134f6a99d5e39df1150c2f59047e80775fc042f92f6605601a3560faf0667 not found: ID does not exist" containerID="ea8134f6a99d5e39df1150c2f59047e80775fc042f92f6605601a3560faf0667" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.259117 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea8134f6a99d5e39df1150c2f59047e80775fc042f92f6605601a3560faf0667"} err="failed to get container status \"ea8134f6a99d5e39df1150c2f59047e80775fc042f92f6605601a3560faf0667\": rpc error: code = NotFound desc = could not find container \"ea8134f6a99d5e39df1150c2f59047e80775fc042f92f6605601a3560faf0667\": container with ID starting with ea8134f6a99d5e39df1150c2f59047e80775fc042f92f6605601a3560faf0667 not found: ID does not exist" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.261199 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.266078 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.266295 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.266433 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.376972 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d37d540-f061-4881-9a00-3b0e0cfa1d47-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3d37d540-f061-4881-9a00-3b0e0cfa1d47\") " pod="openstack/nova-api-0" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.377285 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d37d540-f061-4881-9a00-3b0e0cfa1d47-config-data\") pod \"nova-api-0\" (UID: \"3d37d540-f061-4881-9a00-3b0e0cfa1d47\") " pod="openstack/nova-api-0" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.377346 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d37d540-f061-4881-9a00-3b0e0cfa1d47-logs\") pod \"nova-api-0\" (UID: \"3d37d540-f061-4881-9a00-3b0e0cfa1d47\") " pod="openstack/nova-api-0" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.377404 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxqc9\" (UniqueName: \"kubernetes.io/projected/3d37d540-f061-4881-9a00-3b0e0cfa1d47-kube-api-access-zxqc9\") pod \"nova-api-0\" (UID: \"3d37d540-f061-4881-9a00-3b0e0cfa1d47\") " pod="openstack/nova-api-0" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.377500 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d37d540-f061-4881-9a00-3b0e0cfa1d47-public-tls-certs\") pod \"nova-api-0\" (UID: \"3d37d540-f061-4881-9a00-3b0e0cfa1d47\") " pod="openstack/nova-api-0" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.377527 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d37d540-f061-4881-9a00-3b0e0cfa1d47-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"3d37d540-f061-4881-9a00-3b0e0cfa1d47\") " pod="openstack/nova-api-0" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.480612 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d37d540-f061-4881-9a00-3b0e0cfa1d47-public-tls-certs\") pod \"nova-api-0\" (UID: \"3d37d540-f061-4881-9a00-3b0e0cfa1d47\") " pod="openstack/nova-api-0" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.480676 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d37d540-f061-4881-9a00-3b0e0cfa1d47-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3d37d540-f061-4881-9a00-3b0e0cfa1d47\") " pod="openstack/nova-api-0" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.480807 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d37d540-f061-4881-9a00-3b0e0cfa1d47-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3d37d540-f061-4881-9a00-3b0e0cfa1d47\") " pod="openstack/nova-api-0" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.480847 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d37d540-f061-4881-9a00-3b0e0cfa1d47-config-data\") pod \"nova-api-0\" (UID: \"3d37d540-f061-4881-9a00-3b0e0cfa1d47\") " pod="openstack/nova-api-0" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.480920 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d37d540-f061-4881-9a00-3b0e0cfa1d47-logs\") pod \"nova-api-0\" (UID: \"3d37d540-f061-4881-9a00-3b0e0cfa1d47\") " pod="openstack/nova-api-0" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.480967 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxqc9\" 
(UniqueName: \"kubernetes.io/projected/3d37d540-f061-4881-9a00-3b0e0cfa1d47-kube-api-access-zxqc9\") pod \"nova-api-0\" (UID: \"3d37d540-f061-4881-9a00-3b0e0cfa1d47\") " pod="openstack/nova-api-0" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.481514 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d37d540-f061-4881-9a00-3b0e0cfa1d47-logs\") pod \"nova-api-0\" (UID: \"3d37d540-f061-4881-9a00-3b0e0cfa1d47\") " pod="openstack/nova-api-0" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.487980 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d37d540-f061-4881-9a00-3b0e0cfa1d47-public-tls-certs\") pod \"nova-api-0\" (UID: \"3d37d540-f061-4881-9a00-3b0e0cfa1d47\") " pod="openstack/nova-api-0" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.491533 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d37d540-f061-4881-9a00-3b0e0cfa1d47-config-data\") pod \"nova-api-0\" (UID: \"3d37d540-f061-4881-9a00-3b0e0cfa1d47\") " pod="openstack/nova-api-0" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.491998 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d37d540-f061-4881-9a00-3b0e0cfa1d47-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3d37d540-f061-4881-9a00-3b0e0cfa1d47\") " pod="openstack/nova-api-0" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.493879 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d37d540-f061-4881-9a00-3b0e0cfa1d47-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3d37d540-f061-4881-9a00-3b0e0cfa1d47\") " pod="openstack/nova-api-0" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.507127 4910 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zxqc9\" (UniqueName: \"kubernetes.io/projected/3d37d540-f061-4881-9a00-3b0e0cfa1d47-kube-api-access-zxqc9\") pod \"nova-api-0\" (UID: \"3d37d540-f061-4881-9a00-3b0e0cfa1d47\") " pod="openstack/nova-api-0" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.634963 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 22:20:10 crc kubenswrapper[4910]: I0226 22:20:10.893659 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.007574 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-config-data\") pod \"2eae4569-bd3d-4d00-ae67-75446c319be0\" (UID: \"2eae4569-bd3d-4d00-ae67-75446c319be0\") " Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.007650 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-ceilometer-tls-certs\") pod \"2eae4569-bd3d-4d00-ae67-75446c319be0\" (UID: \"2eae4569-bd3d-4d00-ae67-75446c319be0\") " Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.007697 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2eae4569-bd3d-4d00-ae67-75446c319be0-run-httpd\") pod \"2eae4569-bd3d-4d00-ae67-75446c319be0\" (UID: \"2eae4569-bd3d-4d00-ae67-75446c319be0\") " Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.007820 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mwvn\" (UniqueName: \"kubernetes.io/projected/2eae4569-bd3d-4d00-ae67-75446c319be0-kube-api-access-8mwvn\") pod \"2eae4569-bd3d-4d00-ae67-75446c319be0\" (UID: 
\"2eae4569-bd3d-4d00-ae67-75446c319be0\") " Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.007960 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-sg-core-conf-yaml\") pod \"2eae4569-bd3d-4d00-ae67-75446c319be0\" (UID: \"2eae4569-bd3d-4d00-ae67-75446c319be0\") " Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.008022 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2eae4569-bd3d-4d00-ae67-75446c319be0-log-httpd\") pod \"2eae4569-bd3d-4d00-ae67-75446c319be0\" (UID: \"2eae4569-bd3d-4d00-ae67-75446c319be0\") " Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.008053 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-combined-ca-bundle\") pod \"2eae4569-bd3d-4d00-ae67-75446c319be0\" (UID: \"2eae4569-bd3d-4d00-ae67-75446c319be0\") " Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.008087 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-scripts\") pod \"2eae4569-bd3d-4d00-ae67-75446c319be0\" (UID: \"2eae4569-bd3d-4d00-ae67-75446c319be0\") " Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.008390 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2eae4569-bd3d-4d00-ae67-75446c319be0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2eae4569-bd3d-4d00-ae67-75446c319be0" (UID: "2eae4569-bd3d-4d00-ae67-75446c319be0"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.008480 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2eae4569-bd3d-4d00-ae67-75446c319be0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2eae4569-bd3d-4d00-ae67-75446c319be0" (UID: "2eae4569-bd3d-4d00-ae67-75446c319be0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.009011 4910 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2eae4569-bd3d-4d00-ae67-75446c319be0-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.009035 4910 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2eae4569-bd3d-4d00-ae67-75446c319be0-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.011924 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eae4569-bd3d-4d00-ae67-75446c319be0-kube-api-access-8mwvn" (OuterVolumeSpecName: "kube-api-access-8mwvn") pod "2eae4569-bd3d-4d00-ae67-75446c319be0" (UID: "2eae4569-bd3d-4d00-ae67-75446c319be0"). InnerVolumeSpecName "kube-api-access-8mwvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.012440 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-scripts" (OuterVolumeSpecName: "scripts") pod "2eae4569-bd3d-4d00-ae67-75446c319be0" (UID: "2eae4569-bd3d-4d00-ae67-75446c319be0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.037355 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2eae4569-bd3d-4d00-ae67-75446c319be0" (UID: "2eae4569-bd3d-4d00-ae67-75446c319be0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.060911 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "2eae4569-bd3d-4d00-ae67-75446c319be0" (UID: "2eae4569-bd3d-4d00-ae67-75446c319be0"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.104223 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2eae4569-bd3d-4d00-ae67-75446c319be0" (UID: "2eae4569-bd3d-4d00-ae67-75446c319be0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.111176 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mwvn\" (UniqueName: \"kubernetes.io/projected/2eae4569-bd3d-4d00-ae67-75446c319be0-kube-api-access-8mwvn\") on node \"crc\" DevicePath \"\"" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.111216 4910 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.111229 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.111241 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.111254 4910 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.119601 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-config-data" (OuterVolumeSpecName: "config-data") pod "2eae4569-bd3d-4d00-ae67-75446c319be0" (UID: "2eae4569-bd3d-4d00-ae67-75446c319be0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.133529 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.195884 4910 generic.go:334] "Generic (PLEG): container finished" podID="2eae4569-bd3d-4d00-ae67-75446c319be0" containerID="6125af00ed7afae6b9dd6f33e648f2663a127718e201a6959793b0a218770ef2" exitCode=0 Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.196048 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2eae4569-bd3d-4d00-ae67-75446c319be0","Type":"ContainerDied","Data":"6125af00ed7afae6b9dd6f33e648f2663a127718e201a6959793b0a218770ef2"} Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.196824 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2eae4569-bd3d-4d00-ae67-75446c319be0","Type":"ContainerDied","Data":"6bb04de811179fb64abd0ebdef1c7f400ade71de611638aeeb82d03b8f9e36c9"} Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.196854 4910 scope.go:117] "RemoveContainer" containerID="6f9f1a379d0ce61bd96d4fd494ab2a748daa494c28114dde6b3be8cf54550758" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.196103 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.201083 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d37d540-f061-4881-9a00-3b0e0cfa1d47","Type":"ContainerStarted","Data":"509ff403703918a3590f5bdf48e5fe953b683305800948e29781acd59bf707d5"} Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.228324 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eae4569-bd3d-4d00-ae67-75446c319be0-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.247455 4910 scope.go:117] "RemoveContainer" containerID="9639da002add8747bd2e05c4fd6da993a6f25fa9fd3595ef2cd17d6eabc86af0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.247490 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.260602 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.320897 4910 scope.go:117] "RemoveContainer" containerID="ba8a172de67f24cc1f883c4e324da1e6540c184306a7784fbfc7d3d72630d21d" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.332361 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:20:11 crc kubenswrapper[4910]: E0226 22:20:11.333284 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eae4569-bd3d-4d00-ae67-75446c319be0" containerName="proxy-httpd" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.333305 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eae4569-bd3d-4d00-ae67-75446c319be0" containerName="proxy-httpd" Feb 26 22:20:11 crc kubenswrapper[4910]: E0226 22:20:11.333321 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eae4569-bd3d-4d00-ae67-75446c319be0" containerName="sg-core" Feb 
26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.333326 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eae4569-bd3d-4d00-ae67-75446c319be0" containerName="sg-core" Feb 26 22:20:11 crc kubenswrapper[4910]: E0226 22:20:11.333339 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eae4569-bd3d-4d00-ae67-75446c319be0" containerName="ceilometer-central-agent" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.333345 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eae4569-bd3d-4d00-ae67-75446c319be0" containerName="ceilometer-central-agent" Feb 26 22:20:11 crc kubenswrapper[4910]: E0226 22:20:11.333370 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eae4569-bd3d-4d00-ae67-75446c319be0" containerName="ceilometer-notification-agent" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.333375 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eae4569-bd3d-4d00-ae67-75446c319be0" containerName="ceilometer-notification-agent" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.333564 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eae4569-bd3d-4d00-ae67-75446c319be0" containerName="ceilometer-notification-agent" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.333585 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eae4569-bd3d-4d00-ae67-75446c319be0" containerName="proxy-httpd" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.333596 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eae4569-bd3d-4d00-ae67-75446c319be0" containerName="sg-core" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.333607 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eae4569-bd3d-4d00-ae67-75446c319be0" containerName="ceilometer-central-agent" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.336960 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.340306 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.340365 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.340381 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.348675 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.520515 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.537606 4910 scope.go:117] "RemoveContainer" containerID="6125af00ed7afae6b9dd6f33e648f2663a127718e201a6959793b0a218770ef2" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.538996 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.539141 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " pod="openstack/ceilometer-0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.539489 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkchw\" (UniqueName: \"kubernetes.io/projected/c2d9a088-c1a8-4505-a63c-cf1462097e73-kube-api-access-rkchw\") pod \"ceilometer-0\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " 
pod="openstack/ceilometer-0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.539536 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2d9a088-c1a8-4505-a63c-cf1462097e73-run-httpd\") pod \"ceilometer-0\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " pod="openstack/ceilometer-0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.539636 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-config-data\") pod \"ceilometer-0\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " pod="openstack/ceilometer-0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.539699 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2d9a088-c1a8-4505-a63c-cf1462097e73-log-httpd\") pod \"ceilometer-0\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " pod="openstack/ceilometer-0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.539907 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " pod="openstack/ceilometer-0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.539982 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-scripts\") pod \"ceilometer-0\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " pod="openstack/ceilometer-0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.540043 4910 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " pod="openstack/ceilometer-0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.568375 4910 scope.go:117] "RemoveContainer" containerID="6f9f1a379d0ce61bd96d4fd494ab2a748daa494c28114dde6b3be8cf54550758" Feb 26 22:20:11 crc kubenswrapper[4910]: E0226 22:20:11.570912 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f9f1a379d0ce61bd96d4fd494ab2a748daa494c28114dde6b3be8cf54550758\": container with ID starting with 6f9f1a379d0ce61bd96d4fd494ab2a748daa494c28114dde6b3be8cf54550758 not found: ID does not exist" containerID="6f9f1a379d0ce61bd96d4fd494ab2a748daa494c28114dde6b3be8cf54550758" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.570948 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f9f1a379d0ce61bd96d4fd494ab2a748daa494c28114dde6b3be8cf54550758"} err="failed to get container status \"6f9f1a379d0ce61bd96d4fd494ab2a748daa494c28114dde6b3be8cf54550758\": rpc error: code = NotFound desc = could not find container \"6f9f1a379d0ce61bd96d4fd494ab2a748daa494c28114dde6b3be8cf54550758\": container with ID starting with 6f9f1a379d0ce61bd96d4fd494ab2a748daa494c28114dde6b3be8cf54550758 not found: ID does not exist" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.570975 4910 scope.go:117] "RemoveContainer" containerID="9639da002add8747bd2e05c4fd6da993a6f25fa9fd3595ef2cd17d6eabc86af0" Feb 26 22:20:11 crc kubenswrapper[4910]: E0226 22:20:11.571282 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9639da002add8747bd2e05c4fd6da993a6f25fa9fd3595ef2cd17d6eabc86af0\": container with ID starting with 
9639da002add8747bd2e05c4fd6da993a6f25fa9fd3595ef2cd17d6eabc86af0 not found: ID does not exist" containerID="9639da002add8747bd2e05c4fd6da993a6f25fa9fd3595ef2cd17d6eabc86af0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.571310 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9639da002add8747bd2e05c4fd6da993a6f25fa9fd3595ef2cd17d6eabc86af0"} err="failed to get container status \"9639da002add8747bd2e05c4fd6da993a6f25fa9fd3595ef2cd17d6eabc86af0\": rpc error: code = NotFound desc = could not find container \"9639da002add8747bd2e05c4fd6da993a6f25fa9fd3595ef2cd17d6eabc86af0\": container with ID starting with 9639da002add8747bd2e05c4fd6da993a6f25fa9fd3595ef2cd17d6eabc86af0 not found: ID does not exist" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.571328 4910 scope.go:117] "RemoveContainer" containerID="ba8a172de67f24cc1f883c4e324da1e6540c184306a7784fbfc7d3d72630d21d" Feb 26 22:20:11 crc kubenswrapper[4910]: E0226 22:20:11.571666 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba8a172de67f24cc1f883c4e324da1e6540c184306a7784fbfc7d3d72630d21d\": container with ID starting with ba8a172de67f24cc1f883c4e324da1e6540c184306a7784fbfc7d3d72630d21d not found: ID does not exist" containerID="ba8a172de67f24cc1f883c4e324da1e6540c184306a7784fbfc7d3d72630d21d" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.571716 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba8a172de67f24cc1f883c4e324da1e6540c184306a7784fbfc7d3d72630d21d"} err="failed to get container status \"ba8a172de67f24cc1f883c4e324da1e6540c184306a7784fbfc7d3d72630d21d\": rpc error: code = NotFound desc = could not find container \"ba8a172de67f24cc1f883c4e324da1e6540c184306a7784fbfc7d3d72630d21d\": container with ID starting with ba8a172de67f24cc1f883c4e324da1e6540c184306a7784fbfc7d3d72630d21d not found: ID does not 
exist" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.571764 4910 scope.go:117] "RemoveContainer" containerID="6125af00ed7afae6b9dd6f33e648f2663a127718e201a6959793b0a218770ef2" Feb 26 22:20:11 crc kubenswrapper[4910]: E0226 22:20:11.572090 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6125af00ed7afae6b9dd6f33e648f2663a127718e201a6959793b0a218770ef2\": container with ID starting with 6125af00ed7afae6b9dd6f33e648f2663a127718e201a6959793b0a218770ef2 not found: ID does not exist" containerID="6125af00ed7afae6b9dd6f33e648f2663a127718e201a6959793b0a218770ef2" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.572117 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6125af00ed7afae6b9dd6f33e648f2663a127718e201a6959793b0a218770ef2"} err="failed to get container status \"6125af00ed7afae6b9dd6f33e648f2663a127718e201a6959793b0a218770ef2\": rpc error: code = NotFound desc = could not find container \"6125af00ed7afae6b9dd6f33e648f2663a127718e201a6959793b0a218770ef2\": container with ID starting with 6125af00ed7afae6b9dd6f33e648f2663a127718e201a6959793b0a218770ef2 not found: ID does not exist" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.642629 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkchw\" (UniqueName: \"kubernetes.io/projected/c2d9a088-c1a8-4505-a63c-cf1462097e73-kube-api-access-rkchw\") pod \"ceilometer-0\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " pod="openstack/ceilometer-0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.642932 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2d9a088-c1a8-4505-a63c-cf1462097e73-run-httpd\") pod \"ceilometer-0\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " pod="openstack/ceilometer-0" Feb 26 22:20:11 crc 
kubenswrapper[4910]: I0226 22:20:11.643023 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-config-data\") pod \"ceilometer-0\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " pod="openstack/ceilometer-0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.643058 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2d9a088-c1a8-4505-a63c-cf1462097e73-log-httpd\") pod \"ceilometer-0\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " pod="openstack/ceilometer-0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.643260 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " pod="openstack/ceilometer-0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.643314 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-scripts\") pod \"ceilometer-0\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " pod="openstack/ceilometer-0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.643341 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " pod="openstack/ceilometer-0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.643437 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " pod="openstack/ceilometer-0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.643668 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2d9a088-c1a8-4505-a63c-cf1462097e73-run-httpd\") pod \"ceilometer-0\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " pod="openstack/ceilometer-0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.643708 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2d9a088-c1a8-4505-a63c-cf1462097e73-log-httpd\") pod \"ceilometer-0\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " pod="openstack/ceilometer-0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.647538 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-config-data\") pod \"ceilometer-0\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " pod="openstack/ceilometer-0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.648368 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-scripts\") pod \"ceilometer-0\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " pod="openstack/ceilometer-0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.648838 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " pod="openstack/ceilometer-0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.648928 4910 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " pod="openstack/ceilometer-0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.649889 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " pod="openstack/ceilometer-0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.674405 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkchw\" (UniqueName: \"kubernetes.io/projected/c2d9a088-c1a8-4505-a63c-cf1462097e73-kube-api-access-rkchw\") pod \"ceilometer-0\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " pod="openstack/ceilometer-0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.858220 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.913642 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eae4569-bd3d-4d00-ae67-75446c319be0" path="/var/lib/kubelet/pods/2eae4569-bd3d-4d00-ae67-75446c319be0/volumes" Feb 26 22:20:11 crc kubenswrapper[4910]: I0226 22:20:11.914396 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34522f42-d10f-4ccb-b40a-ba21f7dcbe56" path="/var/lib/kubelet/pods/34522f42-d10f-4ccb-b40a-ba21f7dcbe56/volumes" Feb 26 22:20:12 crc kubenswrapper[4910]: I0226 22:20:12.223485 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d37d540-f061-4881-9a00-3b0e0cfa1d47","Type":"ContainerStarted","Data":"9e7bf357106900433eabef247cd4d8846785c6e3d5e029b2f66dc976ada01a9a"} Feb 26 22:20:12 crc kubenswrapper[4910]: I0226 22:20:12.223564 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d37d540-f061-4881-9a00-3b0e0cfa1d47","Type":"ContainerStarted","Data":"d3f71a95557e1c6eb6c3ec24820f95e80020e7d9a140b2505ec1a15e20bebc5c"} Feb 26 22:20:12 crc kubenswrapper[4910]: I0226 22:20:12.254654 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.254630294 podStartE2EDuration="2.254630294s" podCreationTimestamp="2026-02-26 22:20:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:20:12.24166 +0000 UTC m=+1497.321150591" watchObservedRunningTime="2026-02-26 22:20:12.254630294 +0000 UTC m=+1497.334120855" Feb 26 22:20:12 crc kubenswrapper[4910]: I0226 22:20:12.257820 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 26 22:20:12 crc kubenswrapper[4910]: I0226 22:20:12.398853 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ceilometer-0"] Feb 26 22:20:12 crc kubenswrapper[4910]: I0226 22:20:12.483109 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-6sdhx"] Feb 26 22:20:12 crc kubenswrapper[4910]: I0226 22:20:12.484416 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6sdhx" Feb 26 22:20:12 crc kubenswrapper[4910]: I0226 22:20:12.490621 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 26 22:20:12 crc kubenswrapper[4910]: I0226 22:20:12.490813 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 26 22:20:12 crc kubenswrapper[4910]: I0226 22:20:12.502271 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-6sdhx"] Feb 26 22:20:12 crc kubenswrapper[4910]: I0226 22:20:12.594086 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98380662-1912-42a4-bf57-f5249871c687-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6sdhx\" (UID: \"98380662-1912-42a4-bf57-f5249871c687\") " pod="openstack/nova-cell1-cell-mapping-6sdhx" Feb 26 22:20:12 crc kubenswrapper[4910]: I0226 22:20:12.594368 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98380662-1912-42a4-bf57-f5249871c687-config-data\") pod \"nova-cell1-cell-mapping-6sdhx\" (UID: \"98380662-1912-42a4-bf57-f5249871c687\") " pod="openstack/nova-cell1-cell-mapping-6sdhx" Feb 26 22:20:12 crc kubenswrapper[4910]: I0226 22:20:12.594420 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98380662-1912-42a4-bf57-f5249871c687-scripts\") pod 
\"nova-cell1-cell-mapping-6sdhx\" (UID: \"98380662-1912-42a4-bf57-f5249871c687\") " pod="openstack/nova-cell1-cell-mapping-6sdhx" Feb 26 22:20:12 crc kubenswrapper[4910]: I0226 22:20:12.594477 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prpwm\" (UniqueName: \"kubernetes.io/projected/98380662-1912-42a4-bf57-f5249871c687-kube-api-access-prpwm\") pod \"nova-cell1-cell-mapping-6sdhx\" (UID: \"98380662-1912-42a4-bf57-f5249871c687\") " pod="openstack/nova-cell1-cell-mapping-6sdhx" Feb 26 22:20:12 crc kubenswrapper[4910]: I0226 22:20:12.696428 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98380662-1912-42a4-bf57-f5249871c687-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6sdhx\" (UID: \"98380662-1912-42a4-bf57-f5249871c687\") " pod="openstack/nova-cell1-cell-mapping-6sdhx" Feb 26 22:20:12 crc kubenswrapper[4910]: I0226 22:20:12.696472 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98380662-1912-42a4-bf57-f5249871c687-config-data\") pod \"nova-cell1-cell-mapping-6sdhx\" (UID: \"98380662-1912-42a4-bf57-f5249871c687\") " pod="openstack/nova-cell1-cell-mapping-6sdhx" Feb 26 22:20:12 crc kubenswrapper[4910]: I0226 22:20:12.696547 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98380662-1912-42a4-bf57-f5249871c687-scripts\") pod \"nova-cell1-cell-mapping-6sdhx\" (UID: \"98380662-1912-42a4-bf57-f5249871c687\") " pod="openstack/nova-cell1-cell-mapping-6sdhx" Feb 26 22:20:12 crc kubenswrapper[4910]: I0226 22:20:12.696623 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prpwm\" (UniqueName: \"kubernetes.io/projected/98380662-1912-42a4-bf57-f5249871c687-kube-api-access-prpwm\") pod 
\"nova-cell1-cell-mapping-6sdhx\" (UID: \"98380662-1912-42a4-bf57-f5249871c687\") " pod="openstack/nova-cell1-cell-mapping-6sdhx" Feb 26 22:20:12 crc kubenswrapper[4910]: I0226 22:20:12.705476 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98380662-1912-42a4-bf57-f5249871c687-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6sdhx\" (UID: \"98380662-1912-42a4-bf57-f5249871c687\") " pod="openstack/nova-cell1-cell-mapping-6sdhx" Feb 26 22:20:12 crc kubenswrapper[4910]: I0226 22:20:12.708432 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98380662-1912-42a4-bf57-f5249871c687-scripts\") pod \"nova-cell1-cell-mapping-6sdhx\" (UID: \"98380662-1912-42a4-bf57-f5249871c687\") " pod="openstack/nova-cell1-cell-mapping-6sdhx" Feb 26 22:20:12 crc kubenswrapper[4910]: I0226 22:20:12.709913 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98380662-1912-42a4-bf57-f5249871c687-config-data\") pod \"nova-cell1-cell-mapping-6sdhx\" (UID: \"98380662-1912-42a4-bf57-f5249871c687\") " pod="openstack/nova-cell1-cell-mapping-6sdhx" Feb 26 22:20:12 crc kubenswrapper[4910]: I0226 22:20:12.717108 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prpwm\" (UniqueName: \"kubernetes.io/projected/98380662-1912-42a4-bf57-f5249871c687-kube-api-access-prpwm\") pod \"nova-cell1-cell-mapping-6sdhx\" (UID: \"98380662-1912-42a4-bf57-f5249871c687\") " pod="openstack/nova-cell1-cell-mapping-6sdhx" Feb 26 22:20:12 crc kubenswrapper[4910]: I0226 22:20:12.814561 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6sdhx" Feb 26 22:20:13 crc kubenswrapper[4910]: I0226 22:20:13.270174 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2d9a088-c1a8-4505-a63c-cf1462097e73","Type":"ContainerStarted","Data":"7b4497e1cae78bcd112c80adeb8009d6795edddb74467529277c4c0be6d3bd78"} Feb 26 22:20:13 crc kubenswrapper[4910]: I0226 22:20:13.270444 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2d9a088-c1a8-4505-a63c-cf1462097e73","Type":"ContainerStarted","Data":"f3f7ba87bc7c3a1b7b9ba6026e891a8570be4ae892ffcae18fc3aa420d97f9e8"} Feb 26 22:20:13 crc kubenswrapper[4910]: I0226 22:20:13.288614 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-6sdhx"] Feb 26 22:20:13 crc kubenswrapper[4910]: W0226 22:20:13.354281 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98380662_1912_42a4_bf57_f5249871c687.slice/crio-e7daffa0c1eb14ef833453110db20f6a8a5936fe8fd7d43f092fdaaeb9c3b22e WatchSource:0}: Error finding container e7daffa0c1eb14ef833453110db20f6a8a5936fe8fd7d43f092fdaaeb9c3b22e: Status 404 returned error can't find the container with id e7daffa0c1eb14ef833453110db20f6a8a5936fe8fd7d43f092fdaaeb9c3b22e Feb 26 22:20:13 crc kubenswrapper[4910]: I0226 22:20:13.650362 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54dd998c-krxqd" Feb 26 22:20:13 crc kubenswrapper[4910]: I0226 22:20:13.723282 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-dmjkg"] Feb 26 22:20:13 crc kubenswrapper[4910]: I0226 22:20:13.723618 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" podUID="36f998e9-29f3-4464-9f84-8fa4e0efda10" containerName="dnsmasq-dns" 
containerID="cri-o://abccd47f285ce015647aa4e941efcc1a22a3e2c34b2a83ef871e215a0888717c" gracePeriod=10 Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.296439 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.298811 4910 generic.go:334] "Generic (PLEG): container finished" podID="36f998e9-29f3-4464-9f84-8fa4e0efda10" containerID="abccd47f285ce015647aa4e941efcc1a22a3e2c34b2a83ef871e215a0888717c" exitCode=0 Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.298865 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" event={"ID":"36f998e9-29f3-4464-9f84-8fa4e0efda10","Type":"ContainerDied","Data":"abccd47f285ce015647aa4e941efcc1a22a3e2c34b2a83ef871e215a0888717c"} Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.298889 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" event={"ID":"36f998e9-29f3-4464-9f84-8fa4e0efda10","Type":"ContainerDied","Data":"64b257352a380107695ee0e9e1d35751dece94ed4bff50f81d64f57a4a5b1d7a"} Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.298905 4910 scope.go:117] "RemoveContainer" containerID="abccd47f285ce015647aa4e941efcc1a22a3e2c34b2a83ef871e215a0888717c" Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.325932 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6sdhx" event={"ID":"98380662-1912-42a4-bf57-f5249871c687","Type":"ContainerStarted","Data":"6b76c09195c50e7f6e6e1e22e58ea9c9ffe800a09cf1651e6e1deb06c4b9843a"} Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.326180 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6sdhx" event={"ID":"98380662-1912-42a4-bf57-f5249871c687","Type":"ContainerStarted","Data":"e7daffa0c1eb14ef833453110db20f6a8a5936fe8fd7d43f092fdaaeb9c3b22e"} Feb 26 22:20:14 crc 
kubenswrapper[4910]: I0226 22:20:14.348051 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-676ns\" (UniqueName: \"kubernetes.io/projected/36f998e9-29f3-4464-9f84-8fa4e0efda10-kube-api-access-676ns\") pod \"36f998e9-29f3-4464-9f84-8fa4e0efda10\" (UID: \"36f998e9-29f3-4464-9f84-8fa4e0efda10\") " Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.348363 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-dns-svc\") pod \"36f998e9-29f3-4464-9f84-8fa4e0efda10\" (UID: \"36f998e9-29f3-4464-9f84-8fa4e0efda10\") " Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.348436 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-ovsdbserver-sb\") pod \"36f998e9-29f3-4464-9f84-8fa4e0efda10\" (UID: \"36f998e9-29f3-4464-9f84-8fa4e0efda10\") " Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.348476 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-dns-swift-storage-0\") pod \"36f998e9-29f3-4464-9f84-8fa4e0efda10\" (UID: \"36f998e9-29f3-4464-9f84-8fa4e0efda10\") " Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.348508 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-config\") pod \"36f998e9-29f3-4464-9f84-8fa4e0efda10\" (UID: \"36f998e9-29f3-4464-9f84-8fa4e0efda10\") " Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.348548 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-ovsdbserver-nb\") pod \"36f998e9-29f3-4464-9f84-8fa4e0efda10\" (UID: \"36f998e9-29f3-4464-9f84-8fa4e0efda10\") " Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.359315 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36f998e9-29f3-4464-9f84-8fa4e0efda10-kube-api-access-676ns" (OuterVolumeSpecName: "kube-api-access-676ns") pod "36f998e9-29f3-4464-9f84-8fa4e0efda10" (UID: "36f998e9-29f3-4464-9f84-8fa4e0efda10"). InnerVolumeSpecName "kube-api-access-676ns". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.367503 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2d9a088-c1a8-4505-a63c-cf1462097e73","Type":"ContainerStarted","Data":"3f3313f21ba7b1419192c51e1ff097ed22102818a53e72da4e5824e2c12e9ee2"} Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.396010 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-6sdhx" podStartSLOduration=2.39598774 podStartE2EDuration="2.39598774s" podCreationTimestamp="2026-02-26 22:20:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:20:14.349537334 +0000 UTC m=+1499.429027875" watchObservedRunningTime="2026-02-26 22:20:14.39598774 +0000 UTC m=+1499.475478281" Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.409441 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "36f998e9-29f3-4464-9f84-8fa4e0efda10" (UID: "36f998e9-29f3-4464-9f84-8fa4e0efda10"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.415894 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-config" (OuterVolumeSpecName: "config") pod "36f998e9-29f3-4464-9f84-8fa4e0efda10" (UID: "36f998e9-29f3-4464-9f84-8fa4e0efda10"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.429684 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "36f998e9-29f3-4464-9f84-8fa4e0efda10" (UID: "36f998e9-29f3-4464-9f84-8fa4e0efda10"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.433114 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "36f998e9-29f3-4464-9f84-8fa4e0efda10" (UID: "36f998e9-29f3-4464-9f84-8fa4e0efda10"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.450912 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-676ns\" (UniqueName: \"kubernetes.io/projected/36f998e9-29f3-4464-9f84-8fa4e0efda10-kube-api-access-676ns\") on node \"crc\" DevicePath \"\"" Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.450970 4910 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.450981 4910 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.450990 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.451000 4910 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.457977 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "36f998e9-29f3-4464-9f84-8fa4e0efda10" (UID: "36f998e9-29f3-4464-9f84-8fa4e0efda10"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.553119 4910 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36f998e9-29f3-4464-9f84-8fa4e0efda10-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.569397 4910 scope.go:117] "RemoveContainer" containerID="ca633e175843035ef74e3ce28879fc0df2aefa0dee8de7c7bd579b35d454b1ff" Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.596321 4910 scope.go:117] "RemoveContainer" containerID="abccd47f285ce015647aa4e941efcc1a22a3e2c34b2a83ef871e215a0888717c" Feb 26 22:20:14 crc kubenswrapper[4910]: E0226 22:20:14.598404 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abccd47f285ce015647aa4e941efcc1a22a3e2c34b2a83ef871e215a0888717c\": container with ID starting with abccd47f285ce015647aa4e941efcc1a22a3e2c34b2a83ef871e215a0888717c not found: ID does not exist" containerID="abccd47f285ce015647aa4e941efcc1a22a3e2c34b2a83ef871e215a0888717c" Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.598445 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abccd47f285ce015647aa4e941efcc1a22a3e2c34b2a83ef871e215a0888717c"} err="failed to get container status \"abccd47f285ce015647aa4e941efcc1a22a3e2c34b2a83ef871e215a0888717c\": rpc error: code = NotFound desc = could not find container \"abccd47f285ce015647aa4e941efcc1a22a3e2c34b2a83ef871e215a0888717c\": container with ID starting with abccd47f285ce015647aa4e941efcc1a22a3e2c34b2a83ef871e215a0888717c not found: ID does not exist" Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.598471 4910 scope.go:117] "RemoveContainer" containerID="ca633e175843035ef74e3ce28879fc0df2aefa0dee8de7c7bd579b35d454b1ff" Feb 26 22:20:14 crc kubenswrapper[4910]: E0226 22:20:14.599084 4910 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca633e175843035ef74e3ce28879fc0df2aefa0dee8de7c7bd579b35d454b1ff\": container with ID starting with ca633e175843035ef74e3ce28879fc0df2aefa0dee8de7c7bd579b35d454b1ff not found: ID does not exist" containerID="ca633e175843035ef74e3ce28879fc0df2aefa0dee8de7c7bd579b35d454b1ff" Feb 26 22:20:14 crc kubenswrapper[4910]: I0226 22:20:14.599171 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca633e175843035ef74e3ce28879fc0df2aefa0dee8de7c7bd579b35d454b1ff"} err="failed to get container status \"ca633e175843035ef74e3ce28879fc0df2aefa0dee8de7c7bd579b35d454b1ff\": rpc error: code = NotFound desc = could not find container \"ca633e175843035ef74e3ce28879fc0df2aefa0dee8de7c7bd579b35d454b1ff\": container with ID starting with ca633e175843035ef74e3ce28879fc0df2aefa0dee8de7c7bd579b35d454b1ff not found: ID does not exist" Feb 26 22:20:15 crc kubenswrapper[4910]: I0226 22:20:15.386170 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2d9a088-c1a8-4505-a63c-cf1462097e73","Type":"ContainerStarted","Data":"e50e6df3495d2588053f2f9d6a3c099faa493f97700748f4a9873c53c3d98eaa"} Feb 26 22:20:15 crc kubenswrapper[4910]: I0226 22:20:15.387912 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-884c8b8f5-dmjkg" Feb 26 22:20:15 crc kubenswrapper[4910]: I0226 22:20:15.436037 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-dmjkg"] Feb 26 22:20:15 crc kubenswrapper[4910]: I0226 22:20:15.444759 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-dmjkg"] Feb 26 22:20:15 crc kubenswrapper[4910]: I0226 22:20:15.928010 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36f998e9-29f3-4464-9f84-8fa4e0efda10" path="/var/lib/kubelet/pods/36f998e9-29f3-4464-9f84-8fa4e0efda10/volumes" Feb 26 22:20:17 crc kubenswrapper[4910]: I0226 22:20:17.414441 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2d9a088-c1a8-4505-a63c-cf1462097e73","Type":"ContainerStarted","Data":"5c6d76b86a8687a69686b0953f686edf87fc87657476233ab812ab41e040e504"} Feb 26 22:20:17 crc kubenswrapper[4910]: I0226 22:20:17.415971 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 22:20:17 crc kubenswrapper[4910]: I0226 22:20:17.452838 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.194207377 podStartE2EDuration="6.452802773s" podCreationTimestamp="2026-02-26 22:20:11 +0000 UTC" firstStartedPulling="2026-02-26 22:20:12.400390847 +0000 UTC m=+1497.479881388" lastFinishedPulling="2026-02-26 22:20:16.658986243 +0000 UTC m=+1501.738476784" observedRunningTime="2026-02-26 22:20:17.443447769 +0000 UTC m=+1502.522938350" watchObservedRunningTime="2026-02-26 22:20:17.452802773 +0000 UTC m=+1502.532293354" Feb 26 22:20:18 crc kubenswrapper[4910]: I0226 22:20:18.428357 4910 generic.go:334] "Generic (PLEG): container finished" podID="98380662-1912-42a4-bf57-f5249871c687" containerID="6b76c09195c50e7f6e6e1e22e58ea9c9ffe800a09cf1651e6e1deb06c4b9843a" exitCode=0 Feb 26 22:20:18 crc 
kubenswrapper[4910]: I0226 22:20:18.428463 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6sdhx" event={"ID":"98380662-1912-42a4-bf57-f5249871c687","Type":"ContainerDied","Data":"6b76c09195c50e7f6e6e1e22e58ea9c9ffe800a09cf1651e6e1deb06c4b9843a"} Feb 26 22:20:19 crc kubenswrapper[4910]: I0226 22:20:19.793760 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5qrpd"] Feb 26 22:20:19 crc kubenswrapper[4910]: E0226 22:20:19.795038 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f998e9-29f3-4464-9f84-8fa4e0efda10" containerName="dnsmasq-dns" Feb 26 22:20:19 crc kubenswrapper[4910]: I0226 22:20:19.795055 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f998e9-29f3-4464-9f84-8fa4e0efda10" containerName="dnsmasq-dns" Feb 26 22:20:19 crc kubenswrapper[4910]: E0226 22:20:19.795111 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f998e9-29f3-4464-9f84-8fa4e0efda10" containerName="init" Feb 26 22:20:19 crc kubenswrapper[4910]: I0226 22:20:19.795120 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f998e9-29f3-4464-9f84-8fa4e0efda10" containerName="init" Feb 26 22:20:19 crc kubenswrapper[4910]: I0226 22:20:19.795387 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="36f998e9-29f3-4464-9f84-8fa4e0efda10" containerName="dnsmasq-dns" Feb 26 22:20:19 crc kubenswrapper[4910]: I0226 22:20:19.797810 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5qrpd" Feb 26 22:20:19 crc kubenswrapper[4910]: I0226 22:20:19.817627 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5qrpd"] Feb 26 22:20:19 crc kubenswrapper[4910]: I0226 22:20:19.885578 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa71cf48-46a0-4082-9897-f03175b0abf7-catalog-content\") pod \"redhat-operators-5qrpd\" (UID: \"fa71cf48-46a0-4082-9897-f03175b0abf7\") " pod="openshift-marketplace/redhat-operators-5qrpd" Feb 26 22:20:19 crc kubenswrapper[4910]: I0226 22:20:19.885652 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88vj7\" (UniqueName: \"kubernetes.io/projected/fa71cf48-46a0-4082-9897-f03175b0abf7-kube-api-access-88vj7\") pod \"redhat-operators-5qrpd\" (UID: \"fa71cf48-46a0-4082-9897-f03175b0abf7\") " pod="openshift-marketplace/redhat-operators-5qrpd" Feb 26 22:20:19 crc kubenswrapper[4910]: I0226 22:20:19.885686 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa71cf48-46a0-4082-9897-f03175b0abf7-utilities\") pod \"redhat-operators-5qrpd\" (UID: \"fa71cf48-46a0-4082-9897-f03175b0abf7\") " pod="openshift-marketplace/redhat-operators-5qrpd" Feb 26 22:20:19 crc kubenswrapper[4910]: I0226 22:20:19.985409 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6sdhx" Feb 26 22:20:19 crc kubenswrapper[4910]: I0226 22:20:19.987574 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa71cf48-46a0-4082-9897-f03175b0abf7-catalog-content\") pod \"redhat-operators-5qrpd\" (UID: \"fa71cf48-46a0-4082-9897-f03175b0abf7\") " pod="openshift-marketplace/redhat-operators-5qrpd" Feb 26 22:20:19 crc kubenswrapper[4910]: I0226 22:20:19.987626 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88vj7\" (UniqueName: \"kubernetes.io/projected/fa71cf48-46a0-4082-9897-f03175b0abf7-kube-api-access-88vj7\") pod \"redhat-operators-5qrpd\" (UID: \"fa71cf48-46a0-4082-9897-f03175b0abf7\") " pod="openshift-marketplace/redhat-operators-5qrpd" Feb 26 22:20:19 crc kubenswrapper[4910]: I0226 22:20:19.987648 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa71cf48-46a0-4082-9897-f03175b0abf7-utilities\") pod \"redhat-operators-5qrpd\" (UID: \"fa71cf48-46a0-4082-9897-f03175b0abf7\") " pod="openshift-marketplace/redhat-operators-5qrpd" Feb 26 22:20:19 crc kubenswrapper[4910]: I0226 22:20:19.988037 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa71cf48-46a0-4082-9897-f03175b0abf7-catalog-content\") pod \"redhat-operators-5qrpd\" (UID: \"fa71cf48-46a0-4082-9897-f03175b0abf7\") " pod="openshift-marketplace/redhat-operators-5qrpd" Feb 26 22:20:19 crc kubenswrapper[4910]: I0226 22:20:19.988116 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa71cf48-46a0-4082-9897-f03175b0abf7-utilities\") pod \"redhat-operators-5qrpd\" (UID: \"fa71cf48-46a0-4082-9897-f03175b0abf7\") " pod="openshift-marketplace/redhat-operators-5qrpd" 
Feb 26 22:20:20 crc kubenswrapper[4910]: I0226 22:20:20.005994 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88vj7\" (UniqueName: \"kubernetes.io/projected/fa71cf48-46a0-4082-9897-f03175b0abf7-kube-api-access-88vj7\") pod \"redhat-operators-5qrpd\" (UID: \"fa71cf48-46a0-4082-9897-f03175b0abf7\") " pod="openshift-marketplace/redhat-operators-5qrpd"
Feb 26 22:20:20 crc kubenswrapper[4910]: I0226 22:20:20.088642 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98380662-1912-42a4-bf57-f5249871c687-config-data\") pod \"98380662-1912-42a4-bf57-f5249871c687\" (UID: \"98380662-1912-42a4-bf57-f5249871c687\") "
Feb 26 22:20:20 crc kubenswrapper[4910]: I0226 22:20:20.088765 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98380662-1912-42a4-bf57-f5249871c687-combined-ca-bundle\") pod \"98380662-1912-42a4-bf57-f5249871c687\" (UID: \"98380662-1912-42a4-bf57-f5249871c687\") "
Feb 26 22:20:20 crc kubenswrapper[4910]: I0226 22:20:20.088985 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prpwm\" (UniqueName: \"kubernetes.io/projected/98380662-1912-42a4-bf57-f5249871c687-kube-api-access-prpwm\") pod \"98380662-1912-42a4-bf57-f5249871c687\" (UID: \"98380662-1912-42a4-bf57-f5249871c687\") "
Feb 26 22:20:20 crc kubenswrapper[4910]: I0226 22:20:20.089049 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98380662-1912-42a4-bf57-f5249871c687-scripts\") pod \"98380662-1912-42a4-bf57-f5249871c687\" (UID: \"98380662-1912-42a4-bf57-f5249871c687\") "
Feb 26 22:20:20 crc kubenswrapper[4910]: I0226 22:20:20.095264 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98380662-1912-42a4-bf57-f5249871c687-scripts" (OuterVolumeSpecName: "scripts") pod "98380662-1912-42a4-bf57-f5249871c687" (UID: "98380662-1912-42a4-bf57-f5249871c687"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 22:20:20 crc kubenswrapper[4910]: I0226 22:20:20.095396 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98380662-1912-42a4-bf57-f5249871c687-kube-api-access-prpwm" (OuterVolumeSpecName: "kube-api-access-prpwm") pod "98380662-1912-42a4-bf57-f5249871c687" (UID: "98380662-1912-42a4-bf57-f5249871c687"). InnerVolumeSpecName "kube-api-access-prpwm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 22:20:20 crc kubenswrapper[4910]: I0226 22:20:20.121329 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98380662-1912-42a4-bf57-f5249871c687-config-data" (OuterVolumeSpecName: "config-data") pod "98380662-1912-42a4-bf57-f5249871c687" (UID: "98380662-1912-42a4-bf57-f5249871c687"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 22:20:20 crc kubenswrapper[4910]: I0226 22:20:20.128974 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5qrpd"
Feb 26 22:20:20 crc kubenswrapper[4910]: I0226 22:20:20.129914 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98380662-1912-42a4-bf57-f5249871c687-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98380662-1912-42a4-bf57-f5249871c687" (UID: "98380662-1912-42a4-bf57-f5249871c687"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 22:20:20 crc kubenswrapper[4910]: I0226 22:20:20.191036 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prpwm\" (UniqueName: \"kubernetes.io/projected/98380662-1912-42a4-bf57-f5249871c687-kube-api-access-prpwm\") on node \"crc\" DevicePath \"\""
Feb 26 22:20:20 crc kubenswrapper[4910]: I0226 22:20:20.191353 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98380662-1912-42a4-bf57-f5249871c687-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 22:20:20 crc kubenswrapper[4910]: I0226 22:20:20.191364 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98380662-1912-42a4-bf57-f5249871c687-config-data\") on node \"crc\" DevicePath \"\""
Feb 26 22:20:20 crc kubenswrapper[4910]: I0226 22:20:20.191376 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98380662-1912-42a4-bf57-f5249871c687-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 22:20:20 crc kubenswrapper[4910]: I0226 22:20:20.453896 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6sdhx" event={"ID":"98380662-1912-42a4-bf57-f5249871c687","Type":"ContainerDied","Data":"e7daffa0c1eb14ef833453110db20f6a8a5936fe8fd7d43f092fdaaeb9c3b22e"}
Feb 26 22:20:20 crc kubenswrapper[4910]: I0226 22:20:20.453946 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7daffa0c1eb14ef833453110db20f6a8a5936fe8fd7d43f092fdaaeb9c3b22e"
Feb 26 22:20:20 crc kubenswrapper[4910]: I0226 22:20:20.454033 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6sdhx"
Feb 26 22:20:20 crc kubenswrapper[4910]: I0226 22:20:20.636407 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 26 22:20:20 crc kubenswrapper[4910]: I0226 22:20:20.636672 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 26 22:20:20 crc kubenswrapper[4910]: I0226 22:20:20.640073 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 26 22:20:20 crc kubenswrapper[4910]: I0226 22:20:20.653458 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 26 22:20:20 crc kubenswrapper[4910]: I0226 22:20:20.653641 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1b754ac5-70ff-4b7c-9385-61ca219f9bb8" containerName="nova-scheduler-scheduler" containerID="cri-o://27656c966a3db140a0eb8b0ad7691768feddbd0dc80f49a86b74f961ab9d6f0b" gracePeriod=30
Feb 26 22:20:20 crc kubenswrapper[4910]: I0226 22:20:20.663683 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5qrpd"]
Feb 26 22:20:20 crc kubenswrapper[4910]: I0226 22:20:20.672936 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 26 22:20:20 crc kubenswrapper[4910]: I0226 22:20:20.673194 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="23eb0be8-742b-4fa8-acea-74668f976e0c" containerName="nova-metadata-log" containerID="cri-o://8944f6b02c5eab45baed548cd3695a36dbd1b3e0d3748882194817fa585e3405" gracePeriod=30
Feb 26 22:20:20 crc kubenswrapper[4910]: I0226 22:20:20.673317 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="23eb0be8-742b-4fa8-acea-74668f976e0c" containerName="nova-metadata-metadata" containerID="cri-o://ecb3c162daee7fdc2a5576102cae07381f25c8c96181ac4dd9cdcf5b9dc9428c" gracePeriod=30
Feb 26 22:20:21 crc kubenswrapper[4910]: I0226 22:20:21.496442 4910 generic.go:334] "Generic (PLEG): container finished" podID="23eb0be8-742b-4fa8-acea-74668f976e0c" containerID="8944f6b02c5eab45baed548cd3695a36dbd1b3e0d3748882194817fa585e3405" exitCode=143
Feb 26 22:20:21 crc kubenswrapper[4910]: I0226 22:20:21.496680 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"23eb0be8-742b-4fa8-acea-74668f976e0c","Type":"ContainerDied","Data":"8944f6b02c5eab45baed548cd3695a36dbd1b3e0d3748882194817fa585e3405"}
Feb 26 22:20:21 crc kubenswrapper[4910]: I0226 22:20:21.527431 4910 generic.go:334] "Generic (PLEG): container finished" podID="fa71cf48-46a0-4082-9897-f03175b0abf7" containerID="35519bb9b53c63db7f159aaa00bc56f5271b2ff80fd88c9b149f4790daebe1fd" exitCode=0
Feb 26 22:20:21 crc kubenswrapper[4910]: I0226 22:20:21.527650 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3d37d540-f061-4881-9a00-3b0e0cfa1d47" containerName="nova-api-log" containerID="cri-o://d3f71a95557e1c6eb6c3ec24820f95e80020e7d9a140b2505ec1a15e20bebc5c" gracePeriod=30
Feb 26 22:20:21 crc kubenswrapper[4910]: I0226 22:20:21.528799 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qrpd" event={"ID":"fa71cf48-46a0-4082-9897-f03175b0abf7","Type":"ContainerDied","Data":"35519bb9b53c63db7f159aaa00bc56f5271b2ff80fd88c9b149f4790daebe1fd"}
Feb 26 22:20:21 crc kubenswrapper[4910]: I0226 22:20:21.528823 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qrpd" event={"ID":"fa71cf48-46a0-4082-9897-f03175b0abf7","Type":"ContainerStarted","Data":"db683075f91080bbb26d7b8d4d1d1f9862a1bee3056d4eeed4472643ed1e6348"}
Feb 26 22:20:21 crc kubenswrapper[4910]: I0226 22:20:21.529097 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3d37d540-f061-4881-9a00-3b0e0cfa1d47" containerName="nova-api-api" containerID="cri-o://9e7bf357106900433eabef247cd4d8846785c6e3d5e029b2f66dc976ada01a9a" gracePeriod=30
Feb 26 22:20:21 crc kubenswrapper[4910]: I0226 22:20:21.549241 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3d37d540-f061-4881-9a00-3b0e0cfa1d47" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.237:8774/\": EOF"
Feb 26 22:20:21 crc kubenswrapper[4910]: I0226 22:20:21.549257 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3d37d540-f061-4881-9a00-3b0e0cfa1d47" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.237:8774/\": EOF"
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.490792 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.539228 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b754ac5-70ff-4b7c-9385-61ca219f9bb8-combined-ca-bundle\") pod \"1b754ac5-70ff-4b7c-9385-61ca219f9bb8\" (UID: \"1b754ac5-70ff-4b7c-9385-61ca219f9bb8\") "
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.539277 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b754ac5-70ff-4b7c-9385-61ca219f9bb8-config-data\") pod \"1b754ac5-70ff-4b7c-9385-61ca219f9bb8\" (UID: \"1b754ac5-70ff-4b7c-9385-61ca219f9bb8\") "
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.539354 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28xcr\" (UniqueName: \"kubernetes.io/projected/1b754ac5-70ff-4b7c-9385-61ca219f9bb8-kube-api-access-28xcr\") pod \"1b754ac5-70ff-4b7c-9385-61ca219f9bb8\" (UID: \"1b754ac5-70ff-4b7c-9385-61ca219f9bb8\") "
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.548725 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b754ac5-70ff-4b7c-9385-61ca219f9bb8-kube-api-access-28xcr" (OuterVolumeSpecName: "kube-api-access-28xcr") pod "1b754ac5-70ff-4b7c-9385-61ca219f9bb8" (UID: "1b754ac5-70ff-4b7c-9385-61ca219f9bb8"). InnerVolumeSpecName "kube-api-access-28xcr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.549263 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qrpd" event={"ID":"fa71cf48-46a0-4082-9897-f03175b0abf7","Type":"ContainerStarted","Data":"e512b947205e443c57e2af9cb93d09eace1745f01093cf8b2a2b1fd1cb21a970"}
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.553121 4910 generic.go:334] "Generic (PLEG): container finished" podID="3d37d540-f061-4881-9a00-3b0e0cfa1d47" containerID="d3f71a95557e1c6eb6c3ec24820f95e80020e7d9a140b2505ec1a15e20bebc5c" exitCode=143
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.553206 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d37d540-f061-4881-9a00-3b0e0cfa1d47","Type":"ContainerDied","Data":"d3f71a95557e1c6eb6c3ec24820f95e80020e7d9a140b2505ec1a15e20bebc5c"}
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.554898 4910 generic.go:334] "Generic (PLEG): container finished" podID="1b754ac5-70ff-4b7c-9385-61ca219f9bb8" containerID="27656c966a3db140a0eb8b0ad7691768feddbd0dc80f49a86b74f961ab9d6f0b" exitCode=0
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.554942 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b754ac5-70ff-4b7c-9385-61ca219f9bb8","Type":"ContainerDied","Data":"27656c966a3db140a0eb8b0ad7691768feddbd0dc80f49a86b74f961ab9d6f0b"}
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.554969 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b754ac5-70ff-4b7c-9385-61ca219f9bb8","Type":"ContainerDied","Data":"08d9ffe93405f1f37f23eccb0f0abf1c578a6475f2c2199dd9f71a245f6715fd"}
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.554994 4910 scope.go:117] "RemoveContainer" containerID="27656c966a3db140a0eb8b0ad7691768feddbd0dc80f49a86b74f961ab9d6f0b"
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.555132 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.577152 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b754ac5-70ff-4b7c-9385-61ca219f9bb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b754ac5-70ff-4b7c-9385-61ca219f9bb8" (UID: "1b754ac5-70ff-4b7c-9385-61ca219f9bb8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.626840 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b754ac5-70ff-4b7c-9385-61ca219f9bb8-config-data" (OuterVolumeSpecName: "config-data") pod "1b754ac5-70ff-4b7c-9385-61ca219f9bb8" (UID: "1b754ac5-70ff-4b7c-9385-61ca219f9bb8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.627097 4910 scope.go:117] "RemoveContainer" containerID="27656c966a3db140a0eb8b0ad7691768feddbd0dc80f49a86b74f961ab9d6f0b"
Feb 26 22:20:22 crc kubenswrapper[4910]: E0226 22:20:22.627518 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27656c966a3db140a0eb8b0ad7691768feddbd0dc80f49a86b74f961ab9d6f0b\": container with ID starting with 27656c966a3db140a0eb8b0ad7691768feddbd0dc80f49a86b74f961ab9d6f0b not found: ID does not exist" containerID="27656c966a3db140a0eb8b0ad7691768feddbd0dc80f49a86b74f961ab9d6f0b"
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.627550 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27656c966a3db140a0eb8b0ad7691768feddbd0dc80f49a86b74f961ab9d6f0b"} err="failed to get container status \"27656c966a3db140a0eb8b0ad7691768feddbd0dc80f49a86b74f961ab9d6f0b\": rpc error: code = NotFound desc = could not find container \"27656c966a3db140a0eb8b0ad7691768feddbd0dc80f49a86b74f961ab9d6f0b\": container with ID starting with 27656c966a3db140a0eb8b0ad7691768feddbd0dc80f49a86b74f961ab9d6f0b not found: ID does not exist"
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.641746 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b754ac5-70ff-4b7c-9385-61ca219f9bb8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.641772 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b754ac5-70ff-4b7c-9385-61ca219f9bb8-config-data\") on node \"crc\" DevicePath \"\""
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.641781 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28xcr\" (UniqueName: \"kubernetes.io/projected/1b754ac5-70ff-4b7c-9385-61ca219f9bb8-kube-api-access-28xcr\") on node \"crc\" DevicePath \"\""
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.910365 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.924674 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.950043 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 26 22:20:22 crc kubenswrapper[4910]: E0226 22:20:22.950497 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b754ac5-70ff-4b7c-9385-61ca219f9bb8" containerName="nova-scheduler-scheduler"
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.950514 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b754ac5-70ff-4b7c-9385-61ca219f9bb8" containerName="nova-scheduler-scheduler"
Feb 26 22:20:22 crc kubenswrapper[4910]: E0226 22:20:22.950528 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98380662-1912-42a4-bf57-f5249871c687" containerName="nova-manage"
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.950534 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="98380662-1912-42a4-bf57-f5249871c687" containerName="nova-manage"
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.950748 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="98380662-1912-42a4-bf57-f5249871c687" containerName="nova-manage"
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.950767 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b754ac5-70ff-4b7c-9385-61ca219f9bb8" containerName="nova-scheduler-scheduler"
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.951635 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.953959 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 26 22:20:22 crc kubenswrapper[4910]: I0226 22:20:22.964554 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 26 22:20:23 crc kubenswrapper[4910]: I0226 22:20:23.048954 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b08468f8-6dfe-4514-9737-87db33cb927c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b08468f8-6dfe-4514-9737-87db33cb927c\") " pod="openstack/nova-scheduler-0"
Feb 26 22:20:23 crc kubenswrapper[4910]: I0226 22:20:23.049006 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gzmw\" (UniqueName: \"kubernetes.io/projected/b08468f8-6dfe-4514-9737-87db33cb927c-kube-api-access-5gzmw\") pod \"nova-scheduler-0\" (UID: \"b08468f8-6dfe-4514-9737-87db33cb927c\") " pod="openstack/nova-scheduler-0"
Feb 26 22:20:23 crc kubenswrapper[4910]: I0226 22:20:23.049045 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b08468f8-6dfe-4514-9737-87db33cb927c-config-data\") pod \"nova-scheduler-0\" (UID: \"b08468f8-6dfe-4514-9737-87db33cb927c\") " pod="openstack/nova-scheduler-0"
Feb 26 22:20:23 crc kubenswrapper[4910]: I0226 22:20:23.150689 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b08468f8-6dfe-4514-9737-87db33cb927c-config-data\") pod \"nova-scheduler-0\" (UID: \"b08468f8-6dfe-4514-9737-87db33cb927c\") " pod="openstack/nova-scheduler-0"
Feb 26 22:20:23 crc kubenswrapper[4910]: I0226 22:20:23.151144 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b08468f8-6dfe-4514-9737-87db33cb927c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b08468f8-6dfe-4514-9737-87db33cb927c\") " pod="openstack/nova-scheduler-0"
Feb 26 22:20:23 crc kubenswrapper[4910]: I0226 22:20:23.151202 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gzmw\" (UniqueName: \"kubernetes.io/projected/b08468f8-6dfe-4514-9737-87db33cb927c-kube-api-access-5gzmw\") pod \"nova-scheduler-0\" (UID: \"b08468f8-6dfe-4514-9737-87db33cb927c\") " pod="openstack/nova-scheduler-0"
Feb 26 22:20:23 crc kubenswrapper[4910]: I0226 22:20:23.158619 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b08468f8-6dfe-4514-9737-87db33cb927c-config-data\") pod \"nova-scheduler-0\" (UID: \"b08468f8-6dfe-4514-9737-87db33cb927c\") " pod="openstack/nova-scheduler-0"
Feb 26 22:20:23 crc kubenswrapper[4910]: I0226 22:20:23.173965 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b08468f8-6dfe-4514-9737-87db33cb927c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b08468f8-6dfe-4514-9737-87db33cb927c\") " pod="openstack/nova-scheduler-0"
Feb 26 22:20:23 crc kubenswrapper[4910]: I0226 22:20:23.182977 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gzmw\" (UniqueName: \"kubernetes.io/projected/b08468f8-6dfe-4514-9737-87db33cb927c-kube-api-access-5gzmw\") pod \"nova-scheduler-0\" (UID: \"b08468f8-6dfe-4514-9737-87db33cb927c\") " pod="openstack/nova-scheduler-0"
Feb 26 22:20:23 crc kubenswrapper[4910]: I0226 22:20:23.279366 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 26 22:20:23 crc kubenswrapper[4910]: I0226 22:20:23.756640 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 26 22:20:23 crc kubenswrapper[4910]: W0226 22:20:23.760329 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb08468f8_6dfe_4514_9737_87db33cb927c.slice/crio-610bac7402d04938666475960ad1c17f91b453c0e2f9a207c3a0024f0c70612f WatchSource:0}: Error finding container 610bac7402d04938666475960ad1c17f91b453c0e2f9a207c3a0024f0c70612f: Status 404 returned error can't find the container with id 610bac7402d04938666475960ad1c17f91b453c0e2f9a207c3a0024f0c70612f
Feb 26 22:20:23 crc kubenswrapper[4910]: I0226 22:20:23.922999 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b754ac5-70ff-4b7c-9385-61ca219f9bb8" path="/var/lib/kubelet/pods/1b754ac5-70ff-4b7c-9385-61ca219f9bb8/volumes"
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.331068 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.390106 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23eb0be8-742b-4fa8-acea-74668f976e0c-config-data\") pod \"23eb0be8-742b-4fa8-acea-74668f976e0c\" (UID: \"23eb0be8-742b-4fa8-acea-74668f976e0c\") "
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.390321 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzl6s\" (UniqueName: \"kubernetes.io/projected/23eb0be8-742b-4fa8-acea-74668f976e0c-kube-api-access-tzl6s\") pod \"23eb0be8-742b-4fa8-acea-74668f976e0c\" (UID: \"23eb0be8-742b-4fa8-acea-74668f976e0c\") "
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.390385 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/23eb0be8-742b-4fa8-acea-74668f976e0c-nova-metadata-tls-certs\") pod \"23eb0be8-742b-4fa8-acea-74668f976e0c\" (UID: \"23eb0be8-742b-4fa8-acea-74668f976e0c\") "
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.390590 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23eb0be8-742b-4fa8-acea-74668f976e0c-logs\") pod \"23eb0be8-742b-4fa8-acea-74668f976e0c\" (UID: \"23eb0be8-742b-4fa8-acea-74668f976e0c\") "
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.390652 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23eb0be8-742b-4fa8-acea-74668f976e0c-combined-ca-bundle\") pod \"23eb0be8-742b-4fa8-acea-74668f976e0c\" (UID: \"23eb0be8-742b-4fa8-acea-74668f976e0c\") "
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.391447 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23eb0be8-742b-4fa8-acea-74668f976e0c-logs" (OuterVolumeSpecName: "logs") pod "23eb0be8-742b-4fa8-acea-74668f976e0c" (UID: "23eb0be8-742b-4fa8-acea-74668f976e0c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.391852 4910 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23eb0be8-742b-4fa8-acea-74668f976e0c-logs\") on node \"crc\" DevicePath \"\""
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.400646 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23eb0be8-742b-4fa8-acea-74668f976e0c-kube-api-access-tzl6s" (OuterVolumeSpecName: "kube-api-access-tzl6s") pod "23eb0be8-742b-4fa8-acea-74668f976e0c" (UID: "23eb0be8-742b-4fa8-acea-74668f976e0c"). InnerVolumeSpecName "kube-api-access-tzl6s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.440440 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23eb0be8-742b-4fa8-acea-74668f976e0c-config-data" (OuterVolumeSpecName: "config-data") pod "23eb0be8-742b-4fa8-acea-74668f976e0c" (UID: "23eb0be8-742b-4fa8-acea-74668f976e0c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.447845 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23eb0be8-742b-4fa8-acea-74668f976e0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23eb0be8-742b-4fa8-acea-74668f976e0c" (UID: "23eb0be8-742b-4fa8-acea-74668f976e0c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.500713 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23eb0be8-742b-4fa8-acea-74668f976e0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.500745 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23eb0be8-742b-4fa8-acea-74668f976e0c-config-data\") on node \"crc\" DevicePath \"\""
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.500757 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzl6s\" (UniqueName: \"kubernetes.io/projected/23eb0be8-742b-4fa8-acea-74668f976e0c-kube-api-access-tzl6s\") on node \"crc\" DevicePath \"\""
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.528118 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23eb0be8-742b-4fa8-acea-74668f976e0c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "23eb0be8-742b-4fa8-acea-74668f976e0c" (UID: "23eb0be8-742b-4fa8-acea-74668f976e0c"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.588584 4910 generic.go:334] "Generic (PLEG): container finished" podID="23eb0be8-742b-4fa8-acea-74668f976e0c" containerID="ecb3c162daee7fdc2a5576102cae07381f25c8c96181ac4dd9cdcf5b9dc9428c" exitCode=0
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.588642 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"23eb0be8-742b-4fa8-acea-74668f976e0c","Type":"ContainerDied","Data":"ecb3c162daee7fdc2a5576102cae07381f25c8c96181ac4dd9cdcf5b9dc9428c"}
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.588669 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"23eb0be8-742b-4fa8-acea-74668f976e0c","Type":"ContainerDied","Data":"db7173fa0be8cf00ca887aaf564b28e89232c17782d2422cb5cbacc8b3cc4787"}
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.588687 4910 scope.go:117] "RemoveContainer" containerID="ecb3c162daee7fdc2a5576102cae07381f25c8c96181ac4dd9cdcf5b9dc9428c"
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.588833 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.591550 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b08468f8-6dfe-4514-9737-87db33cb927c","Type":"ContainerStarted","Data":"93f746a5a23eac8eda5e737f1bd6e7ada94de8412a76c96efe5227225053803d"}
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.591586 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b08468f8-6dfe-4514-9737-87db33cb927c","Type":"ContainerStarted","Data":"610bac7402d04938666475960ad1c17f91b453c0e2f9a207c3a0024f0c70612f"}
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.602348 4910 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/23eb0be8-742b-4fa8-acea-74668f976e0c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.611779 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.6117625589999998 podStartE2EDuration="2.611762559s" podCreationTimestamp="2026-02-26 22:20:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:20:24.60957236 +0000 UTC m=+1509.689062901" watchObservedRunningTime="2026-02-26 22:20:24.611762559 +0000 UTC m=+1509.691253100"
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.615939 4910 scope.go:117] "RemoveContainer" containerID="8944f6b02c5eab45baed548cd3695a36dbd1b3e0d3748882194817fa585e3405"
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.636237 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.643710 4910 scope.go:117] "RemoveContainer" containerID="ecb3c162daee7fdc2a5576102cae07381f25c8c96181ac4dd9cdcf5b9dc9428c"
Feb 26 22:20:24 crc kubenswrapper[4910]: E0226 22:20:24.645439 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecb3c162daee7fdc2a5576102cae07381f25c8c96181ac4dd9cdcf5b9dc9428c\": container with ID starting with ecb3c162daee7fdc2a5576102cae07381f25c8c96181ac4dd9cdcf5b9dc9428c not found: ID does not exist" containerID="ecb3c162daee7fdc2a5576102cae07381f25c8c96181ac4dd9cdcf5b9dc9428c"
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.645468 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecb3c162daee7fdc2a5576102cae07381f25c8c96181ac4dd9cdcf5b9dc9428c"} err="failed to get container status \"ecb3c162daee7fdc2a5576102cae07381f25c8c96181ac4dd9cdcf5b9dc9428c\": rpc error: code = NotFound desc = could not find container \"ecb3c162daee7fdc2a5576102cae07381f25c8c96181ac4dd9cdcf5b9dc9428c\": container with ID starting with ecb3c162daee7fdc2a5576102cae07381f25c8c96181ac4dd9cdcf5b9dc9428c not found: ID does not exist"
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.645488 4910 scope.go:117] "RemoveContainer" containerID="8944f6b02c5eab45baed548cd3695a36dbd1b3e0d3748882194817fa585e3405"
Feb 26 22:20:24 crc kubenswrapper[4910]: E0226 22:20:24.646115 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8944f6b02c5eab45baed548cd3695a36dbd1b3e0d3748882194817fa585e3405\": container with ID starting with 8944f6b02c5eab45baed548cd3695a36dbd1b3e0d3748882194817fa585e3405 not found: ID does not exist" containerID="8944f6b02c5eab45baed548cd3695a36dbd1b3e0d3748882194817fa585e3405"
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.646145 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8944f6b02c5eab45baed548cd3695a36dbd1b3e0d3748882194817fa585e3405"} err="failed to get container status \"8944f6b02c5eab45baed548cd3695a36dbd1b3e0d3748882194817fa585e3405\": rpc error: code = NotFound desc = could not find container \"8944f6b02c5eab45baed548cd3695a36dbd1b3e0d3748882194817fa585e3405\": container with ID starting with 8944f6b02c5eab45baed548cd3695a36dbd1b3e0d3748882194817fa585e3405 not found: ID does not exist"
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.648917 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.659852 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 26 22:20:24 crc kubenswrapper[4910]: E0226 22:20:24.660314 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23eb0be8-742b-4fa8-acea-74668f976e0c" containerName="nova-metadata-metadata"
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.660330 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="23eb0be8-742b-4fa8-acea-74668f976e0c" containerName="nova-metadata-metadata"
Feb 26 22:20:24 crc kubenswrapper[4910]: E0226 22:20:24.660370 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23eb0be8-742b-4fa8-acea-74668f976e0c" containerName="nova-metadata-log"
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.660377 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="23eb0be8-742b-4fa8-acea-74668f976e0c" containerName="nova-metadata-log"
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.660574 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="23eb0be8-742b-4fa8-acea-74668f976e0c" containerName="nova-metadata-metadata"
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.660599 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="23eb0be8-742b-4fa8-acea-74668f976e0c" containerName="nova-metadata-log"
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.661733 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.665506 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.665704 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.668989 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.703954 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dc3d3d0-abef-4259-9c7b-42726f571be3-config-data\") pod \"nova-metadata-0\" (UID: \"3dc3d3d0-abef-4259-9c7b-42726f571be3\") " pod="openstack/nova-metadata-0"
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.704005 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s9vk\" (UniqueName: \"kubernetes.io/projected/3dc3d3d0-abef-4259-9c7b-42726f571be3-kube-api-access-6s9vk\") pod \"nova-metadata-0\" (UID: \"3dc3d3d0-abef-4259-9c7b-42726f571be3\") " pod="openstack/nova-metadata-0"
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.704042 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3dc3d3d0-abef-4259-9c7b-42726f571be3-logs\") pod \"nova-metadata-0\" (UID: \"3dc3d3d0-abef-4259-9c7b-42726f571be3\") " pod="openstack/nova-metadata-0"
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.704113 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dc3d3d0-abef-4259-9c7b-42726f571be3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3dc3d3d0-abef-4259-9c7b-42726f571be3\") " pod="openstack/nova-metadata-0"
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.704142 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dc3d3d0-abef-4259-9c7b-42726f571be3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3dc3d3d0-abef-4259-9c7b-42726f571be3\") " pod="openstack/nova-metadata-0"
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.806092 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dc3d3d0-abef-4259-9c7b-42726f571be3-config-data\") pod \"nova-metadata-0\" (UID: \"3dc3d3d0-abef-4259-9c7b-42726f571be3\") " pod="openstack/nova-metadata-0"
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.806473 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s9vk\" (UniqueName: \"kubernetes.io/projected/3dc3d3d0-abef-4259-9c7b-42726f571be3-kube-api-access-6s9vk\") pod \"nova-metadata-0\" (UID: \"3dc3d3d0-abef-4259-9c7b-42726f571be3\") " pod="openstack/nova-metadata-0"
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.806528 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3dc3d3d0-abef-4259-9c7b-42726f571be3-logs\") pod \"nova-metadata-0\" (UID: \"3dc3d3d0-abef-4259-9c7b-42726f571be3\") " pod="openstack/nova-metadata-0"
Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.806586 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dc3d3d0-abef-4259-9c7b-42726f571be3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3dc3d3d0-abef-4259-9c7b-42726f571be3\") "
pod="openstack/nova-metadata-0" Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.806604 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dc3d3d0-abef-4259-9c7b-42726f571be3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3dc3d3d0-abef-4259-9c7b-42726f571be3\") " pod="openstack/nova-metadata-0" Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.806918 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3dc3d3d0-abef-4259-9c7b-42726f571be3-logs\") pod \"nova-metadata-0\" (UID: \"3dc3d3d0-abef-4259-9c7b-42726f571be3\") " pod="openstack/nova-metadata-0" Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.810624 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dc3d3d0-abef-4259-9c7b-42726f571be3-config-data\") pod \"nova-metadata-0\" (UID: \"3dc3d3d0-abef-4259-9c7b-42726f571be3\") " pod="openstack/nova-metadata-0" Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.810684 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dc3d3d0-abef-4259-9c7b-42726f571be3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3dc3d3d0-abef-4259-9c7b-42726f571be3\") " pod="openstack/nova-metadata-0" Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.810879 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dc3d3d0-abef-4259-9c7b-42726f571be3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3dc3d3d0-abef-4259-9c7b-42726f571be3\") " pod="openstack/nova-metadata-0" Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.821468 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s9vk\" (UniqueName: 
\"kubernetes.io/projected/3dc3d3d0-abef-4259-9c7b-42726f571be3-kube-api-access-6s9vk\") pod \"nova-metadata-0\" (UID: \"3dc3d3d0-abef-4259-9c7b-42726f571be3\") " pod="openstack/nova-metadata-0" Feb 26 22:20:24 crc kubenswrapper[4910]: I0226 22:20:24.995918 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 22:20:25 crc kubenswrapper[4910]: I0226 22:20:25.657145 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 22:20:25 crc kubenswrapper[4910]: I0226 22:20:25.727558 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:20:25 crc kubenswrapper[4910]: I0226 22:20:25.727648 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:20:25 crc kubenswrapper[4910]: I0226 22:20:25.915103 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23eb0be8-742b-4fa8-acea-74668f976e0c" path="/var/lib/kubelet/pods/23eb0be8-742b-4fa8-acea-74668f976e0c/volumes" Feb 26 22:20:26 crc kubenswrapper[4910]: I0226 22:20:26.622387 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3dc3d3d0-abef-4259-9c7b-42726f571be3","Type":"ContainerStarted","Data":"1ca56ff65cca1503d0e3a3fcf6ff89456634559bd5aa539f6ba86afb0f7ae18d"} Feb 26 22:20:26 crc kubenswrapper[4910]: I0226 22:20:26.622664 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"3dc3d3d0-abef-4259-9c7b-42726f571be3","Type":"ContainerStarted","Data":"1139376d4d10578a8904c5f9d498d9062a79c204aa923c3bbbe662ad0eae9309"} Feb 26 22:20:26 crc kubenswrapper[4910]: I0226 22:20:26.622675 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3dc3d3d0-abef-4259-9c7b-42726f571be3","Type":"ContainerStarted","Data":"f6920eb30cbb6e8251149430ce9227ae45f749e0c5c535c70d67ebac36c7cd72"} Feb 26 22:20:26 crc kubenswrapper[4910]: I0226 22:20:26.655875 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.655853154 podStartE2EDuration="2.655853154s" podCreationTimestamp="2026-02-26 22:20:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:20:26.645231825 +0000 UTC m=+1511.724722386" watchObservedRunningTime="2026-02-26 22:20:26.655853154 +0000 UTC m=+1511.735343695" Feb 26 22:20:27 crc kubenswrapper[4910]: I0226 22:20:27.640951 4910 generic.go:334] "Generic (PLEG): container finished" podID="fa71cf48-46a0-4082-9897-f03175b0abf7" containerID="e512b947205e443c57e2af9cb93d09eace1745f01093cf8b2a2b1fd1cb21a970" exitCode=0 Feb 26 22:20:27 crc kubenswrapper[4910]: I0226 22:20:27.641051 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qrpd" event={"ID":"fa71cf48-46a0-4082-9897-f03175b0abf7","Type":"ContainerDied","Data":"e512b947205e443c57e2af9cb93d09eace1745f01093cf8b2a2b1fd1cb21a970"} Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.279918 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.561935 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.668448 4910 generic.go:334] "Generic (PLEG): container finished" podID="3d37d540-f061-4881-9a00-3b0e0cfa1d47" containerID="9e7bf357106900433eabef247cd4d8846785c6e3d5e029b2f66dc976ada01a9a" exitCode=0 Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.668639 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d37d540-f061-4881-9a00-3b0e0cfa1d47","Type":"ContainerDied","Data":"9e7bf357106900433eabef247cd4d8846785c6e3d5e029b2f66dc976ada01a9a"} Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.668723 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d37d540-f061-4881-9a00-3b0e0cfa1d47","Type":"ContainerDied","Data":"509ff403703918a3590f5bdf48e5fe953b683305800948e29781acd59bf707d5"} Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.668738 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.668753 4910 scope.go:117] "RemoveContainer" containerID="9e7bf357106900433eabef247cd4d8846785c6e3d5e029b2f66dc976ada01a9a" Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.671651 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qrpd" event={"ID":"fa71cf48-46a0-4082-9897-f03175b0abf7","Type":"ContainerStarted","Data":"da08dc4597390a7e42de3ef1c47ad7f6ce3950c372b69686d462639c444e2064"} Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.702438 4910 scope.go:117] "RemoveContainer" containerID="d3f71a95557e1c6eb6c3ec24820f95e80020e7d9a140b2505ec1a15e20bebc5c" Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.703718 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5qrpd" podStartSLOduration=3.166278771 podStartE2EDuration="9.703694272s" podCreationTimestamp="2026-02-26 22:20:19 +0000 UTC" firstStartedPulling="2026-02-26 22:20:21.536591535 +0000 UTC m=+1506.616082076" lastFinishedPulling="2026-02-26 22:20:28.074007016 +0000 UTC m=+1513.153497577" observedRunningTime="2026-02-26 22:20:28.692544038 +0000 UTC m=+1513.772034589" watchObservedRunningTime="2026-02-26 22:20:28.703694272 +0000 UTC m=+1513.783184823" Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.707645 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d37d540-f061-4881-9a00-3b0e0cfa1d47-logs\") pod \"3d37d540-f061-4881-9a00-3b0e0cfa1d47\" (UID: \"3d37d540-f061-4881-9a00-3b0e0cfa1d47\") " Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.707713 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxqc9\" (UniqueName: \"kubernetes.io/projected/3d37d540-f061-4881-9a00-3b0e0cfa1d47-kube-api-access-zxqc9\") pod 
\"3d37d540-f061-4881-9a00-3b0e0cfa1d47\" (UID: \"3d37d540-f061-4881-9a00-3b0e0cfa1d47\") " Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.707780 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d37d540-f061-4881-9a00-3b0e0cfa1d47-public-tls-certs\") pod \"3d37d540-f061-4881-9a00-3b0e0cfa1d47\" (UID: \"3d37d540-f061-4881-9a00-3b0e0cfa1d47\") " Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.707833 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d37d540-f061-4881-9a00-3b0e0cfa1d47-config-data\") pod \"3d37d540-f061-4881-9a00-3b0e0cfa1d47\" (UID: \"3d37d540-f061-4881-9a00-3b0e0cfa1d47\") " Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.707965 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d37d540-f061-4881-9a00-3b0e0cfa1d47-internal-tls-certs\") pod \"3d37d540-f061-4881-9a00-3b0e0cfa1d47\" (UID: \"3d37d540-f061-4881-9a00-3b0e0cfa1d47\") " Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.708058 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d37d540-f061-4881-9a00-3b0e0cfa1d47-combined-ca-bundle\") pod \"3d37d540-f061-4881-9a00-3b0e0cfa1d47\" (UID: \"3d37d540-f061-4881-9a00-3b0e0cfa1d47\") " Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.708183 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d37d540-f061-4881-9a00-3b0e0cfa1d47-logs" (OuterVolumeSpecName: "logs") pod "3d37d540-f061-4881-9a00-3b0e0cfa1d47" (UID: "3d37d540-f061-4881-9a00-3b0e0cfa1d47"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.708973 4910 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d37d540-f061-4881-9a00-3b0e0cfa1d47-logs\") on node \"crc\" DevicePath \"\"" Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.726541 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d37d540-f061-4881-9a00-3b0e0cfa1d47-kube-api-access-zxqc9" (OuterVolumeSpecName: "kube-api-access-zxqc9") pod "3d37d540-f061-4881-9a00-3b0e0cfa1d47" (UID: "3d37d540-f061-4881-9a00-3b0e0cfa1d47"). InnerVolumeSpecName "kube-api-access-zxqc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.737226 4910 scope.go:117] "RemoveContainer" containerID="9e7bf357106900433eabef247cd4d8846785c6e3d5e029b2f66dc976ada01a9a" Feb 26 22:20:28 crc kubenswrapper[4910]: E0226 22:20:28.737808 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e7bf357106900433eabef247cd4d8846785c6e3d5e029b2f66dc976ada01a9a\": container with ID starting with 9e7bf357106900433eabef247cd4d8846785c6e3d5e029b2f66dc976ada01a9a not found: ID does not exist" containerID="9e7bf357106900433eabef247cd4d8846785c6e3d5e029b2f66dc976ada01a9a" Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.737932 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e7bf357106900433eabef247cd4d8846785c6e3d5e029b2f66dc976ada01a9a"} err="failed to get container status \"9e7bf357106900433eabef247cd4d8846785c6e3d5e029b2f66dc976ada01a9a\": rpc error: code = NotFound desc = could not find container \"9e7bf357106900433eabef247cd4d8846785c6e3d5e029b2f66dc976ada01a9a\": container with ID starting with 9e7bf357106900433eabef247cd4d8846785c6e3d5e029b2f66dc976ada01a9a not found: ID does not 
exist" Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.738029 4910 scope.go:117] "RemoveContainer" containerID="d3f71a95557e1c6eb6c3ec24820f95e80020e7d9a140b2505ec1a15e20bebc5c" Feb 26 22:20:28 crc kubenswrapper[4910]: E0226 22:20:28.738399 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3f71a95557e1c6eb6c3ec24820f95e80020e7d9a140b2505ec1a15e20bebc5c\": container with ID starting with d3f71a95557e1c6eb6c3ec24820f95e80020e7d9a140b2505ec1a15e20bebc5c not found: ID does not exist" containerID="d3f71a95557e1c6eb6c3ec24820f95e80020e7d9a140b2505ec1a15e20bebc5c" Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.738691 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3f71a95557e1c6eb6c3ec24820f95e80020e7d9a140b2505ec1a15e20bebc5c"} err="failed to get container status \"d3f71a95557e1c6eb6c3ec24820f95e80020e7d9a140b2505ec1a15e20bebc5c\": rpc error: code = NotFound desc = could not find container \"d3f71a95557e1c6eb6c3ec24820f95e80020e7d9a140b2505ec1a15e20bebc5c\": container with ID starting with d3f71a95557e1c6eb6c3ec24820f95e80020e7d9a140b2505ec1a15e20bebc5c not found: ID does not exist" Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.746074 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d37d540-f061-4881-9a00-3b0e0cfa1d47-config-data" (OuterVolumeSpecName: "config-data") pod "3d37d540-f061-4881-9a00-3b0e0cfa1d47" (UID: "3d37d540-f061-4881-9a00-3b0e0cfa1d47"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.760833 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d37d540-f061-4881-9a00-3b0e0cfa1d47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d37d540-f061-4881-9a00-3b0e0cfa1d47" (UID: "3d37d540-f061-4881-9a00-3b0e0cfa1d47"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.766332 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d37d540-f061-4881-9a00-3b0e0cfa1d47-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3d37d540-f061-4881-9a00-3b0e0cfa1d47" (UID: "3d37d540-f061-4881-9a00-3b0e0cfa1d47"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.784233 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d37d540-f061-4881-9a00-3b0e0cfa1d47-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3d37d540-f061-4881-9a00-3b0e0cfa1d47" (UID: "3d37d540-f061-4881-9a00-3b0e0cfa1d47"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.810729 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d37d540-f061-4881-9a00-3b0e0cfa1d47-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.810760 4910 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d37d540-f061-4881-9a00-3b0e0cfa1d47-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.810774 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d37d540-f061-4881-9a00-3b0e0cfa1d47-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.810783 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxqc9\" (UniqueName: \"kubernetes.io/projected/3d37d540-f061-4881-9a00-3b0e0cfa1d47-kube-api-access-zxqc9\") on node \"crc\" DevicePath \"\"" Feb 26 22:20:28 crc kubenswrapper[4910]: I0226 22:20:28.810792 4910 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d37d540-f061-4881-9a00-3b0e0cfa1d47-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.030469 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.041948 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.053582 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 26 22:20:29 crc kubenswrapper[4910]: E0226 22:20:29.054269 4910 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3d37d540-f061-4881-9a00-3b0e0cfa1d47" containerName="nova-api-api" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.054355 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d37d540-f061-4881-9a00-3b0e0cfa1d47" containerName="nova-api-api" Feb 26 22:20:29 crc kubenswrapper[4910]: E0226 22:20:29.054506 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d37d540-f061-4881-9a00-3b0e0cfa1d47" containerName="nova-api-log" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.054578 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d37d540-f061-4881-9a00-3b0e0cfa1d47" containerName="nova-api-log" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.054900 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d37d540-f061-4881-9a00-3b0e0cfa1d47" containerName="nova-api-log" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.055000 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d37d540-f061-4881-9a00-3b0e0cfa1d47" containerName="nova-api-api" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.056354 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.058949 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.059058 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.059889 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.078399 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.224145 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0b80f69-d40e-460e-b205-ce0125d3b89b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b0b80f69-d40e-460e-b205-ce0125d3b89b\") " pod="openstack/nova-api-0" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.224273 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b80f69-d40e-460e-b205-ce0125d3b89b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b0b80f69-d40e-460e-b205-ce0125d3b89b\") " pod="openstack/nova-api-0" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.224321 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0b80f69-d40e-460e-b205-ce0125d3b89b-config-data\") pod \"nova-api-0\" (UID: \"b0b80f69-d40e-460e-b205-ce0125d3b89b\") " pod="openstack/nova-api-0" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.224405 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b0b80f69-d40e-460e-b205-ce0125d3b89b-logs\") pod \"nova-api-0\" (UID: \"b0b80f69-d40e-460e-b205-ce0125d3b89b\") " pod="openstack/nova-api-0" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.224452 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0b80f69-d40e-460e-b205-ce0125d3b89b-public-tls-certs\") pod \"nova-api-0\" (UID: \"b0b80f69-d40e-460e-b205-ce0125d3b89b\") " pod="openstack/nova-api-0" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.224487 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8stzw\" (UniqueName: \"kubernetes.io/projected/b0b80f69-d40e-460e-b205-ce0125d3b89b-kube-api-access-8stzw\") pod \"nova-api-0\" (UID: \"b0b80f69-d40e-460e-b205-ce0125d3b89b\") " pod="openstack/nova-api-0" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.326366 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b80f69-d40e-460e-b205-ce0125d3b89b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b0b80f69-d40e-460e-b205-ce0125d3b89b\") " pod="openstack/nova-api-0" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.326706 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0b80f69-d40e-460e-b205-ce0125d3b89b-config-data\") pod \"nova-api-0\" (UID: \"b0b80f69-d40e-460e-b205-ce0125d3b89b\") " pod="openstack/nova-api-0" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.327245 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0b80f69-d40e-460e-b205-ce0125d3b89b-logs\") pod \"nova-api-0\" (UID: \"b0b80f69-d40e-460e-b205-ce0125d3b89b\") " pod="openstack/nova-api-0" Feb 26 22:20:29 crc 
kubenswrapper[4910]: I0226 22:20:29.327398 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0b80f69-d40e-460e-b205-ce0125d3b89b-public-tls-certs\") pod \"nova-api-0\" (UID: \"b0b80f69-d40e-460e-b205-ce0125d3b89b\") " pod="openstack/nova-api-0" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.327497 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8stzw\" (UniqueName: \"kubernetes.io/projected/b0b80f69-d40e-460e-b205-ce0125d3b89b-kube-api-access-8stzw\") pod \"nova-api-0\" (UID: \"b0b80f69-d40e-460e-b205-ce0125d3b89b\") " pod="openstack/nova-api-0" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.327867 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0b80f69-d40e-460e-b205-ce0125d3b89b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b0b80f69-d40e-460e-b205-ce0125d3b89b\") " pod="openstack/nova-api-0" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.330463 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0b80f69-d40e-460e-b205-ce0125d3b89b-logs\") pod \"nova-api-0\" (UID: \"b0b80f69-d40e-460e-b205-ce0125d3b89b\") " pod="openstack/nova-api-0" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.331236 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0b80f69-d40e-460e-b205-ce0125d3b89b-public-tls-certs\") pod \"nova-api-0\" (UID: \"b0b80f69-d40e-460e-b205-ce0125d3b89b\") " pod="openstack/nova-api-0" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.331360 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0b80f69-d40e-460e-b205-ce0125d3b89b-config-data\") pod \"nova-api-0\" (UID: 
\"b0b80f69-d40e-460e-b205-ce0125d3b89b\") " pod="openstack/nova-api-0" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.331734 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0b80f69-d40e-460e-b205-ce0125d3b89b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b0b80f69-d40e-460e-b205-ce0125d3b89b\") " pod="openstack/nova-api-0" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.332044 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b80f69-d40e-460e-b205-ce0125d3b89b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b0b80f69-d40e-460e-b205-ce0125d3b89b\") " pod="openstack/nova-api-0" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.349059 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8stzw\" (UniqueName: \"kubernetes.io/projected/b0b80f69-d40e-460e-b205-ce0125d3b89b-kube-api-access-8stzw\") pod \"nova-api-0\" (UID: \"b0b80f69-d40e-460e-b205-ce0125d3b89b\") " pod="openstack/nova-api-0" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.426623 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.882879 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 22:20:29 crc kubenswrapper[4910]: W0226 22:20:29.890107 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0b80f69_d40e_460e_b205_ce0125d3b89b.slice/crio-5fddd0ca64dc2927c33d673d05e62edf9fd6ce122c43dec1bc21f37b5c01552f WatchSource:0}: Error finding container 5fddd0ca64dc2927c33d673d05e62edf9fd6ce122c43dec1bc21f37b5c01552f: Status 404 returned error can't find the container with id 5fddd0ca64dc2927c33d673d05e62edf9fd6ce122c43dec1bc21f37b5c01552f Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.932770 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d37d540-f061-4881-9a00-3b0e0cfa1d47" path="/var/lib/kubelet/pods/3d37d540-f061-4881-9a00-3b0e0cfa1d47/volumes" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.996235 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 22:20:29 crc kubenswrapper[4910]: I0226 22:20:29.997234 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 22:20:30 crc kubenswrapper[4910]: I0226 22:20:30.129476 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5qrpd" Feb 26 22:20:30 crc kubenswrapper[4910]: I0226 22:20:30.129519 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5qrpd" Feb 26 22:20:30 crc kubenswrapper[4910]: I0226 22:20:30.214063 4910 scope.go:117] "RemoveContainer" containerID="7b8430bef34af7693be6574a606502a50becaac975bd436b68fae844775e5231" Feb 26 22:20:30 crc kubenswrapper[4910]: I0226 22:20:30.731513 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"b0b80f69-d40e-460e-b205-ce0125d3b89b","Type":"ContainerStarted","Data":"b45689a6c454d3882fe908e9db37a6fcce23ae714b50619e877457d750e5db10"} Feb 26 22:20:30 crc kubenswrapper[4910]: I0226 22:20:30.731796 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b0b80f69-d40e-460e-b205-ce0125d3b89b","Type":"ContainerStarted","Data":"791bf4c66aa93f22a31d0bb1060c7d5135c0a5dc29f6d17138b6a9389935c561"} Feb 26 22:20:30 crc kubenswrapper[4910]: I0226 22:20:30.731806 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b0b80f69-d40e-460e-b205-ce0125d3b89b","Type":"ContainerStarted","Data":"5fddd0ca64dc2927c33d673d05e62edf9fd6ce122c43dec1bc21f37b5c01552f"} Feb 26 22:20:30 crc kubenswrapper[4910]: I0226 22:20:30.749993 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.749978407 podStartE2EDuration="1.749978407s" podCreationTimestamp="2026-02-26 22:20:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:20:30.74604478 +0000 UTC m=+1515.825535321" watchObservedRunningTime="2026-02-26 22:20:30.749978407 +0000 UTC m=+1515.829468938" Feb 26 22:20:31 crc kubenswrapper[4910]: I0226 22:20:31.182436 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5qrpd" podUID="fa71cf48-46a0-4082-9897-f03175b0abf7" containerName="registry-server" probeResult="failure" output=< Feb 26 22:20:31 crc kubenswrapper[4910]: timeout: failed to connect service ":50051" within 1s Feb 26 22:20:31 crc kubenswrapper[4910]: > Feb 26 22:20:33 crc kubenswrapper[4910]: I0226 22:20:33.280727 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 26 22:20:33 crc kubenswrapper[4910]: I0226 22:20:33.321033 4910 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 26 22:20:33 crc kubenswrapper[4910]: I0226 22:20:33.837092 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 26 22:20:34 crc kubenswrapper[4910]: I0226 22:20:34.996167 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 26 22:20:34 crc kubenswrapper[4910]: I0226 22:20:34.996211 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 26 22:20:36 crc kubenswrapper[4910]: I0226 22:20:36.007307 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3dc3d3d0-abef-4259-9c7b-42726f571be3" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.242:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 22:20:36 crc kubenswrapper[4910]: I0226 22:20:36.007328 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3dc3d3d0-abef-4259-9c7b-42726f571be3" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.242:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 22:20:39 crc kubenswrapper[4910]: I0226 22:20:39.427473 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 22:20:39 crc kubenswrapper[4910]: I0226 22:20:39.427882 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 22:20:40 crc kubenswrapper[4910]: I0226 22:20:40.442650 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b0b80f69-d40e-460e-b205-ce0125d3b89b" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.243:8774/\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" Feb 26 22:20:40 crc kubenswrapper[4910]: I0226 22:20:40.442722 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b0b80f69-d40e-460e-b205-ce0125d3b89b" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.243:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 22:20:41 crc kubenswrapper[4910]: I0226 22:20:41.177149 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5qrpd" podUID="fa71cf48-46a0-4082-9897-f03175b0abf7" containerName="registry-server" probeResult="failure" output=< Feb 26 22:20:41 crc kubenswrapper[4910]: timeout: failed to connect service ":50051" within 1s Feb 26 22:20:41 crc kubenswrapper[4910]: > Feb 26 22:20:41 crc kubenswrapper[4910]: I0226 22:20:41.875300 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 26 22:20:45 crc kubenswrapper[4910]: I0226 22:20:45.003590 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 26 22:20:45 crc kubenswrapper[4910]: I0226 22:20:45.005421 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 26 22:20:45 crc kubenswrapper[4910]: I0226 22:20:45.009967 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 26 22:20:45 crc kubenswrapper[4910]: I0226 22:20:45.938749 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 26 22:20:49 crc kubenswrapper[4910]: I0226 22:20:49.434797 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 26 22:20:49 crc kubenswrapper[4910]: I0226 22:20:49.435516 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 
26 22:20:49 crc kubenswrapper[4910]: I0226 22:20:49.436759 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 26 22:20:49 crc kubenswrapper[4910]: I0226 22:20:49.442481 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 26 22:20:49 crc kubenswrapper[4910]: I0226 22:20:49.971762 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 26 22:20:49 crc kubenswrapper[4910]: I0226 22:20:49.984032 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 26 22:20:51 crc kubenswrapper[4910]: I0226 22:20:51.182385 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5qrpd" podUID="fa71cf48-46a0-4082-9897-f03175b0abf7" containerName="registry-server" probeResult="failure" output=< Feb 26 22:20:51 crc kubenswrapper[4910]: timeout: failed to connect service ":50051" within 1s Feb 26 22:20:51 crc kubenswrapper[4910]: > Feb 26 22:20:55 crc kubenswrapper[4910]: I0226 22:20:55.727375 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:20:55 crc kubenswrapper[4910]: I0226 22:20:55.728050 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:20:59 crc kubenswrapper[4910]: I0226 22:20:59.541296 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-sync-68pwg"] Feb 26 22:20:59 crc 
kubenswrapper[4910]: I0226 22:20:59.556866 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-sync-68pwg"] Feb 26 22:20:59 crc kubenswrapper[4910]: I0226 22:20:59.602868 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-htpl5"] Feb 26 22:20:59 crc kubenswrapper[4910]: I0226 22:20:59.604694 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-htpl5" Feb 26 22:20:59 crc kubenswrapper[4910]: I0226 22:20:59.610601 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 26 22:20:59 crc kubenswrapper[4910]: I0226 22:20:59.615414 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-htpl5"] Feb 26 22:20:59 crc kubenswrapper[4910]: I0226 22:20:59.716287 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-scripts\") pod \"cloudkitty-db-sync-htpl5\" (UID: \"5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97\") " pod="openstack/cloudkitty-db-sync-htpl5" Feb 26 22:20:59 crc kubenswrapper[4910]: I0226 22:20:59.716341 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-combined-ca-bundle\") pod \"cloudkitty-db-sync-htpl5\" (UID: \"5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97\") " pod="openstack/cloudkitty-db-sync-htpl5" Feb 26 22:20:59 crc kubenswrapper[4910]: I0226 22:20:59.716475 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-config-data\") pod \"cloudkitty-db-sync-htpl5\" (UID: \"5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97\") " pod="openstack/cloudkitty-db-sync-htpl5" Feb 26 22:20:59 crc 
kubenswrapper[4910]: I0226 22:20:59.716495 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w9tk\" (UniqueName: \"kubernetes.io/projected/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-kube-api-access-9w9tk\") pod \"cloudkitty-db-sync-htpl5\" (UID: \"5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97\") " pod="openstack/cloudkitty-db-sync-htpl5" Feb 26 22:20:59 crc kubenswrapper[4910]: I0226 22:20:59.716655 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-certs\") pod \"cloudkitty-db-sync-htpl5\" (UID: \"5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97\") " pod="openstack/cloudkitty-db-sync-htpl5" Feb 26 22:20:59 crc kubenswrapper[4910]: I0226 22:20:59.819644 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-scripts\") pod \"cloudkitty-db-sync-htpl5\" (UID: \"5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97\") " pod="openstack/cloudkitty-db-sync-htpl5" Feb 26 22:20:59 crc kubenswrapper[4910]: I0226 22:20:59.819753 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-combined-ca-bundle\") pod \"cloudkitty-db-sync-htpl5\" (UID: \"5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97\") " pod="openstack/cloudkitty-db-sync-htpl5" Feb 26 22:20:59 crc kubenswrapper[4910]: I0226 22:20:59.819970 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-config-data\") pod \"cloudkitty-db-sync-htpl5\" (UID: \"5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97\") " pod="openstack/cloudkitty-db-sync-htpl5" Feb 26 22:20:59 crc kubenswrapper[4910]: I0226 22:20:59.820026 4910 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9w9tk\" (UniqueName: \"kubernetes.io/projected/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-kube-api-access-9w9tk\") pod \"cloudkitty-db-sync-htpl5\" (UID: \"5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97\") " pod="openstack/cloudkitty-db-sync-htpl5" Feb 26 22:20:59 crc kubenswrapper[4910]: I0226 22:20:59.820135 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-certs\") pod \"cloudkitty-db-sync-htpl5\" (UID: \"5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97\") " pod="openstack/cloudkitty-db-sync-htpl5" Feb 26 22:20:59 crc kubenswrapper[4910]: I0226 22:20:59.829761 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-certs\") pod \"cloudkitty-db-sync-htpl5\" (UID: \"5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97\") " pod="openstack/cloudkitty-db-sync-htpl5" Feb 26 22:20:59 crc kubenswrapper[4910]: I0226 22:20:59.830220 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-scripts\") pod \"cloudkitty-db-sync-htpl5\" (UID: \"5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97\") " pod="openstack/cloudkitty-db-sync-htpl5" Feb 26 22:20:59 crc kubenswrapper[4910]: I0226 22:20:59.833431 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-config-data\") pod \"cloudkitty-db-sync-htpl5\" (UID: \"5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97\") " pod="openstack/cloudkitty-db-sync-htpl5" Feb 26 22:20:59 crc kubenswrapper[4910]: I0226 22:20:59.838234 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-combined-ca-bundle\") pod 
\"cloudkitty-db-sync-htpl5\" (UID: \"5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97\") " pod="openstack/cloudkitty-db-sync-htpl5" Feb 26 22:20:59 crc kubenswrapper[4910]: I0226 22:20:59.851385 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w9tk\" (UniqueName: \"kubernetes.io/projected/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-kube-api-access-9w9tk\") pod \"cloudkitty-db-sync-htpl5\" (UID: \"5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97\") " pod="openstack/cloudkitty-db-sync-htpl5" Feb 26 22:20:59 crc kubenswrapper[4910]: I0226 22:20:59.920933 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="865f4842-373e-4bc9-98cd-4ceabb03b9f9" path="/var/lib/kubelet/pods/865f4842-373e-4bc9-98cd-4ceabb03b9f9/volumes" Feb 26 22:20:59 crc kubenswrapper[4910]: I0226 22:20:59.965598 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-htpl5" Feb 26 22:21:00 crc kubenswrapper[4910]: I0226 22:21:00.186956 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5qrpd" Feb 26 22:21:00 crc kubenswrapper[4910]: I0226 22:21:00.264487 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5qrpd" Feb 26 22:21:00 crc kubenswrapper[4910]: I0226 22:21:00.435013 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5qrpd"] Feb 26 22:21:00 crc kubenswrapper[4910]: I0226 22:21:00.503433 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-htpl5"] Feb 26 22:21:01 crc kubenswrapper[4910]: I0226 22:21:01.100624 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-htpl5" event={"ID":"5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97","Type":"ContainerStarted","Data":"cccf7e8913d66118eb539b24e2fb0e764ceb163bc1ca4dd517e793dd125b0b54"} Feb 26 22:21:01 crc kubenswrapper[4910]: 
I0226 22:21:01.100922 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-htpl5" event={"ID":"5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97","Type":"ContainerStarted","Data":"c3436e2b0107cae7779256d354967c0083baba36bff259bafe42f03667064297"} Feb 26 22:21:01 crc kubenswrapper[4910]: I0226 22:21:01.125361 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-sync-htpl5" podStartSLOduration=1.977528048 podStartE2EDuration="2.125320787s" podCreationTimestamp="2026-02-26 22:20:59 +0000 UTC" firstStartedPulling="2026-02-26 22:21:00.51320114 +0000 UTC m=+1545.592691681" lastFinishedPulling="2026-02-26 22:21:00.660993879 +0000 UTC m=+1545.740484420" observedRunningTime="2026-02-26 22:21:01.119667043 +0000 UTC m=+1546.199157574" watchObservedRunningTime="2026-02-26 22:21:01.125320787 +0000 UTC m=+1546.204811328" Feb 26 22:21:01 crc kubenswrapper[4910]: I0226 22:21:01.155080 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 22:21:01 crc kubenswrapper[4910]: I0226 22:21:01.252953 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 22:21:01 crc kubenswrapper[4910]: I0226 22:21:01.801532 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:21:01 crc kubenswrapper[4910]: I0226 22:21:01.802036 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c2d9a088-c1a8-4505-a63c-cf1462097e73" containerName="ceilometer-central-agent" containerID="cri-o://7b4497e1cae78bcd112c80adeb8009d6795edddb74467529277c4c0be6d3bd78" gracePeriod=30 Feb 26 22:21:01 crc kubenswrapper[4910]: I0226 22:21:01.802092 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c2d9a088-c1a8-4505-a63c-cf1462097e73" containerName="ceilometer-notification-agent" 
containerID="cri-o://3f3313f21ba7b1419192c51e1ff097ed22102818a53e72da4e5824e2c12e9ee2" gracePeriod=30 Feb 26 22:21:01 crc kubenswrapper[4910]: I0226 22:21:01.802092 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c2d9a088-c1a8-4505-a63c-cf1462097e73" containerName="sg-core" containerID="cri-o://e50e6df3495d2588053f2f9d6a3c099faa493f97700748f4a9873c53c3d98eaa" gracePeriod=30 Feb 26 22:21:01 crc kubenswrapper[4910]: I0226 22:21:01.802149 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c2d9a088-c1a8-4505-a63c-cf1462097e73" containerName="proxy-httpd" containerID="cri-o://5c6d76b86a8687a69686b0953f686edf87fc87657476233ab812ab41e040e504" gracePeriod=30 Feb 26 22:21:02 crc kubenswrapper[4910]: I0226 22:21:02.113287 4910 generic.go:334] "Generic (PLEG): container finished" podID="c2d9a088-c1a8-4505-a63c-cf1462097e73" containerID="5c6d76b86a8687a69686b0953f686edf87fc87657476233ab812ab41e040e504" exitCode=0 Feb 26 22:21:02 crc kubenswrapper[4910]: I0226 22:21:02.114233 4910 generic.go:334] "Generic (PLEG): container finished" podID="c2d9a088-c1a8-4505-a63c-cf1462097e73" containerID="e50e6df3495d2588053f2f9d6a3c099faa493f97700748f4a9873c53c3d98eaa" exitCode=2 Feb 26 22:21:02 crc kubenswrapper[4910]: I0226 22:21:02.113435 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2d9a088-c1a8-4505-a63c-cf1462097e73","Type":"ContainerDied","Data":"5c6d76b86a8687a69686b0953f686edf87fc87657476233ab812ab41e040e504"} Feb 26 22:21:02 crc kubenswrapper[4910]: I0226 22:21:02.114547 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2d9a088-c1a8-4505-a63c-cf1462097e73","Type":"ContainerDied","Data":"e50e6df3495d2588053f2f9d6a3c099faa493f97700748f4a9873c53c3d98eaa"} Feb 26 22:21:02 crc kubenswrapper[4910]: I0226 22:21:02.114702 4910 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/redhat-operators-5qrpd" podUID="fa71cf48-46a0-4082-9897-f03175b0abf7" containerName="registry-server" containerID="cri-o://da08dc4597390a7e42de3ef1c47ad7f6ce3950c372b69686d462639c444e2064" gracePeriod=2 Feb 26 22:21:02 crc kubenswrapper[4910]: I0226 22:21:02.780103 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5qrpd" Feb 26 22:21:02 crc kubenswrapper[4910]: I0226 22:21:02.896693 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa71cf48-46a0-4082-9897-f03175b0abf7-catalog-content\") pod \"fa71cf48-46a0-4082-9897-f03175b0abf7\" (UID: \"fa71cf48-46a0-4082-9897-f03175b0abf7\") " Feb 26 22:21:02 crc kubenswrapper[4910]: I0226 22:21:02.907424 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa71cf48-46a0-4082-9897-f03175b0abf7-utilities\") pod \"fa71cf48-46a0-4082-9897-f03175b0abf7\" (UID: \"fa71cf48-46a0-4082-9897-f03175b0abf7\") " Feb 26 22:21:02 crc kubenswrapper[4910]: I0226 22:21:02.907493 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88vj7\" (UniqueName: \"kubernetes.io/projected/fa71cf48-46a0-4082-9897-f03175b0abf7-kube-api-access-88vj7\") pod \"fa71cf48-46a0-4082-9897-f03175b0abf7\" (UID: \"fa71cf48-46a0-4082-9897-f03175b0abf7\") " Feb 26 22:21:02 crc kubenswrapper[4910]: I0226 22:21:02.908003 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa71cf48-46a0-4082-9897-f03175b0abf7-utilities" (OuterVolumeSpecName: "utilities") pod "fa71cf48-46a0-4082-9897-f03175b0abf7" (UID: "fa71cf48-46a0-4082-9897-f03175b0abf7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:21:02 crc kubenswrapper[4910]: I0226 22:21:02.908276 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa71cf48-46a0-4082-9897-f03175b0abf7-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:02 crc kubenswrapper[4910]: I0226 22:21:02.915395 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa71cf48-46a0-4082-9897-f03175b0abf7-kube-api-access-88vj7" (OuterVolumeSpecName: "kube-api-access-88vj7") pod "fa71cf48-46a0-4082-9897-f03175b0abf7" (UID: "fa71cf48-46a0-4082-9897-f03175b0abf7"). InnerVolumeSpecName "kube-api-access-88vj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:21:03 crc kubenswrapper[4910]: I0226 22:21:03.009785 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa71cf48-46a0-4082-9897-f03175b0abf7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa71cf48-46a0-4082-9897-f03175b0abf7" (UID: "fa71cf48-46a0-4082-9897-f03175b0abf7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:21:03 crc kubenswrapper[4910]: I0226 22:21:03.010050 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa71cf48-46a0-4082-9897-f03175b0abf7-catalog-content\") pod \"fa71cf48-46a0-4082-9897-f03175b0abf7\" (UID: \"fa71cf48-46a0-4082-9897-f03175b0abf7\") " Feb 26 22:21:03 crc kubenswrapper[4910]: W0226 22:21:03.010408 4910 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/fa71cf48-46a0-4082-9897-f03175b0abf7/volumes/kubernetes.io~empty-dir/catalog-content Feb 26 22:21:03 crc kubenswrapper[4910]: I0226 22:21:03.010439 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa71cf48-46a0-4082-9897-f03175b0abf7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa71cf48-46a0-4082-9897-f03175b0abf7" (UID: "fa71cf48-46a0-4082-9897-f03175b0abf7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:21:03 crc kubenswrapper[4910]: I0226 22:21:03.011398 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa71cf48-46a0-4082-9897-f03175b0abf7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:03 crc kubenswrapper[4910]: I0226 22:21:03.011424 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88vj7\" (UniqueName: \"kubernetes.io/projected/fa71cf48-46a0-4082-9897-f03175b0abf7-kube-api-access-88vj7\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:03 crc kubenswrapper[4910]: I0226 22:21:03.126567 4910 generic.go:334] "Generic (PLEG): container finished" podID="fa71cf48-46a0-4082-9897-f03175b0abf7" containerID="da08dc4597390a7e42de3ef1c47ad7f6ce3950c372b69686d462639c444e2064" exitCode=0 Feb 26 22:21:03 crc kubenswrapper[4910]: I0226 22:21:03.126643 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qrpd" event={"ID":"fa71cf48-46a0-4082-9897-f03175b0abf7","Type":"ContainerDied","Data":"da08dc4597390a7e42de3ef1c47ad7f6ce3950c372b69686d462639c444e2064"} Feb 26 22:21:03 crc kubenswrapper[4910]: I0226 22:21:03.126650 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5qrpd" Feb 26 22:21:03 crc kubenswrapper[4910]: I0226 22:21:03.126672 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qrpd" event={"ID":"fa71cf48-46a0-4082-9897-f03175b0abf7","Type":"ContainerDied","Data":"db683075f91080bbb26d7b8d4d1d1f9862a1bee3056d4eeed4472643ed1e6348"} Feb 26 22:21:03 crc kubenswrapper[4910]: I0226 22:21:03.126727 4910 scope.go:117] "RemoveContainer" containerID="da08dc4597390a7e42de3ef1c47ad7f6ce3950c372b69686d462639c444e2064" Feb 26 22:21:03 crc kubenswrapper[4910]: I0226 22:21:03.133610 4910 generic.go:334] "Generic (PLEG): container finished" podID="c2d9a088-c1a8-4505-a63c-cf1462097e73" containerID="7b4497e1cae78bcd112c80adeb8009d6795edddb74467529277c4c0be6d3bd78" exitCode=0 Feb 26 22:21:03 crc kubenswrapper[4910]: I0226 22:21:03.133650 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2d9a088-c1a8-4505-a63c-cf1462097e73","Type":"ContainerDied","Data":"7b4497e1cae78bcd112c80adeb8009d6795edddb74467529277c4c0be6d3bd78"} Feb 26 22:21:03 crc kubenswrapper[4910]: I0226 22:21:03.162940 4910 scope.go:117] "RemoveContainer" containerID="e512b947205e443c57e2af9cb93d09eace1745f01093cf8b2a2b1fd1cb21a970" Feb 26 22:21:03 crc kubenswrapper[4910]: I0226 22:21:03.176875 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5qrpd"] Feb 26 22:21:03 crc kubenswrapper[4910]: I0226 22:21:03.188806 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5qrpd"] Feb 26 22:21:03 crc kubenswrapper[4910]: I0226 22:21:03.202503 4910 scope.go:117] "RemoveContainer" containerID="35519bb9b53c63db7f159aaa00bc56f5271b2ff80fd88c9b149f4790daebe1fd" Feb 26 22:21:03 crc kubenswrapper[4910]: I0226 22:21:03.245781 4910 scope.go:117] "RemoveContainer" 
containerID="da08dc4597390a7e42de3ef1c47ad7f6ce3950c372b69686d462639c444e2064" Feb 26 22:21:03 crc kubenswrapper[4910]: E0226 22:21:03.247708 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da08dc4597390a7e42de3ef1c47ad7f6ce3950c372b69686d462639c444e2064\": container with ID starting with da08dc4597390a7e42de3ef1c47ad7f6ce3950c372b69686d462639c444e2064 not found: ID does not exist" containerID="da08dc4597390a7e42de3ef1c47ad7f6ce3950c372b69686d462639c444e2064" Feb 26 22:21:03 crc kubenswrapper[4910]: I0226 22:21:03.247741 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da08dc4597390a7e42de3ef1c47ad7f6ce3950c372b69686d462639c444e2064"} err="failed to get container status \"da08dc4597390a7e42de3ef1c47ad7f6ce3950c372b69686d462639c444e2064\": rpc error: code = NotFound desc = could not find container \"da08dc4597390a7e42de3ef1c47ad7f6ce3950c372b69686d462639c444e2064\": container with ID starting with da08dc4597390a7e42de3ef1c47ad7f6ce3950c372b69686d462639c444e2064 not found: ID does not exist" Feb 26 22:21:03 crc kubenswrapper[4910]: I0226 22:21:03.247760 4910 scope.go:117] "RemoveContainer" containerID="e512b947205e443c57e2af9cb93d09eace1745f01093cf8b2a2b1fd1cb21a970" Feb 26 22:21:03 crc kubenswrapper[4910]: E0226 22:21:03.248551 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e512b947205e443c57e2af9cb93d09eace1745f01093cf8b2a2b1fd1cb21a970\": container with ID starting with e512b947205e443c57e2af9cb93d09eace1745f01093cf8b2a2b1fd1cb21a970 not found: ID does not exist" containerID="e512b947205e443c57e2af9cb93d09eace1745f01093cf8b2a2b1fd1cb21a970" Feb 26 22:21:03 crc kubenswrapper[4910]: I0226 22:21:03.248572 4910 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e512b947205e443c57e2af9cb93d09eace1745f01093cf8b2a2b1fd1cb21a970"} err="failed to get container status \"e512b947205e443c57e2af9cb93d09eace1745f01093cf8b2a2b1fd1cb21a970\": rpc error: code = NotFound desc = could not find container \"e512b947205e443c57e2af9cb93d09eace1745f01093cf8b2a2b1fd1cb21a970\": container with ID starting with e512b947205e443c57e2af9cb93d09eace1745f01093cf8b2a2b1fd1cb21a970 not found: ID does not exist" Feb 26 22:21:03 crc kubenswrapper[4910]: I0226 22:21:03.248584 4910 scope.go:117] "RemoveContainer" containerID="35519bb9b53c63db7f159aaa00bc56f5271b2ff80fd88c9b149f4790daebe1fd" Feb 26 22:21:03 crc kubenswrapper[4910]: E0226 22:21:03.248820 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35519bb9b53c63db7f159aaa00bc56f5271b2ff80fd88c9b149f4790daebe1fd\": container with ID starting with 35519bb9b53c63db7f159aaa00bc56f5271b2ff80fd88c9b149f4790daebe1fd not found: ID does not exist" containerID="35519bb9b53c63db7f159aaa00bc56f5271b2ff80fd88c9b149f4790daebe1fd" Feb 26 22:21:03 crc kubenswrapper[4910]: I0226 22:21:03.248842 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35519bb9b53c63db7f159aaa00bc56f5271b2ff80fd88c9b149f4790daebe1fd"} err="failed to get container status \"35519bb9b53c63db7f159aaa00bc56f5271b2ff80fd88c9b149f4790daebe1fd\": rpc error: code = NotFound desc = could not find container \"35519bb9b53c63db7f159aaa00bc56f5271b2ff80fd88c9b149f4790daebe1fd\": container with ID starting with 35519bb9b53c63db7f159aaa00bc56f5271b2ff80fd88c9b149f4790daebe1fd not found: ID does not exist" Feb 26 22:21:03 crc kubenswrapper[4910]: I0226 22:21:03.913659 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa71cf48-46a0-4082-9897-f03175b0abf7" path="/var/lib/kubelet/pods/fa71cf48-46a0-4082-9897-f03175b0abf7/volumes" Feb 26 22:21:04 crc kubenswrapper[4910]: I0226 
22:21:04.145112 4910 generic.go:334] "Generic (PLEG): container finished" podID="5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97" containerID="cccf7e8913d66118eb539b24e2fb0e764ceb163bc1ca4dd517e793dd125b0b54" exitCode=0 Feb 26 22:21:04 crc kubenswrapper[4910]: I0226 22:21:04.145199 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-htpl5" event={"ID":"5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97","Type":"ContainerDied","Data":"cccf7e8913d66118eb539b24e2fb0e764ceb163bc1ca4dd517e793dd125b0b54"} Feb 26 22:21:05 crc kubenswrapper[4910]: I0226 22:21:05.665115 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-htpl5" Feb 26 22:21:05 crc kubenswrapper[4910]: I0226 22:21:05.764098 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-certs\") pod \"5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97\" (UID: \"5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97\") " Feb 26 22:21:05 crc kubenswrapper[4910]: I0226 22:21:05.764501 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-combined-ca-bundle\") pod \"5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97\" (UID: \"5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97\") " Feb 26 22:21:05 crc kubenswrapper[4910]: I0226 22:21:05.764623 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w9tk\" (UniqueName: \"kubernetes.io/projected/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-kube-api-access-9w9tk\") pod \"5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97\" (UID: \"5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97\") " Feb 26 22:21:05 crc kubenswrapper[4910]: I0226 22:21:05.764747 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-config-data\") pod \"5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97\" (UID: \"5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97\") " Feb 26 22:21:05 crc kubenswrapper[4910]: I0226 22:21:05.765313 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-scripts\") pod \"5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97\" (UID: \"5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97\") " Feb 26 22:21:05 crc kubenswrapper[4910]: I0226 22:21:05.775610 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-certs" (OuterVolumeSpecName: "certs") pod "5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97" (UID: "5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:21:05 crc kubenswrapper[4910]: I0226 22:21:05.776768 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-scripts" (OuterVolumeSpecName: "scripts") pod "5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97" (UID: "5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:21:05 crc kubenswrapper[4910]: I0226 22:21:05.782920 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-kube-api-access-9w9tk" (OuterVolumeSpecName: "kube-api-access-9w9tk") pod "5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97" (UID: "5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97"). InnerVolumeSpecName "kube-api-access-9w9tk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:21:05 crc kubenswrapper[4910]: I0226 22:21:05.793603 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-config-data" (OuterVolumeSpecName: "config-data") pod "5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97" (UID: "5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:21:05 crc kubenswrapper[4910]: I0226 22:21:05.809659 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97" (UID: "5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:21:05 crc kubenswrapper[4910]: I0226 22:21:05.867541 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:05 crc kubenswrapper[4910]: I0226 22:21:05.867573 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:05 crc kubenswrapper[4910]: I0226 22:21:05.867582 4910 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-certs\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:05 crc kubenswrapper[4910]: I0226 22:21:05.867590 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:05 crc 
kubenswrapper[4910]: I0226 22:21:05.867600 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w9tk\" (UniqueName: \"kubernetes.io/projected/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97-kube-api-access-9w9tk\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.170051 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-htpl5" event={"ID":"5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97","Type":"ContainerDied","Data":"c3436e2b0107cae7779256d354967c0083baba36bff259bafe42f03667064297"} Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.170081 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-htpl5" Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.170084 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3436e2b0107cae7779256d354967c0083baba36bff259bafe42f03667064297" Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.235191 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-storageinit-nwmcv"] Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.240508 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="48cec592-3a36-46fc-813d-bf8fa5212e89" containerName="rabbitmq" containerID="cri-o://ff3618ddefebac24c69fef739a56674f82de2bf2a737cac1028f6c34fd9e0ce7" gracePeriod=604795 Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.244965 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-storageinit-nwmcv"] Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.258270 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="f98f3d3a-39ee-4b35-8653-ae334df58fca" containerName="rabbitmq" containerID="cri-o://d3b40ae6ed787f50864aace32c2e168e4836c1bbb61c2f602f153a76858b2ea3" 
gracePeriod=604795 Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.365523 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-storageinit-8sxfz"] Feb 26 22:21:06 crc kubenswrapper[4910]: E0226 22:21:06.365945 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa71cf48-46a0-4082-9897-f03175b0abf7" containerName="registry-server" Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.365978 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa71cf48-46a0-4082-9897-f03175b0abf7" containerName="registry-server" Feb 26 22:21:06 crc kubenswrapper[4910]: E0226 22:21:06.365996 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa71cf48-46a0-4082-9897-f03175b0abf7" containerName="extract-utilities" Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.366005 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa71cf48-46a0-4082-9897-f03175b0abf7" containerName="extract-utilities" Feb 26 22:21:06 crc kubenswrapper[4910]: E0226 22:21:06.366017 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97" containerName="cloudkitty-db-sync" Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.366026 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97" containerName="cloudkitty-db-sync" Feb 26 22:21:06 crc kubenswrapper[4910]: E0226 22:21:06.366062 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa71cf48-46a0-4082-9897-f03175b0abf7" containerName="extract-content" Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.366070 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa71cf48-46a0-4082-9897-f03175b0abf7" containerName="extract-content" Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.366278 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa71cf48-46a0-4082-9897-f03175b0abf7" containerName="registry-server" Feb 26 22:21:06 
crc kubenswrapper[4910]: I0226 22:21:06.366290 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97" containerName="cloudkitty-db-sync" Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.366977 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-8sxfz" Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.368801 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.375577 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-8sxfz"] Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.479409 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hblvj\" (UniqueName: \"kubernetes.io/projected/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-kube-api-access-hblvj\") pod \"cloudkitty-storageinit-8sxfz\" (UID: \"3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72\") " pod="openstack/cloudkitty-storageinit-8sxfz" Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.479494 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-certs\") pod \"cloudkitty-storageinit-8sxfz\" (UID: \"3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72\") " pod="openstack/cloudkitty-storageinit-8sxfz" Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.479532 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-scripts\") pod \"cloudkitty-storageinit-8sxfz\" (UID: \"3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72\") " pod="openstack/cloudkitty-storageinit-8sxfz" Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.479552 4910 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-config-data\") pod \"cloudkitty-storageinit-8sxfz\" (UID: \"3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72\") " pod="openstack/cloudkitty-storageinit-8sxfz" Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.479651 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-combined-ca-bundle\") pod \"cloudkitty-storageinit-8sxfz\" (UID: \"3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72\") " pod="openstack/cloudkitty-storageinit-8sxfz" Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.581829 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hblvj\" (UniqueName: \"kubernetes.io/projected/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-kube-api-access-hblvj\") pod \"cloudkitty-storageinit-8sxfz\" (UID: \"3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72\") " pod="openstack/cloudkitty-storageinit-8sxfz" Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.582108 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-certs\") pod \"cloudkitty-storageinit-8sxfz\" (UID: \"3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72\") " pod="openstack/cloudkitty-storageinit-8sxfz" Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.582204 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-scripts\") pod \"cloudkitty-storageinit-8sxfz\" (UID: \"3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72\") " pod="openstack/cloudkitty-storageinit-8sxfz" Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.582289 4910 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-config-data\") pod \"cloudkitty-storageinit-8sxfz\" (UID: \"3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72\") " pod="openstack/cloudkitty-storageinit-8sxfz" Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.582384 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-combined-ca-bundle\") pod \"cloudkitty-storageinit-8sxfz\" (UID: \"3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72\") " pod="openstack/cloudkitty-storageinit-8sxfz" Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.585974 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-certs\") pod \"cloudkitty-storageinit-8sxfz\" (UID: \"3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72\") " pod="openstack/cloudkitty-storageinit-8sxfz" Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.586337 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-combined-ca-bundle\") pod \"cloudkitty-storageinit-8sxfz\" (UID: \"3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72\") " pod="openstack/cloudkitty-storageinit-8sxfz" Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.588276 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-scripts\") pod \"cloudkitty-storageinit-8sxfz\" (UID: \"3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72\") " pod="openstack/cloudkitty-storageinit-8sxfz" Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.597607 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-config-data\") pod 
\"cloudkitty-storageinit-8sxfz\" (UID: \"3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72\") " pod="openstack/cloudkitty-storageinit-8sxfz" Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.606773 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hblvj\" (UniqueName: \"kubernetes.io/projected/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-kube-api-access-hblvj\") pod \"cloudkitty-storageinit-8sxfz\" (UID: \"3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72\") " pod="openstack/cloudkitty-storageinit-8sxfz" Feb 26 22:21:06 crc kubenswrapper[4910]: I0226 22:21:06.720437 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-8sxfz" Feb 26 22:21:07 crc kubenswrapper[4910]: I0226 22:21:07.185638 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-8sxfz"] Feb 26 22:21:07 crc kubenswrapper[4910]: I0226 22:21:07.911750 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 22:21:07 crc kubenswrapper[4910]: I0226 22:21:07.912255 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8bef8b9-b550-4583-919c-cf05871439ad" path="/var/lib/kubelet/pods/b8bef8b9-b550-4583-919c-cf05871439ad/volumes" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.013128 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkchw\" (UniqueName: \"kubernetes.io/projected/c2d9a088-c1a8-4505-a63c-cf1462097e73-kube-api-access-rkchw\") pod \"c2d9a088-c1a8-4505-a63c-cf1462097e73\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.013246 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2d9a088-c1a8-4505-a63c-cf1462097e73-run-httpd\") pod \"c2d9a088-c1a8-4505-a63c-cf1462097e73\" (UID: 
\"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.013350 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-combined-ca-bundle\") pod \"c2d9a088-c1a8-4505-a63c-cf1462097e73\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.013372 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-sg-core-conf-yaml\") pod \"c2d9a088-c1a8-4505-a63c-cf1462097e73\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.013394 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-ceilometer-tls-certs\") pod \"c2d9a088-c1a8-4505-a63c-cf1462097e73\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.013413 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-scripts\") pod \"c2d9a088-c1a8-4505-a63c-cf1462097e73\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.013513 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2d9a088-c1a8-4505-a63c-cf1462097e73-log-httpd\") pod \"c2d9a088-c1a8-4505-a63c-cf1462097e73\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.013566 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-config-data\") pod \"c2d9a088-c1a8-4505-a63c-cf1462097e73\" (UID: \"c2d9a088-c1a8-4505-a63c-cf1462097e73\") " Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.013790 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2d9a088-c1a8-4505-a63c-cf1462097e73-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c2d9a088-c1a8-4505-a63c-cf1462097e73" (UID: "c2d9a088-c1a8-4505-a63c-cf1462097e73"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.013999 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2d9a088-c1a8-4505-a63c-cf1462097e73-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c2d9a088-c1a8-4505-a63c-cf1462097e73" (UID: "c2d9a088-c1a8-4505-a63c-cf1462097e73"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.014506 4910 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2d9a088-c1a8-4505-a63c-cf1462097e73-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.014526 4910 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2d9a088-c1a8-4505-a63c-cf1462097e73-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.018390 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2d9a088-c1a8-4505-a63c-cf1462097e73-kube-api-access-rkchw" (OuterVolumeSpecName: "kube-api-access-rkchw") pod "c2d9a088-c1a8-4505-a63c-cf1462097e73" (UID: "c2d9a088-c1a8-4505-a63c-cf1462097e73"). InnerVolumeSpecName "kube-api-access-rkchw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.019284 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-scripts" (OuterVolumeSpecName: "scripts") pod "c2d9a088-c1a8-4505-a63c-cf1462097e73" (UID: "c2d9a088-c1a8-4505-a63c-cf1462097e73"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.040592 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c2d9a088-c1a8-4505-a63c-cf1462097e73" (UID: "c2d9a088-c1a8-4505-a63c-cf1462097e73"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.071502 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c2d9a088-c1a8-4505-a63c-cf1462097e73" (UID: "c2d9a088-c1a8-4505-a63c-cf1462097e73"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.095262 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2d9a088-c1a8-4505-a63c-cf1462097e73" (UID: "c2d9a088-c1a8-4505-a63c-cf1462097e73"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.119707 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkchw\" (UniqueName: \"kubernetes.io/projected/c2d9a088-c1a8-4505-a63c-cf1462097e73-kube-api-access-rkchw\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.119746 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.119758 4910 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.119769 4910 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.119782 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.145772 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-config-data" (OuterVolumeSpecName: "config-data") pod "c2d9a088-c1a8-4505-a63c-cf1462097e73" (UID: "c2d9a088-c1a8-4505-a63c-cf1462097e73"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.192386 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-8sxfz" event={"ID":"3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72","Type":"ContainerStarted","Data":"f397678c0b585151649acdd07d75770fdd5b84ce34b621a7827b679a9e2cb54f"} Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.192738 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-8sxfz" event={"ID":"3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72","Type":"ContainerStarted","Data":"2cadb3f86b3fc2b9a5adf6ec4611da5d804fbe13563bdcdf652d7e78aa4f0dbd"} Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.196380 4910 generic.go:334] "Generic (PLEG): container finished" podID="c2d9a088-c1a8-4505-a63c-cf1462097e73" containerID="3f3313f21ba7b1419192c51e1ff097ed22102818a53e72da4e5824e2c12e9ee2" exitCode=0 Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.196419 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2d9a088-c1a8-4505-a63c-cf1462097e73","Type":"ContainerDied","Data":"3f3313f21ba7b1419192c51e1ff097ed22102818a53e72da4e5824e2c12e9ee2"} Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.196443 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2d9a088-c1a8-4505-a63c-cf1462097e73","Type":"ContainerDied","Data":"f3f7ba87bc7c3a1b7b9ba6026e891a8570be4ae892ffcae18fc3aa420d97f9e8"} Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.196459 4910 scope.go:117] "RemoveContainer" containerID="5c6d76b86a8687a69686b0953f686edf87fc87657476233ab812ab41e040e504" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.196575 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.213492 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-storageinit-8sxfz" podStartSLOduration=2.213473352 podStartE2EDuration="2.213473352s" podCreationTimestamp="2026-02-26 22:21:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:21:08.204851847 +0000 UTC m=+1553.284342388" watchObservedRunningTime="2026-02-26 22:21:08.213473352 +0000 UTC m=+1553.292963893" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.221239 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2d9a088-c1a8-4505-a63c-cf1462097e73-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.225132 4910 scope.go:117] "RemoveContainer" containerID="e50e6df3495d2588053f2f9d6a3c099faa493f97700748f4a9873c53c3d98eaa" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.260588 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.279948 4910 scope.go:117] "RemoveContainer" containerID="3f3313f21ba7b1419192c51e1ff097ed22102818a53e72da4e5824e2c12e9ee2" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.283843 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.293477 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:21:08 crc kubenswrapper[4910]: E0226 22:21:08.293908 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2d9a088-c1a8-4505-a63c-cf1462097e73" containerName="proxy-httpd" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.293920 4910 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c2d9a088-c1a8-4505-a63c-cf1462097e73" containerName="proxy-httpd" Feb 26 22:21:08 crc kubenswrapper[4910]: E0226 22:21:08.293935 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2d9a088-c1a8-4505-a63c-cf1462097e73" containerName="ceilometer-notification-agent" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.293941 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2d9a088-c1a8-4505-a63c-cf1462097e73" containerName="ceilometer-notification-agent" Feb 26 22:21:08 crc kubenswrapper[4910]: E0226 22:21:08.293952 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2d9a088-c1a8-4505-a63c-cf1462097e73" containerName="sg-core" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.293958 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2d9a088-c1a8-4505-a63c-cf1462097e73" containerName="sg-core" Feb 26 22:21:08 crc kubenswrapper[4910]: E0226 22:21:08.293973 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2d9a088-c1a8-4505-a63c-cf1462097e73" containerName="ceilometer-central-agent" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.293979 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2d9a088-c1a8-4505-a63c-cf1462097e73" containerName="ceilometer-central-agent" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.294174 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2d9a088-c1a8-4505-a63c-cf1462097e73" containerName="ceilometer-notification-agent" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.294190 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2d9a088-c1a8-4505-a63c-cf1462097e73" containerName="proxy-httpd" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.294202 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2d9a088-c1a8-4505-a63c-cf1462097e73" containerName="sg-core" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.294222 4910 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="c2d9a088-c1a8-4505-a63c-cf1462097e73" containerName="ceilometer-central-agent" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.311726 4910 scope.go:117] "RemoveContainer" containerID="7b4497e1cae78bcd112c80adeb8009d6795edddb74467529277c4c0be6d3bd78" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.313902 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.314013 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.317857 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.318378 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.320175 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.341598 4910 scope.go:117] "RemoveContainer" containerID="5c6d76b86a8687a69686b0953f686edf87fc87657476233ab812ab41e040e504" Feb 26 22:21:08 crc kubenswrapper[4910]: E0226 22:21:08.341958 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c6d76b86a8687a69686b0953f686edf87fc87657476233ab812ab41e040e504\": container with ID starting with 5c6d76b86a8687a69686b0953f686edf87fc87657476233ab812ab41e040e504 not found: ID does not exist" containerID="5c6d76b86a8687a69686b0953f686edf87fc87657476233ab812ab41e040e504" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.342005 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c6d76b86a8687a69686b0953f686edf87fc87657476233ab812ab41e040e504"} 
err="failed to get container status \"5c6d76b86a8687a69686b0953f686edf87fc87657476233ab812ab41e040e504\": rpc error: code = NotFound desc = could not find container \"5c6d76b86a8687a69686b0953f686edf87fc87657476233ab812ab41e040e504\": container with ID starting with 5c6d76b86a8687a69686b0953f686edf87fc87657476233ab812ab41e040e504 not found: ID does not exist" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.342035 4910 scope.go:117] "RemoveContainer" containerID="e50e6df3495d2588053f2f9d6a3c099faa493f97700748f4a9873c53c3d98eaa" Feb 26 22:21:08 crc kubenswrapper[4910]: E0226 22:21:08.342310 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e50e6df3495d2588053f2f9d6a3c099faa493f97700748f4a9873c53c3d98eaa\": container with ID starting with e50e6df3495d2588053f2f9d6a3c099faa493f97700748f4a9873c53c3d98eaa not found: ID does not exist" containerID="e50e6df3495d2588053f2f9d6a3c099faa493f97700748f4a9873c53c3d98eaa" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.343391 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e50e6df3495d2588053f2f9d6a3c099faa493f97700748f4a9873c53c3d98eaa"} err="failed to get container status \"e50e6df3495d2588053f2f9d6a3c099faa493f97700748f4a9873c53c3d98eaa\": rpc error: code = NotFound desc = could not find container \"e50e6df3495d2588053f2f9d6a3c099faa493f97700748f4a9873c53c3d98eaa\": container with ID starting with e50e6df3495d2588053f2f9d6a3c099faa493f97700748f4a9873c53c3d98eaa not found: ID does not exist" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.343411 4910 scope.go:117] "RemoveContainer" containerID="3f3313f21ba7b1419192c51e1ff097ed22102818a53e72da4e5824e2c12e9ee2" Feb 26 22:21:08 crc kubenswrapper[4910]: E0226 22:21:08.343749 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3f3313f21ba7b1419192c51e1ff097ed22102818a53e72da4e5824e2c12e9ee2\": container with ID starting with 3f3313f21ba7b1419192c51e1ff097ed22102818a53e72da4e5824e2c12e9ee2 not found: ID does not exist" containerID="3f3313f21ba7b1419192c51e1ff097ed22102818a53e72da4e5824e2c12e9ee2" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.343800 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f3313f21ba7b1419192c51e1ff097ed22102818a53e72da4e5824e2c12e9ee2"} err="failed to get container status \"3f3313f21ba7b1419192c51e1ff097ed22102818a53e72da4e5824e2c12e9ee2\": rpc error: code = NotFound desc = could not find container \"3f3313f21ba7b1419192c51e1ff097ed22102818a53e72da4e5824e2c12e9ee2\": container with ID starting with 3f3313f21ba7b1419192c51e1ff097ed22102818a53e72da4e5824e2c12e9ee2 not found: ID does not exist" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.343831 4910 scope.go:117] "RemoveContainer" containerID="7b4497e1cae78bcd112c80adeb8009d6795edddb74467529277c4c0be6d3bd78" Feb 26 22:21:08 crc kubenswrapper[4910]: E0226 22:21:08.344128 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b4497e1cae78bcd112c80adeb8009d6795edddb74467529277c4c0be6d3bd78\": container with ID starting with 7b4497e1cae78bcd112c80adeb8009d6795edddb74467529277c4c0be6d3bd78 not found: ID does not exist" containerID="7b4497e1cae78bcd112c80adeb8009d6795edddb74467529277c4c0be6d3bd78" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.344192 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b4497e1cae78bcd112c80adeb8009d6795edddb74467529277c4c0be6d3bd78"} err="failed to get container status \"7b4497e1cae78bcd112c80adeb8009d6795edddb74467529277c4c0be6d3bd78\": rpc error: code = NotFound desc = could not find container \"7b4497e1cae78bcd112c80adeb8009d6795edddb74467529277c4c0be6d3bd78\": container with ID 
starting with 7b4497e1cae78bcd112c80adeb8009d6795edddb74467529277c4c0be6d3bd78 not found: ID does not exist" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.432154 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62dbf00f-5cf4-4400-8eb3-f861fadda173-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62dbf00f-5cf4-4400-8eb3-f861fadda173\") " pod="openstack/ceilometer-0" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.432296 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfllp\" (UniqueName: \"kubernetes.io/projected/62dbf00f-5cf4-4400-8eb3-f861fadda173-kube-api-access-nfllp\") pod \"ceilometer-0\" (UID: \"62dbf00f-5cf4-4400-8eb3-f861fadda173\") " pod="openstack/ceilometer-0" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.432322 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62dbf00f-5cf4-4400-8eb3-f861fadda173-run-httpd\") pod \"ceilometer-0\" (UID: \"62dbf00f-5cf4-4400-8eb3-f861fadda173\") " pod="openstack/ceilometer-0" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.432355 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62dbf00f-5cf4-4400-8eb3-f861fadda173-scripts\") pod \"ceilometer-0\" (UID: \"62dbf00f-5cf4-4400-8eb3-f861fadda173\") " pod="openstack/ceilometer-0" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.432379 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/62dbf00f-5cf4-4400-8eb3-f861fadda173-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"62dbf00f-5cf4-4400-8eb3-f861fadda173\") " pod="openstack/ceilometer-0" Feb 26 
22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.432410 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62dbf00f-5cf4-4400-8eb3-f861fadda173-config-data\") pod \"ceilometer-0\" (UID: \"62dbf00f-5cf4-4400-8eb3-f861fadda173\") " pod="openstack/ceilometer-0" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.432552 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dbf00f-5cf4-4400-8eb3-f861fadda173-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62dbf00f-5cf4-4400-8eb3-f861fadda173\") " pod="openstack/ceilometer-0" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.432903 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62dbf00f-5cf4-4400-8eb3-f861fadda173-log-httpd\") pod \"ceilometer-0\" (UID: \"62dbf00f-5cf4-4400-8eb3-f861fadda173\") " pod="openstack/ceilometer-0" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.481851 4910 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="48cec592-3a36-46fc-813d-bf8fa5212e89" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.110:5671: connect: connection refused" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.535181 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62dbf00f-5cf4-4400-8eb3-f861fadda173-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62dbf00f-5cf4-4400-8eb3-f861fadda173\") " pod="openstack/ceilometer-0" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.535234 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfllp\" (UniqueName: 
\"kubernetes.io/projected/62dbf00f-5cf4-4400-8eb3-f861fadda173-kube-api-access-nfllp\") pod \"ceilometer-0\" (UID: \"62dbf00f-5cf4-4400-8eb3-f861fadda173\") " pod="openstack/ceilometer-0" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.535258 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62dbf00f-5cf4-4400-8eb3-f861fadda173-run-httpd\") pod \"ceilometer-0\" (UID: \"62dbf00f-5cf4-4400-8eb3-f861fadda173\") " pod="openstack/ceilometer-0" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.535276 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62dbf00f-5cf4-4400-8eb3-f861fadda173-scripts\") pod \"ceilometer-0\" (UID: \"62dbf00f-5cf4-4400-8eb3-f861fadda173\") " pod="openstack/ceilometer-0" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.535298 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/62dbf00f-5cf4-4400-8eb3-f861fadda173-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"62dbf00f-5cf4-4400-8eb3-f861fadda173\") " pod="openstack/ceilometer-0" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.535324 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62dbf00f-5cf4-4400-8eb3-f861fadda173-config-data\") pod \"ceilometer-0\" (UID: \"62dbf00f-5cf4-4400-8eb3-f861fadda173\") " pod="openstack/ceilometer-0" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.535364 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dbf00f-5cf4-4400-8eb3-f861fadda173-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62dbf00f-5cf4-4400-8eb3-f861fadda173\") " pod="openstack/ceilometer-0" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 
22:21:08.535476 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62dbf00f-5cf4-4400-8eb3-f861fadda173-log-httpd\") pod \"ceilometer-0\" (UID: \"62dbf00f-5cf4-4400-8eb3-f861fadda173\") " pod="openstack/ceilometer-0" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.535906 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62dbf00f-5cf4-4400-8eb3-f861fadda173-run-httpd\") pod \"ceilometer-0\" (UID: \"62dbf00f-5cf4-4400-8eb3-f861fadda173\") " pod="openstack/ceilometer-0" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.535942 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62dbf00f-5cf4-4400-8eb3-f861fadda173-log-httpd\") pod \"ceilometer-0\" (UID: \"62dbf00f-5cf4-4400-8eb3-f861fadda173\") " pod="openstack/ceilometer-0" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.540422 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dbf00f-5cf4-4400-8eb3-f861fadda173-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62dbf00f-5cf4-4400-8eb3-f861fadda173\") " pod="openstack/ceilometer-0" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.541450 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62dbf00f-5cf4-4400-8eb3-f861fadda173-config-data\") pod \"ceilometer-0\" (UID: \"62dbf00f-5cf4-4400-8eb3-f861fadda173\") " pod="openstack/ceilometer-0" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.541948 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/62dbf00f-5cf4-4400-8eb3-f861fadda173-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"62dbf00f-5cf4-4400-8eb3-f861fadda173\") 
" pod="openstack/ceilometer-0" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.542961 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62dbf00f-5cf4-4400-8eb3-f861fadda173-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62dbf00f-5cf4-4400-8eb3-f861fadda173\") " pod="openstack/ceilometer-0" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.544969 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62dbf00f-5cf4-4400-8eb3-f861fadda173-scripts\") pod \"ceilometer-0\" (UID: \"62dbf00f-5cf4-4400-8eb3-f861fadda173\") " pod="openstack/ceilometer-0" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.552731 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfllp\" (UniqueName: \"kubernetes.io/projected/62dbf00f-5cf4-4400-8eb3-f861fadda173-kube-api-access-nfllp\") pod \"ceilometer-0\" (UID: \"62dbf00f-5cf4-4400-8eb3-f861fadda173\") " pod="openstack/ceilometer-0" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.634304 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 22:21:08 crc kubenswrapper[4910]: I0226 22:21:08.772639 4910 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f98f3d3a-39ee-4b35-8653-ae334df58fca" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.111:5671: connect: connection refused" Feb 26 22:21:09 crc kubenswrapper[4910]: I0226 22:21:09.107427 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 22:21:09 crc kubenswrapper[4910]: I0226 22:21:09.211665 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62dbf00f-5cf4-4400-8eb3-f861fadda173","Type":"ContainerStarted","Data":"e92c639f04f4bd41733fd8297b0518d73cb8775650611bfd02c191a303e0a1ce"} Feb 26 22:21:09 crc kubenswrapper[4910]: I0226 22:21:09.216845 4910 generic.go:334] "Generic (PLEG): container finished" podID="3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72" containerID="f397678c0b585151649acdd07d75770fdd5b84ce34b621a7827b679a9e2cb54f" exitCode=0 Feb 26 22:21:09 crc kubenswrapper[4910]: I0226 22:21:09.216915 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-8sxfz" event={"ID":"3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72","Type":"ContainerDied","Data":"f397678c0b585151649acdd07d75770fdd5b84ce34b621a7827b679a9e2cb54f"} Feb 26 22:21:09 crc kubenswrapper[4910]: I0226 22:21:09.930752 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2d9a088-c1a8-4505-a63c-cf1462097e73" path="/var/lib/kubelet/pods/c2d9a088-c1a8-4505-a63c-cf1462097e73/volumes" Feb 26 22:21:12 crc kubenswrapper[4910]: I0226 22:21:12.785961 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-8sxfz" Feb 26 22:21:12 crc kubenswrapper[4910]: I0226 22:21:12.850081 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-combined-ca-bundle\") pod \"3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72\" (UID: \"3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72\") " Feb 26 22:21:12 crc kubenswrapper[4910]: I0226 22:21:12.850434 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-scripts\") pod \"3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72\" (UID: \"3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72\") " Feb 26 22:21:12 crc kubenswrapper[4910]: I0226 22:21:12.860248 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-scripts" (OuterVolumeSpecName: "scripts") pod "3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72" (UID: "3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:21:12 crc kubenswrapper[4910]: I0226 22:21:12.922992 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72" (UID: "3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:21:12 crc kubenswrapper[4910]: I0226 22:21:12.973512 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hblvj\" (UniqueName: \"kubernetes.io/projected/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-kube-api-access-hblvj\") pod \"3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72\" (UID: \"3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72\") " Feb 26 22:21:12 crc kubenswrapper[4910]: I0226 22:21:12.973552 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-certs\") pod \"3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72\" (UID: \"3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72\") " Feb 26 22:21:12 crc kubenswrapper[4910]: I0226 22:21:12.973605 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-config-data\") pod \"3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72\" (UID: \"3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72\") " Feb 26 22:21:12 crc kubenswrapper[4910]: I0226 22:21:12.974428 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:12 crc kubenswrapper[4910]: I0226 22:21:12.974440 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:12 crc kubenswrapper[4910]: I0226 22:21:12.995915 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-kube-api-access-hblvj" (OuterVolumeSpecName: "kube-api-access-hblvj") pod "3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72" (UID: "3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72"). 
InnerVolumeSpecName "kube-api-access-hblvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:21:12 crc kubenswrapper[4910]: I0226 22:21:12.997448 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-certs" (OuterVolumeSpecName: "certs") pod "3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72" (UID: "3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.008616 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-config-data" (OuterVolumeSpecName: "config-data") pod "3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72" (UID: "3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.026682 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.076626 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hblvj\" (UniqueName: \"kubernetes.io/projected/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-kube-api-access-hblvj\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.076659 4910 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-certs\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.076671 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.097282 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.178612 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4973784-2c8e-4725-bfc4-64e42ff04268\") pod \"f98f3d3a-39ee-4b35-8653-ae334df58fca\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.178676 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f98f3d3a-39ee-4b35-8653-ae334df58fca-rabbitmq-tls\") pod \"f98f3d3a-39ee-4b35-8653-ae334df58fca\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.178714 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/f98f3d3a-39ee-4b35-8653-ae334df58fca-rabbitmq-erlang-cookie\") pod \"f98f3d3a-39ee-4b35-8653-ae334df58fca\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.178743 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f98f3d3a-39ee-4b35-8653-ae334df58fca-erlang-cookie-secret\") pod \"f98f3d3a-39ee-4b35-8653-ae334df58fca\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.178761 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f98f3d3a-39ee-4b35-8653-ae334df58fca-pod-info\") pod \"f98f3d3a-39ee-4b35-8653-ae334df58fca\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.178869 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x94w7\" (UniqueName: \"kubernetes.io/projected/f98f3d3a-39ee-4b35-8653-ae334df58fca-kube-api-access-x94w7\") pod \"f98f3d3a-39ee-4b35-8653-ae334df58fca\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.178888 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f98f3d3a-39ee-4b35-8653-ae334df58fca-rabbitmq-confd\") pod \"f98f3d3a-39ee-4b35-8653-ae334df58fca\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.178930 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f98f3d3a-39ee-4b35-8653-ae334df58fca-rabbitmq-plugins\") pod \"f98f3d3a-39ee-4b35-8653-ae334df58fca\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " Feb 26 22:21:13 crc 
kubenswrapper[4910]: I0226 22:21:13.178948 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f98f3d3a-39ee-4b35-8653-ae334df58fca-server-conf\") pod \"f98f3d3a-39ee-4b35-8653-ae334df58fca\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.179003 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f98f3d3a-39ee-4b35-8653-ae334df58fca-plugins-conf\") pod \"f98f3d3a-39ee-4b35-8653-ae334df58fca\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.179059 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f98f3d3a-39ee-4b35-8653-ae334df58fca-config-data\") pod \"f98f3d3a-39ee-4b35-8653-ae334df58fca\" (UID: \"f98f3d3a-39ee-4b35-8653-ae334df58fca\") " Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.186964 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f98f3d3a-39ee-4b35-8653-ae334df58fca-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f98f3d3a-39ee-4b35-8653-ae334df58fca" (UID: "f98f3d3a-39ee-4b35-8653-ae334df58fca"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.188283 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f98f3d3a-39ee-4b35-8653-ae334df58fca-pod-info" (OuterVolumeSpecName: "pod-info") pod "f98f3d3a-39ee-4b35-8653-ae334df58fca" (UID: "f98f3d3a-39ee-4b35-8653-ae334df58fca"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.188698 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f98f3d3a-39ee-4b35-8653-ae334df58fca-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f98f3d3a-39ee-4b35-8653-ae334df58fca" (UID: "f98f3d3a-39ee-4b35-8653-ae334df58fca"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.206985 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f98f3d3a-39ee-4b35-8653-ae334df58fca-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f98f3d3a-39ee-4b35-8653-ae334df58fca" (UID: "f98f3d3a-39ee-4b35-8653-ae334df58fca"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.214798 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f98f3d3a-39ee-4b35-8653-ae334df58fca-kube-api-access-x94w7" (OuterVolumeSpecName: "kube-api-access-x94w7") pod "f98f3d3a-39ee-4b35-8653-ae334df58fca" (UID: "f98f3d3a-39ee-4b35-8653-ae334df58fca"). InnerVolumeSpecName "kube-api-access-x94w7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.220358 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f98f3d3a-39ee-4b35-8653-ae334df58fca-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f98f3d3a-39ee-4b35-8653-ae334df58fca" (UID: "f98f3d3a-39ee-4b35-8653-ae334df58fca"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.247628 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f98f3d3a-39ee-4b35-8653-ae334df58fca-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f98f3d3a-39ee-4b35-8653-ae334df58fca" (UID: "f98f3d3a-39ee-4b35-8653-ae334df58fca"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.274154 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62dbf00f-5cf4-4400-8eb3-f861fadda173","Type":"ContainerStarted","Data":"73ac51ac2d06d1de190b6910ecfbebf7183dec9f9f0e05e34fdc29416380e79a"} Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.274902 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4973784-2c8e-4725-bfc4-64e42ff04268" (OuterVolumeSpecName: "persistence") pod "f98f3d3a-39ee-4b35-8653-ae334df58fca" (UID: "f98f3d3a-39ee-4b35-8653-ae334df58fca"). InnerVolumeSpecName "pvc-b4973784-2c8e-4725-bfc4-64e42ff04268". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.280231 4910 generic.go:334] "Generic (PLEG): container finished" podID="f98f3d3a-39ee-4b35-8653-ae334df58fca" containerID="d3b40ae6ed787f50864aace32c2e168e4836c1bbb61c2f602f153a76858b2ea3" exitCode=0 Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.280298 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f98f3d3a-39ee-4b35-8653-ae334df58fca","Type":"ContainerDied","Data":"d3b40ae6ed787f50864aace32c2e168e4836c1bbb61c2f602f153a76858b2ea3"} Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.280325 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f98f3d3a-39ee-4b35-8653-ae334df58fca","Type":"ContainerDied","Data":"c3ee1d0ebff3456c208f3b1653cd7a56e0bc1523fb56dbe5ab2b978827841483"} Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.280342 4910 scope.go:117] "RemoveContainer" containerID="d3b40ae6ed787f50864aace32c2e168e4836c1bbb61c2f602f153a76858b2ea3" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.280480 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.282606 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/48cec592-3a36-46fc-813d-bf8fa5212e89-config-data\") pod \"48cec592-3a36-46fc-813d-bf8fa5212e89\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.282648 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/48cec592-3a36-46fc-813d-bf8fa5212e89-rabbitmq-confd\") pod \"48cec592-3a36-46fc-813d-bf8fa5212e89\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.282678 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/48cec592-3a36-46fc-813d-bf8fa5212e89-rabbitmq-plugins\") pod \"48cec592-3a36-46fc-813d-bf8fa5212e89\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.282704 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kst6m\" (UniqueName: \"kubernetes.io/projected/48cec592-3a36-46fc-813d-bf8fa5212e89-kube-api-access-kst6m\") pod \"48cec592-3a36-46fc-813d-bf8fa5212e89\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.283794 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48cec592-3a36-46fc-813d-bf8fa5212e89-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "48cec592-3a36-46fc-813d-bf8fa5212e89" (UID: "48cec592-3a36-46fc-813d-bf8fa5212e89"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.287406 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38ab41ea-5004-4a99-bb29-739bde4c1520\") pod \"48cec592-3a36-46fc-813d-bf8fa5212e89\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.287517 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/48cec592-3a36-46fc-813d-bf8fa5212e89-plugins-conf\") pod \"48cec592-3a36-46fc-813d-bf8fa5212e89\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.287587 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/48cec592-3a36-46fc-813d-bf8fa5212e89-pod-info\") pod \"48cec592-3a36-46fc-813d-bf8fa5212e89\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.287621 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/48cec592-3a36-46fc-813d-bf8fa5212e89-rabbitmq-erlang-cookie\") pod \"48cec592-3a36-46fc-813d-bf8fa5212e89\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.287712 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/48cec592-3a36-46fc-813d-bf8fa5212e89-rabbitmq-tls\") pod \"48cec592-3a36-46fc-813d-bf8fa5212e89\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.287757 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" 
(UniqueName: \"kubernetes.io/configmap/48cec592-3a36-46fc-813d-bf8fa5212e89-server-conf\") pod \"48cec592-3a36-46fc-813d-bf8fa5212e89\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.287788 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/48cec592-3a36-46fc-813d-bf8fa5212e89-erlang-cookie-secret\") pod \"48cec592-3a36-46fc-813d-bf8fa5212e89\" (UID: \"48cec592-3a36-46fc-813d-bf8fa5212e89\") " Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.288425 4910 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f98f3d3a-39ee-4b35-8653-ae334df58fca-pod-info\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.288443 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x94w7\" (UniqueName: \"kubernetes.io/projected/f98f3d3a-39ee-4b35-8653-ae334df58fca-kube-api-access-x94w7\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.288456 4910 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f98f3d3a-39ee-4b35-8653-ae334df58fca-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.288466 4910 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f98f3d3a-39ee-4b35-8653-ae334df58fca-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.288475 4910 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/48cec592-3a36-46fc-813d-bf8fa5212e89-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.288503 4910 reconciler_common.go:286] 
"operationExecutor.UnmountDevice started for volume \"pvc-b4973784-2c8e-4725-bfc4-64e42ff04268\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4973784-2c8e-4725-bfc4-64e42ff04268\") on node \"crc\" " Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.288515 4910 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f98f3d3a-39ee-4b35-8653-ae334df58fca-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.288530 4910 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f98f3d3a-39ee-4b35-8653-ae334df58fca-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.288546 4910 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f98f3d3a-39ee-4b35-8653-ae334df58fca-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.289561 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48cec592-3a36-46fc-813d-bf8fa5212e89-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "48cec592-3a36-46fc-813d-bf8fa5212e89" (UID: "48cec592-3a36-46fc-813d-bf8fa5212e89"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.292572 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48cec592-3a36-46fc-813d-bf8fa5212e89-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "48cec592-3a36-46fc-813d-bf8fa5212e89" (UID: "48cec592-3a36-46fc-813d-bf8fa5212e89"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.310663 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48cec592-3a36-46fc-813d-bf8fa5212e89-kube-api-access-kst6m" (OuterVolumeSpecName: "kube-api-access-kst6m") pod "48cec592-3a36-46fc-813d-bf8fa5212e89" (UID: "48cec592-3a36-46fc-813d-bf8fa5212e89"). InnerVolumeSpecName "kube-api-access-kst6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.312896 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48cec592-3a36-46fc-813d-bf8fa5212e89-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "48cec592-3a36-46fc-813d-bf8fa5212e89" (UID: "48cec592-3a36-46fc-813d-bf8fa5212e89"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.312927 4910 generic.go:334] "Generic (PLEG): container finished" podID="48cec592-3a36-46fc-813d-bf8fa5212e89" containerID="ff3618ddefebac24c69fef739a56674f82de2bf2a737cac1028f6c34fd9e0ce7" exitCode=0 Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.313001 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.313022 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"48cec592-3a36-46fc-813d-bf8fa5212e89","Type":"ContainerDied","Data":"ff3618ddefebac24c69fef739a56674f82de2bf2a737cac1028f6c34fd9e0ce7"} Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.313048 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"48cec592-3a36-46fc-813d-bf8fa5212e89","Type":"ContainerDied","Data":"e6a8b6c68847bb048f1665671ddd86908fcebba5217932bfbf05505ea91b7553"} Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.320125 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-8sxfz" event={"ID":"3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72","Type":"ContainerDied","Data":"2cadb3f86b3fc2b9a5adf6ec4611da5d804fbe13563bdcdf652d7e78aa4f0dbd"} Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.320191 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cadb3f86b3fc2b9a5adf6ec4611da5d804fbe13563bdcdf652d7e78aa4f0dbd" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.320237 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-8sxfz" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.333750 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/48cec592-3a36-46fc-813d-bf8fa5212e89-pod-info" (OuterVolumeSpecName: "pod-info") pod "48cec592-3a36-46fc-813d-bf8fa5212e89" (UID: "48cec592-3a36-46fc-813d-bf8fa5212e89"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.335719 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48cec592-3a36-46fc-813d-bf8fa5212e89-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "48cec592-3a36-46fc-813d-bf8fa5212e89" (UID: "48cec592-3a36-46fc-813d-bf8fa5212e89"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.377827 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f98f3d3a-39ee-4b35-8653-ae334df58fca-config-data" (OuterVolumeSpecName: "config-data") pod "f98f3d3a-39ee-4b35-8653-ae334df58fca" (UID: "f98f3d3a-39ee-4b35-8653-ae334df58fca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.398920 4910 scope.go:117] "RemoveContainer" containerID="b96711ba619b912d8f11b4d929957237c2a28332da6273f5f952a97af08e2e3f" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.405942 4910 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/48cec592-3a36-46fc-813d-bf8fa5212e89-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.405975 4910 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/48cec592-3a36-46fc-813d-bf8fa5212e89-pod-info\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.405985 4910 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/48cec592-3a36-46fc-813d-bf8fa5212e89-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.406000 4910 
reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/48cec592-3a36-46fc-813d-bf8fa5212e89-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.406008 4910 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/48cec592-3a36-46fc-813d-bf8fa5212e89-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.406016 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f98f3d3a-39ee-4b35-8653-ae334df58fca-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.406024 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kst6m\" (UniqueName: \"kubernetes.io/projected/48cec592-3a36-46fc-813d-bf8fa5212e89-kube-api-access-kst6m\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.420005 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38ab41ea-5004-4a99-bb29-739bde4c1520" (OuterVolumeSpecName: "persistence") pod "48cec592-3a36-46fc-813d-bf8fa5212e89" (UID: "48cec592-3a36-46fc-813d-bf8fa5212e89"). InnerVolumeSpecName "pvc-38ab41ea-5004-4a99-bb29-739bde4c1520". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.421029 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f98f3d3a-39ee-4b35-8653-ae334df58fca-server-conf" (OuterVolumeSpecName: "server-conf") pod "f98f3d3a-39ee-4b35-8653-ae334df58fca" (UID: "f98f3d3a-39ee-4b35-8653-ae334df58fca"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.425595 4910 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.426237 4910 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b4973784-2c8e-4725-bfc4-64e42ff04268" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4973784-2c8e-4725-bfc4-64e42ff04268") on node "crc" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.450120 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48cec592-3a36-46fc-813d-bf8fa5212e89-config-data" (OuterVolumeSpecName: "config-data") pod "48cec592-3a36-46fc-813d-bf8fa5212e89" (UID: "48cec592-3a36-46fc-813d-bf8fa5212e89"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.510260 4910 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f98f3d3a-39ee-4b35-8653-ae334df58fca-server-conf\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.510696 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/48cec592-3a36-46fc-813d-bf8fa5212e89-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.510732 4910 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-38ab41ea-5004-4a99-bb29-739bde4c1520\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38ab41ea-5004-4a99-bb29-739bde4c1520\") on node \"crc\" " Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.510745 4910 reconciler_common.go:293] "Volume detached for volume \"pvc-b4973784-2c8e-4725-bfc4-64e42ff04268\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4973784-2c8e-4725-bfc4-64e42ff04268\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.524880 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48cec592-3a36-46fc-813d-bf8fa5212e89-server-conf" (OuterVolumeSpecName: "server-conf") pod "48cec592-3a36-46fc-813d-bf8fa5212e89" (UID: "48cec592-3a36-46fc-813d-bf8fa5212e89"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.528404 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f98f3d3a-39ee-4b35-8653-ae334df58fca-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f98f3d3a-39ee-4b35-8653-ae334df58fca" (UID: "f98f3d3a-39ee-4b35-8653-ae334df58fca"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.548617 4910 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.548784 4910 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-38ab41ea-5004-4a99-bb29-739bde4c1520" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38ab41ea-5004-4a99-bb29-739bde4c1520") on node "crc" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.560418 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48cec592-3a36-46fc-813d-bf8fa5212e89-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "48cec592-3a36-46fc-813d-bf8fa5212e89" (UID: "48cec592-3a36-46fc-813d-bf8fa5212e89"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.611523 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-29l2r"] Feb 26 22:21:13 crc kubenswrapper[4910]: E0226 22:21:13.612009 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f98f3d3a-39ee-4b35-8653-ae334df58fca" containerName="rabbitmq" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.612022 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="f98f3d3a-39ee-4b35-8653-ae334df58fca" containerName="rabbitmq" Feb 26 22:21:13 crc kubenswrapper[4910]: E0226 22:21:13.612046 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48cec592-3a36-46fc-813d-bf8fa5212e89" containerName="setup-container" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.612053 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="48cec592-3a36-46fc-813d-bf8fa5212e89" containerName="setup-container" Feb 26 22:21:13 crc kubenswrapper[4910]: E0226 22:21:13.612065 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48cec592-3a36-46fc-813d-bf8fa5212e89" containerName="rabbitmq" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.612072 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="48cec592-3a36-46fc-813d-bf8fa5212e89" containerName="rabbitmq" Feb 26 22:21:13 crc kubenswrapper[4910]: E0226 22:21:13.612081 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72" containerName="cloudkitty-storageinit" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.612087 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72" containerName="cloudkitty-storageinit" Feb 26 22:21:13 crc kubenswrapper[4910]: E0226 22:21:13.612100 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f98f3d3a-39ee-4b35-8653-ae334df58fca" containerName="setup-container" Feb 
26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.612106 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="f98f3d3a-39ee-4b35-8653-ae334df58fca" containerName="setup-container" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.612295 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72" containerName="cloudkitty-storageinit" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.612304 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="48cec592-3a36-46fc-813d-bf8fa5212e89" containerName="rabbitmq" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.612319 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="f98f3d3a-39ee-4b35-8653-ae334df58fca" containerName="rabbitmq" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.613440 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.614149 4910 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/48cec592-3a36-46fc-813d-bf8fa5212e89-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.614192 4910 reconciler_common.go:293] "Volume detached for volume \"pvc-38ab41ea-5004-4a99-bb29-739bde4c1520\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38ab41ea-5004-4a99-bb29-739bde4c1520\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.614204 4910 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f98f3d3a-39ee-4b35-8653-ae334df58fca-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.614213 4910 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/48cec592-3a36-46fc-813d-bf8fa5212e89-server-conf\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.617306 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.626274 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-29l2r"] Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.669029 4910 scope.go:117] "RemoveContainer" containerID="d3b40ae6ed787f50864aace32c2e168e4836c1bbb61c2f602f153a76858b2ea3" Feb 26 22:21:13 crc kubenswrapper[4910]: E0226 22:21:13.669491 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3b40ae6ed787f50864aace32c2e168e4836c1bbb61c2f602f153a76858b2ea3\": container with ID starting with d3b40ae6ed787f50864aace32c2e168e4836c1bbb61c2f602f153a76858b2ea3 not found: ID does not exist" containerID="d3b40ae6ed787f50864aace32c2e168e4836c1bbb61c2f602f153a76858b2ea3" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.669534 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3b40ae6ed787f50864aace32c2e168e4836c1bbb61c2f602f153a76858b2ea3"} err="failed to get container status \"d3b40ae6ed787f50864aace32c2e168e4836c1bbb61c2f602f153a76858b2ea3\": rpc error: code = NotFound desc = could not find container \"d3b40ae6ed787f50864aace32c2e168e4836c1bbb61c2f602f153a76858b2ea3\": container with ID starting with d3b40ae6ed787f50864aace32c2e168e4836c1bbb61c2f602f153a76858b2ea3 not found: ID does not exist" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.669561 4910 scope.go:117] "RemoveContainer" containerID="b96711ba619b912d8f11b4d929957237c2a28332da6273f5f952a97af08e2e3f" Feb 26 22:21:13 crc kubenswrapper[4910]: E0226 22:21:13.669871 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"b96711ba619b912d8f11b4d929957237c2a28332da6273f5f952a97af08e2e3f\": container with ID starting with b96711ba619b912d8f11b4d929957237c2a28332da6273f5f952a97af08e2e3f not found: ID does not exist" containerID="b96711ba619b912d8f11b4d929957237c2a28332da6273f5f952a97af08e2e3f" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.669899 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b96711ba619b912d8f11b4d929957237c2a28332da6273f5f952a97af08e2e3f"} err="failed to get container status \"b96711ba619b912d8f11b4d929957237c2a28332da6273f5f952a97af08e2e3f\": rpc error: code = NotFound desc = could not find container \"b96711ba619b912d8f11b4d929957237c2a28332da6273f5f952a97af08e2e3f\": container with ID starting with b96711ba619b912d8f11b4d929957237c2a28332da6273f5f952a97af08e2e3f not found: ID does not exist" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.669921 4910 scope.go:117] "RemoveContainer" containerID="ff3618ddefebac24c69fef739a56674f82de2bf2a737cac1028f6c34fd9e0ce7" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.715866 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-dns-svc\") pod \"dnsmasq-dns-dc7c944bf-29l2r\" (UID: \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\") " pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.716130 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-ovsdbserver-nb\") pod \"dnsmasq-dns-dc7c944bf-29l2r\" (UID: \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\") " pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.716205 4910 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-openstack-edpm-ipam\") pod \"dnsmasq-dns-dc7c944bf-29l2r\" (UID: \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\") " pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.716247 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-ovsdbserver-sb\") pod \"dnsmasq-dns-dc7c944bf-29l2r\" (UID: \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\") " pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.716280 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm5dt\" (UniqueName: \"kubernetes.io/projected/86c9ac9a-e5f3-44c9-b009-c174b942eb90-kube-api-access-wm5dt\") pod \"dnsmasq-dns-dc7c944bf-29l2r\" (UID: \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\") " pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.716298 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-config\") pod \"dnsmasq-dns-dc7c944bf-29l2r\" (UID: \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\") " pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.716332 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-dns-swift-storage-0\") pod \"dnsmasq-dns-dc7c944bf-29l2r\" (UID: \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\") " pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" Feb 26 22:21:13 crc kubenswrapper[4910]: 
I0226 22:21:13.725244 4910 scope.go:117] "RemoveContainer" containerID="d0d0a196bfe2898994596352c3c4f18f6c775b6517621b2152a31f6037ee7d70" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.736651 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.747654 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.770406 4910 scope.go:117] "RemoveContainer" containerID="ff3618ddefebac24c69fef739a56674f82de2bf2a737cac1028f6c34fd9e0ce7" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.771136 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.773003 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.779739 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.779942 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.780072 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.780226 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.780358 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.780498 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 
22:21:13.780618 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 22:21:13 crc kubenswrapper[4910]: E0226 22:21:13.780724 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff3618ddefebac24c69fef739a56674f82de2bf2a737cac1028f6c34fd9e0ce7\": container with ID starting with ff3618ddefebac24c69fef739a56674f82de2bf2a737cac1028f6c34fd9e0ce7 not found: ID does not exist" containerID="ff3618ddefebac24c69fef739a56674f82de2bf2a737cac1028f6c34fd9e0ce7" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.780770 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff3618ddefebac24c69fef739a56674f82de2bf2a737cac1028f6c34fd9e0ce7"} err="failed to get container status \"ff3618ddefebac24c69fef739a56674f82de2bf2a737cac1028f6c34fd9e0ce7\": rpc error: code = NotFound desc = could not find container \"ff3618ddefebac24c69fef739a56674f82de2bf2a737cac1028f6c34fd9e0ce7\": container with ID starting with ff3618ddefebac24c69fef739a56674f82de2bf2a737cac1028f6c34fd9e0ce7 not found: ID does not exist" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.780802 4910 scope.go:117] "RemoveContainer" containerID="d0d0a196bfe2898994596352c3c4f18f6c775b6517621b2152a31f6037ee7d70" Feb 26 22:21:13 crc kubenswrapper[4910]: E0226 22:21:13.790590 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0d0a196bfe2898994596352c3c4f18f6c775b6517621b2152a31f6037ee7d70\": container with ID starting with d0d0a196bfe2898994596352c3c4f18f6c775b6517621b2152a31f6037ee7d70 not found: ID does not exist" containerID="d0d0a196bfe2898994596352c3c4f18f6c775b6517621b2152a31f6037ee7d70" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.790641 4910 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d0d0a196bfe2898994596352c3c4f18f6c775b6517621b2152a31f6037ee7d70"} err="failed to get container status \"d0d0a196bfe2898994596352c3c4f18f6c775b6517621b2152a31f6037ee7d70\": rpc error: code = NotFound desc = could not find container \"d0d0a196bfe2898994596352c3c4f18f6c775b6517621b2152a31f6037ee7d70\": container with ID starting with d0d0a196bfe2898994596352c3c4f18f6c775b6517621b2152a31f6037ee7d70 not found: ID does not exist" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.791176 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ddj2c" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.822711 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-config\") pod \"dnsmasq-dns-dc7c944bf-29l2r\" (UID: \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\") " pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.822756 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm5dt\" (UniqueName: \"kubernetes.io/projected/86c9ac9a-e5f3-44c9-b009-c174b942eb90-kube-api-access-wm5dt\") pod \"dnsmasq-dns-dc7c944bf-29l2r\" (UID: \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\") " pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.822818 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-dns-swift-storage-0\") pod \"dnsmasq-dns-dc7c944bf-29l2r\" (UID: \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\") " pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.822918 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-dns-svc\") pod \"dnsmasq-dns-dc7c944bf-29l2r\" (UID: \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\") " pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.822958 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-ovsdbserver-nb\") pod \"dnsmasq-dns-dc7c944bf-29l2r\" (UID: \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\") " pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.823029 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-openstack-edpm-ipam\") pod \"dnsmasq-dns-dc7c944bf-29l2r\" (UID: \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\") " pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.823087 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-ovsdbserver-sb\") pod \"dnsmasq-dns-dc7c944bf-29l2r\" (UID: \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\") " pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.823590 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-config\") pod \"dnsmasq-dns-dc7c944bf-29l2r\" (UID: \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\") " pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.823646 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.824007 4910 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-ovsdbserver-sb\") pod \"dnsmasq-dns-dc7c944bf-29l2r\" (UID: \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\") " pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.824206 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-dns-svc\") pod \"dnsmasq-dns-dc7c944bf-29l2r\" (UID: \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\") " pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.824633 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-ovsdbserver-nb\") pod \"dnsmasq-dns-dc7c944bf-29l2r\" (UID: \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\") " pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.825148 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-openstack-edpm-ipam\") pod \"dnsmasq-dns-dc7c944bf-29l2r\" (UID: \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\") " pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.830986 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-dns-swift-storage-0\") pod \"dnsmasq-dns-dc7c944bf-29l2r\" (UID: \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\") " pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.851323 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 22:21:13 crc 
kubenswrapper[4910]: I0226 22:21:13.858102 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm5dt\" (UniqueName: \"kubernetes.io/projected/86c9ac9a-e5f3-44c9-b009-c174b942eb90-kube-api-access-wm5dt\") pod \"dnsmasq-dns-dc7c944bf-29l2r\" (UID: \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\") " pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.878106 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.881136 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.885310 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.885371 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.885326 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.885550 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.885621 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.885682 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.885859 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-tnj7p" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.927878 4910 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d43ea280-40c0-430e-8d12-41a3522f4f29-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.927969 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpxzr\" (UniqueName: \"kubernetes.io/projected/d43ea280-40c0-430e-8d12-41a3522f4f29-kube-api-access-xpxzr\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.928014 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d43ea280-40c0-430e-8d12-41a3522f4f29-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.928050 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d43ea280-40c0-430e-8d12-41a3522f4f29-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.928076 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d43ea280-40c0-430e-8d12-41a3522f4f29-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.928097 4910 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d43ea280-40c0-430e-8d12-41a3522f4f29-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.928154 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d43ea280-40c0-430e-8d12-41a3522f4f29-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.928202 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d43ea280-40c0-430e-8d12-41a3522f4f29-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.928237 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d43ea280-40c0-430e-8d12-41a3522f4f29-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.928331 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d43ea280-40c0-430e-8d12-41a3522f4f29-config-data\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.928355 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-38ab41ea-5004-4a99-bb29-739bde4c1520\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38ab41ea-5004-4a99-bb29-739bde4c1520\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.940725 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48cec592-3a36-46fc-813d-bf8fa5212e89" path="/var/lib/kubelet/pods/48cec592-3a36-46fc-813d-bf8fa5212e89/volumes" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.942923 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f98f3d3a-39ee-4b35-8653-ae334df58fca" path="/var/lib/kubelet/pods/f98f3d3a-39ee-4b35-8653-ae334df58fca/volumes" Feb 26 22:21:13 crc kubenswrapper[4910]: I0226 22:21:13.944999 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.007333 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.007776 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-proc-0" podUID="6d3b6f00-440e-46b6-a08a-a3219e244da6" containerName="cloudkitty-proc" containerID="cri-o://4267f4b88f0c355f7e2cced4125c0fd1b7ddf380608dc56dba2d27282b64e1d6" gracePeriod=30 Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.018297 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.031400 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1267de00-e6b5-4340-b2e4-5614288011dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.031628 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d43ea280-40c0-430e-8d12-41a3522f4f29-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.031722 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d43ea280-40c0-430e-8d12-41a3522f4f29-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.031810 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1267de00-e6b5-4340-b2e4-5614288011dc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.031888 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d43ea280-40c0-430e-8d12-41a3522f4f29-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 
22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.031972 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d43ea280-40c0-430e-8d12-41a3522f4f29-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.032036 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1267de00-e6b5-4340-b2e4-5614288011dc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.032110 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d43ea280-40c0-430e-8d12-41a3522f4f29-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.032226 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1267de00-e6b5-4340-b2e4-5614288011dc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.032310 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1267de00-e6b5-4340-b2e4-5614288011dc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.032355 4910 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d43ea280-40c0-430e-8d12-41a3522f4f29-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.031908 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.032596 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="1d7bbc64-bf64-4c75-beb1-ce50a75b3724" containerName="cloudkitty-api-log" containerID="cri-o://95753abeb95bd8c5572abc6e0b17c0930c983265d1f0f20e015753595af688ae" gracePeriod=30 Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.033026 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="1d7bbc64-bf64-4c75-beb1-ce50a75b3724" containerName="cloudkitty-api" containerID="cri-o://d65dd4846bab7638b3db1bfe7b618eefd48c2ccb4dffc59c7c3fc6677f663022" gracePeriod=30 Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.036592 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d43ea280-40c0-430e-8d12-41a3522f4f29-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.037113 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d43ea280-40c0-430e-8d12-41a3522f4f29-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.037265 4910 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b4973784-2c8e-4725-bfc4-64e42ff04268\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4973784-2c8e-4725-bfc4-64e42ff04268\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.037380 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d43ea280-40c0-430e-8d12-41a3522f4f29-config-data\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.037407 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-38ab41ea-5004-4a99-bb29-739bde4c1520\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38ab41ea-5004-4a99-bb29-739bde4c1520\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.037431 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1267de00-e6b5-4340-b2e4-5614288011dc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.037466 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1267de00-e6b5-4340-b2e4-5614288011dc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.037491 4910 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d43ea280-40c0-430e-8d12-41a3522f4f29-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.037510 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hzvp\" (UniqueName: \"kubernetes.io/projected/1267de00-e6b5-4340-b2e4-5614288011dc-kube-api-access-2hzvp\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.037578 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1267de00-e6b5-4340-b2e4-5614288011dc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.037631 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1267de00-e6b5-4340-b2e4-5614288011dc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.037665 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpxzr\" (UniqueName: \"kubernetes.io/projected/d43ea280-40c0-430e-8d12-41a3522f4f29-kube-api-access-xpxzr\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.037739 4910 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d43ea280-40c0-430e-8d12-41a3522f4f29-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.037767 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d43ea280-40c0-430e-8d12-41a3522f4f29-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.038393 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d43ea280-40c0-430e-8d12-41a3522f4f29-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.038493 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d43ea280-40c0-430e-8d12-41a3522f4f29-config-data\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.039136 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d43ea280-40c0-430e-8d12-41a3522f4f29-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.044753 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d43ea280-40c0-430e-8d12-41a3522f4f29-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " 
pod="openstack/rabbitmq-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.052363 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d43ea280-40c0-430e-8d12-41a3522f4f29-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.052639 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d43ea280-40c0-430e-8d12-41a3522f4f29-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.052785 4910 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.052837 4910 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-38ab41ea-5004-4a99-bb29-739bde4c1520\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38ab41ea-5004-4a99-bb29-739bde4c1520\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3035ed83486c60d032b5a441bf8b23828742cdd52b6aade92b200a795655bf3e/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.061861 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpxzr\" (UniqueName: \"kubernetes.io/projected/d43ea280-40c0-430e-8d12-41a3522f4f29-kube-api-access-xpxzr\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.139001 4910 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-38ab41ea-5004-4a99-bb29-739bde4c1520\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38ab41ea-5004-4a99-bb29-739bde4c1520\") pod \"rabbitmq-server-0\" (UID: \"d43ea280-40c0-430e-8d12-41a3522f4f29\") " pod="openstack/rabbitmq-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.139761 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1267de00-e6b5-4340-b2e4-5614288011dc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.139813 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1267de00-e6b5-4340-b2e4-5614288011dc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.139866 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1267de00-e6b5-4340-b2e4-5614288011dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.139915 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1267de00-e6b5-4340-b2e4-5614288011dc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.139949 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1267de00-e6b5-4340-b2e4-5614288011dc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.139991 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1267de00-e6b5-4340-b2e4-5614288011dc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.140013 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1267de00-e6b5-4340-b2e4-5614288011dc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.140045 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b4973784-2c8e-4725-bfc4-64e42ff04268\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4973784-2c8e-4725-bfc4-64e42ff04268\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.140104 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1267de00-e6b5-4340-b2e4-5614288011dc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.140129 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/1267de00-e6b5-4340-b2e4-5614288011dc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.140145 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hzvp\" (UniqueName: \"kubernetes.io/projected/1267de00-e6b5-4340-b2e4-5614288011dc-kube-api-access-2hzvp\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.142090 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1267de00-e6b5-4340-b2e4-5614288011dc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.142609 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1267de00-e6b5-4340-b2e4-5614288011dc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.146867 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1267de00-e6b5-4340-b2e4-5614288011dc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.149829 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1267de00-e6b5-4340-b2e4-5614288011dc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.151909 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1267de00-e6b5-4340-b2e4-5614288011dc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.152308 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1267de00-e6b5-4340-b2e4-5614288011dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.152536 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1267de00-e6b5-4340-b2e4-5614288011dc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.153717 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1267de00-e6b5-4340-b2e4-5614288011dc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.160854 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1267de00-e6b5-4340-b2e4-5614288011dc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc 
kubenswrapper[4910]: I0226 22:21:14.161318 4910 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.161358 4910 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b4973784-2c8e-4725-bfc4-64e42ff04268\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4973784-2c8e-4725-bfc4-64e42ff04268\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/728d4105af9584126f0fc1a781d59af315837d78da67022a7916c5e8477a32ea/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.166989 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.170291 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hzvp\" (UniqueName: \"kubernetes.io/projected/1267de00-e6b5-4340-b2e4-5614288011dc-kube-api-access-2hzvp\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.312788 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b4973784-2c8e-4725-bfc4-64e42ff04268\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4973784-2c8e-4725-bfc4-64e42ff04268\") pod \"rabbitmq-cell1-server-0\" (UID: \"1267de00-e6b5-4340-b2e4-5614288011dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.337351 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"62dbf00f-5cf4-4400-8eb3-f861fadda173","Type":"ContainerStarted","Data":"e3fa1fb89c5224e3b80847f2acdf1bf4f51e55b48ca0408b709adf6a0055195a"} Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.339547 4910 generic.go:334] "Generic (PLEG): container finished" podID="1d7bbc64-bf64-4c75-beb1-ce50a75b3724" containerID="95753abeb95bd8c5572abc6e0b17c0930c983265d1f0f20e015753595af688ae" exitCode=143 Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.339622 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"1d7bbc64-bf64-4c75-beb1-ce50a75b3724","Type":"ContainerDied","Data":"95753abeb95bd8c5572abc6e0b17c0930c983265d1f0f20e015753595af688ae"} Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.512442 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.560714 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-29l2r"] Feb 26 22:21:14 crc kubenswrapper[4910]: I0226 22:21:14.892936 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 22:21:14 crc kubenswrapper[4910]: W0226 22:21:14.896245 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd43ea280_40c0_430e_8d12_41a3522f4f29.slice/crio-b830ac529e6683e4c4e6e69a0f8f1e52bc7a682004c2536b55a70ed656da7b17 WatchSource:0}: Error finding container b830ac529e6683e4c4e6e69a0f8f1e52bc7a682004c2536b55a70ed656da7b17: Status 404 returned error can't find the container with id b830ac529e6683e4c4e6e69a0f8f1e52bc7a682004c2536b55a70ed656da7b17 Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.115140 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.281477 4910 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.366937 4910 generic.go:334] "Generic (PLEG): container finished" podID="86c9ac9a-e5f3-44c9-b009-c174b942eb90" containerID="9a070ebe844b1d8692b6747fc3ff20403ec88c15e9cf280bacd1f9af5b780557" exitCode=0 Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.367009 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" event={"ID":"86c9ac9a-e5f3-44c9-b009-c174b942eb90","Type":"ContainerDied","Data":"9a070ebe844b1d8692b6747fc3ff20403ec88c15e9cf280bacd1f9af5b780557"} Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.367039 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" event={"ID":"86c9ac9a-e5f3-44c9-b009-c174b942eb90","Type":"ContainerStarted","Data":"8aa36aeeea1e3e48bad645b7a410e2ad3fc17ddd4b577dca1861c82ad74dd225"} Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.401308 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62dbf00f-5cf4-4400-8eb3-f861fadda173","Type":"ContainerStarted","Data":"414f4cf0cd6b15e12fe94da8040f52d89738b7553582baf950cde2baa221e50d"} Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.410537 4910 generic.go:334] "Generic (PLEG): container finished" podID="1d7bbc64-bf64-4c75-beb1-ce50a75b3724" containerID="d65dd4846bab7638b3db1bfe7b618eefd48c2ccb4dffc59c7c3fc6677f663022" exitCode=0 Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.410578 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"1d7bbc64-bf64-4c75-beb1-ce50a75b3724","Type":"ContainerDied","Data":"d65dd4846bab7638b3db1bfe7b618eefd48c2ccb4dffc59c7c3fc6677f663022"} Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.414765 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"d43ea280-40c0-430e-8d12-41a3522f4f29","Type":"ContainerStarted","Data":"b830ac529e6683e4c4e6e69a0f8f1e52bc7a682004c2536b55a70ed656da7b17"} Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.415827 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1267de00-e6b5-4340-b2e4-5614288011dc","Type":"ContainerStarted","Data":"d4176ffac8d6171a105cec2134aa23cd3f1d07013151712c0a4c4dac0ed91881"} Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.418591 4910 generic.go:334] "Generic (PLEG): container finished" podID="6d3b6f00-440e-46b6-a08a-a3219e244da6" containerID="4267f4b88f0c355f7e2cced4125c0fd1b7ddf380608dc56dba2d27282b64e1d6" exitCode=0 Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.418642 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"6d3b6f00-440e-46b6-a08a-a3219e244da6","Type":"ContainerDied","Data":"4267f4b88f0c355f7e2cced4125c0fd1b7ddf380608dc56dba2d27282b64e1d6"} Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.418658 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"6d3b6f00-440e-46b6-a08a-a3219e244da6","Type":"ContainerDied","Data":"6add6f037b275386fb2ecf7903290d89581906fd2507547f8ed14cbd022a1302"} Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.418674 4910 scope.go:117] "RemoveContainer" containerID="4267f4b88f0c355f7e2cced4125c0fd1b7ddf380608dc56dba2d27282b64e1d6" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.418811 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.473462 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/6d3b6f00-440e-46b6-a08a-a3219e244da6-certs\") pod \"6d3b6f00-440e-46b6-a08a-a3219e244da6\" (UID: \"6d3b6f00-440e-46b6-a08a-a3219e244da6\") " Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.473691 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td59p\" (UniqueName: \"kubernetes.io/projected/6d3b6f00-440e-46b6-a08a-a3219e244da6-kube-api-access-td59p\") pod \"6d3b6f00-440e-46b6-a08a-a3219e244da6\" (UID: \"6d3b6f00-440e-46b6-a08a-a3219e244da6\") " Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.473743 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d3b6f00-440e-46b6-a08a-a3219e244da6-config-data-custom\") pod \"6d3b6f00-440e-46b6-a08a-a3219e244da6\" (UID: \"6d3b6f00-440e-46b6-a08a-a3219e244da6\") " Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.473838 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3b6f00-440e-46b6-a08a-a3219e244da6-combined-ca-bundle\") pod \"6d3b6f00-440e-46b6-a08a-a3219e244da6\" (UID: \"6d3b6f00-440e-46b6-a08a-a3219e244da6\") " Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.473859 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3b6f00-440e-46b6-a08a-a3219e244da6-config-data\") pod \"6d3b6f00-440e-46b6-a08a-a3219e244da6\" (UID: \"6d3b6f00-440e-46b6-a08a-a3219e244da6\") " Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.473898 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6d3b6f00-440e-46b6-a08a-a3219e244da6-scripts\") pod \"6d3b6f00-440e-46b6-a08a-a3219e244da6\" (UID: \"6d3b6f00-440e-46b6-a08a-a3219e244da6\") " Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.481055 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d3b6f00-440e-46b6-a08a-a3219e244da6-kube-api-access-td59p" (OuterVolumeSpecName: "kube-api-access-td59p") pod "6d3b6f00-440e-46b6-a08a-a3219e244da6" (UID: "6d3b6f00-440e-46b6-a08a-a3219e244da6"). InnerVolumeSpecName "kube-api-access-td59p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.480802 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d3b6f00-440e-46b6-a08a-a3219e244da6-certs" (OuterVolumeSpecName: "certs") pod "6d3b6f00-440e-46b6-a08a-a3219e244da6" (UID: "6d3b6f00-440e-46b6-a08a-a3219e244da6"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.490250 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3b6f00-440e-46b6-a08a-a3219e244da6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6d3b6f00-440e-46b6-a08a-a3219e244da6" (UID: "6d3b6f00-440e-46b6-a08a-a3219e244da6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.491228 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3b6f00-440e-46b6-a08a-a3219e244da6-scripts" (OuterVolumeSpecName: "scripts") pod "6d3b6f00-440e-46b6-a08a-a3219e244da6" (UID: "6d3b6f00-440e-46b6-a08a-a3219e244da6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.520530 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3b6f00-440e-46b6-a08a-a3219e244da6-config-data" (OuterVolumeSpecName: "config-data") pod "6d3b6f00-440e-46b6-a08a-a3219e244da6" (UID: "6d3b6f00-440e-46b6-a08a-a3219e244da6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.521913 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3b6f00-440e-46b6-a08a-a3219e244da6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d3b6f00-440e-46b6-a08a-a3219e244da6" (UID: "6d3b6f00-440e-46b6-a08a-a3219e244da6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.584811 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d3b6f00-440e-46b6-a08a-a3219e244da6-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.584854 4910 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/6d3b6f00-440e-46b6-a08a-a3219e244da6-certs\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.584867 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td59p\" (UniqueName: \"kubernetes.io/projected/6d3b6f00-440e-46b6-a08a-a3219e244da6-kube-api-access-td59p\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.584884 4910 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d3b6f00-440e-46b6-a08a-a3219e244da6-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:15 
crc kubenswrapper[4910]: I0226 22:21:15.584897 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3b6f00-440e-46b6-a08a-a3219e244da6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.584909 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3b6f00-440e-46b6-a08a-a3219e244da6-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.624834 4910 scope.go:117] "RemoveContainer" containerID="4267f4b88f0c355f7e2cced4125c0fd1b7ddf380608dc56dba2d27282b64e1d6" Feb 26 22:21:15 crc kubenswrapper[4910]: E0226 22:21:15.625485 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4267f4b88f0c355f7e2cced4125c0fd1b7ddf380608dc56dba2d27282b64e1d6\": container with ID starting with 4267f4b88f0c355f7e2cced4125c0fd1b7ddf380608dc56dba2d27282b64e1d6 not found: ID does not exist" containerID="4267f4b88f0c355f7e2cced4125c0fd1b7ddf380608dc56dba2d27282b64e1d6" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.625519 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4267f4b88f0c355f7e2cced4125c0fd1b7ddf380608dc56dba2d27282b64e1d6"} err="failed to get container status \"4267f4b88f0c355f7e2cced4125c0fd1b7ddf380608dc56dba2d27282b64e1d6\": rpc error: code = NotFound desc = could not find container \"4267f4b88f0c355f7e2cced4125c0fd1b7ddf380608dc56dba2d27282b64e1d6\": container with ID starting with 4267f4b88f0c355f7e2cced4125c0fd1b7ddf380608dc56dba2d27282b64e1d6 not found: ID does not exist" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.631939 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.753965 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.764659 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.783302 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 26 22:21:15 crc kubenswrapper[4910]: E0226 22:21:15.794038 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d7bbc64-bf64-4c75-beb1-ce50a75b3724" containerName="cloudkitty-api-log" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.794073 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d7bbc64-bf64-4c75-beb1-ce50a75b3724" containerName="cloudkitty-api-log" Feb 26 22:21:15 crc kubenswrapper[4910]: E0226 22:21:15.794096 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d7bbc64-bf64-4c75-beb1-ce50a75b3724" containerName="cloudkitty-api" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.794103 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d7bbc64-bf64-4c75-beb1-ce50a75b3724" containerName="cloudkitty-api" Feb 26 22:21:15 crc kubenswrapper[4910]: E0226 22:21:15.794136 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d3b6f00-440e-46b6-a08a-a3219e244da6" containerName="cloudkitty-proc" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.794142 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d3b6f00-440e-46b6-a08a-a3219e244da6" containerName="cloudkitty-proc" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.794849 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d7bbc64-bf64-4c75-beb1-ce50a75b3724" containerName="cloudkitty-api-log" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.794869 4910 
memory_manager.go:354] "RemoveStaleState removing state" podUID="1d7bbc64-bf64-4c75-beb1-ce50a75b3724" containerName="cloudkitty-api" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.794891 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d3b6f00-440e-46b6-a08a-a3219e244da6" containerName="cloudkitty-proc" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.795763 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-logs\") pod \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.795840 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-public-tls-certs\") pod \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.795910 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-internal-tls-certs\") pod \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.795948 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-config-data\") pod \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.795982 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-config-data-custom\") pod 
\"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.795985 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.796122 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-certs\") pod \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.796205 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-scripts\") pod \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.796259 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbt9s\" (UniqueName: \"kubernetes.io/projected/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-kube-api-access-xbt9s\") pod \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.796319 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-combined-ca-bundle\") pod \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\" (UID: \"1d7bbc64-bf64-4c75-beb1-ce50a75b3724\") " Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.798767 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-logs" (OuterVolumeSpecName: "logs") pod "1d7bbc64-bf64-4c75-beb1-ce50a75b3724" (UID: "1d7bbc64-bf64-4c75-beb1-ce50a75b3724"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.801410 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.810942 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.893016 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-kube-api-access-xbt9s" (OuterVolumeSpecName: "kube-api-access-xbt9s") pod "1d7bbc64-bf64-4c75-beb1-ce50a75b3724" (UID: "1d7bbc64-bf64-4c75-beb1-ce50a75b3724"). InnerVolumeSpecName "kube-api-access-xbt9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.898711 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/16b7305a-e063-4e97-b224-61fe0116227f-certs\") pod \"cloudkitty-proc-0\" (UID: \"16b7305a-e063-4e97-b224-61fe0116227f\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.900655 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16b7305a-e063-4e97-b224-61fe0116227f-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"16b7305a-e063-4e97-b224-61fe0116227f\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.900932 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16b7305a-e063-4e97-b224-61fe0116227f-scripts\") pod \"cloudkitty-proc-0\" (UID: \"16b7305a-e063-4e97-b224-61fe0116227f\") " pod="openstack/cloudkitty-proc-0" 
Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.901106 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6vrb\" (UniqueName: \"kubernetes.io/projected/16b7305a-e063-4e97-b224-61fe0116227f-kube-api-access-x6vrb\") pod \"cloudkitty-proc-0\" (UID: \"16b7305a-e063-4e97-b224-61fe0116227f\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.901315 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16b7305a-e063-4e97-b224-61fe0116227f-config-data\") pod \"cloudkitty-proc-0\" (UID: \"16b7305a-e063-4e97-b224-61fe0116227f\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.901500 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16b7305a-e063-4e97-b224-61fe0116227f-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"16b7305a-e063-4e97-b224-61fe0116227f\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.901818 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbt9s\" (UniqueName: \"kubernetes.io/projected/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-kube-api-access-xbt9s\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.901916 4910 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-logs\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.916324 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-certs" (OuterVolumeSpecName: "certs") pod "1d7bbc64-bf64-4c75-beb1-ce50a75b3724" (UID: 
"1d7bbc64-bf64-4c75-beb1-ce50a75b3724"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.916755 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d3b6f00-440e-46b6-a08a-a3219e244da6" path="/var/lib/kubelet/pods/6d3b6f00-440e-46b6-a08a-a3219e244da6/volumes" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.919481 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-scripts" (OuterVolumeSpecName: "scripts") pod "1d7bbc64-bf64-4c75-beb1-ce50a75b3724" (UID: "1d7bbc64-bf64-4c75-beb1-ce50a75b3724"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:21:15 crc kubenswrapper[4910]: I0226 22:21:15.924888 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1d7bbc64-bf64-4c75-beb1-ce50a75b3724" (UID: "1d7bbc64-bf64-4c75-beb1-ce50a75b3724"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.003406 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/16b7305a-e063-4e97-b224-61fe0116227f-certs\") pod \"cloudkitty-proc-0\" (UID: \"16b7305a-e063-4e97-b224-61fe0116227f\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.003459 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16b7305a-e063-4e97-b224-61fe0116227f-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"16b7305a-e063-4e97-b224-61fe0116227f\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.003566 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16b7305a-e063-4e97-b224-61fe0116227f-scripts\") pod \"cloudkitty-proc-0\" (UID: \"16b7305a-e063-4e97-b224-61fe0116227f\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.003619 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6vrb\" (UniqueName: \"kubernetes.io/projected/16b7305a-e063-4e97-b224-61fe0116227f-kube-api-access-x6vrb\") pod \"cloudkitty-proc-0\" (UID: \"16b7305a-e063-4e97-b224-61fe0116227f\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.003694 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16b7305a-e063-4e97-b224-61fe0116227f-config-data\") pod \"cloudkitty-proc-0\" (UID: \"16b7305a-e063-4e97-b224-61fe0116227f\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.003715 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16b7305a-e063-4e97-b224-61fe0116227f-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"16b7305a-e063-4e97-b224-61fe0116227f\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.003852 4910 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-certs\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.003867 4910 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.003876 4910 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.007413 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16b7305a-e063-4e97-b224-61fe0116227f-scripts\") pod \"cloudkitty-proc-0\" (UID: \"16b7305a-e063-4e97-b224-61fe0116227f\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.007857 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.008430 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16b7305a-e063-4e97-b224-61fe0116227f-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"16b7305a-e063-4e97-b224-61fe0116227f\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.010341 4910 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/16b7305a-e063-4e97-b224-61fe0116227f-certs\") pod \"cloudkitty-proc-0\" (UID: \"16b7305a-e063-4e97-b224-61fe0116227f\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.014448 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16b7305a-e063-4e97-b224-61fe0116227f-config-data\") pod \"cloudkitty-proc-0\" (UID: \"16b7305a-e063-4e97-b224-61fe0116227f\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.021577 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6vrb\" (UniqueName: \"kubernetes.io/projected/16b7305a-e063-4e97-b224-61fe0116227f-kube-api-access-x6vrb\") pod \"cloudkitty-proc-0\" (UID: \"16b7305a-e063-4e97-b224-61fe0116227f\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.022041 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16b7305a-e063-4e97-b224-61fe0116227f-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"16b7305a-e063-4e97-b224-61fe0116227f\") " pod="openstack/cloudkitty-proc-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.108760 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1d7bbc64-bf64-4c75-beb1-ce50a75b3724" (UID: "1d7bbc64-bf64-4c75-beb1-ce50a75b3724"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.127362 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.207186 4910 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.262250 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-config-data" (OuterVolumeSpecName: "config-data") pod "1d7bbc64-bf64-4c75-beb1-ce50a75b3724" (UID: "1d7bbc64-bf64-4c75-beb1-ce50a75b3724"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.270470 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1d7bbc64-bf64-4c75-beb1-ce50a75b3724" (UID: "1d7bbc64-bf64-4c75-beb1-ce50a75b3724"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.271731 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d7bbc64-bf64-4c75-beb1-ce50a75b3724" (UID: "1d7bbc64-bf64-4c75-beb1-ce50a75b3724"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.309530 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.309570 4910 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.309583 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d7bbc64-bf64-4c75-beb1-ce50a75b3724-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.431536 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"1d7bbc64-bf64-4c75-beb1-ce50a75b3724","Type":"ContainerDied","Data":"9ccded9a5d165e29a24c8f57e7ccc9c974019172565af74fbd3c69ae15bcd1cb"} Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.431816 4910 scope.go:117] "RemoveContainer" containerID="d65dd4846bab7638b3db1bfe7b618eefd48c2ccb4dffc59c7c3fc6677f663022" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.431644 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.447407 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" event={"ID":"86c9ac9a-e5f3-44c9-b009-c174b942eb90","Type":"ContainerStarted","Data":"1dae9e80db527ec5f29b37528f8ffb3740b72c850fcee12846e8bd6ed25cce34"} Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.447645 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.472918 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" podStartSLOduration=3.472899998 podStartE2EDuration="3.472899998s" podCreationTimestamp="2026-02-26 22:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:21:16.463323646 +0000 UTC m=+1561.542814197" watchObservedRunningTime="2026-02-26 22:21:16.472899998 +0000 UTC m=+1561.552390539" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.590994 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.603229 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.603283 4910 scope.go:117] "RemoveContainer" containerID="95753abeb95bd8c5572abc6e0b17c0930c983265d1f0f20e015753595af688ae" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.629551 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.660268 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.662280 4910 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.665284 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.665602 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-public-svc" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.665769 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-internal-svc" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.673018 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.824112 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/91580ce8-a4ad-44bd-99b3-0d7b077d0401-certs\") pod \"cloudkitty-api-0\" (UID: \"91580ce8-a4ad-44bd-99b3-0d7b077d0401\") " pod="openstack/cloudkitty-api-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.824414 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91580ce8-a4ad-44bd-99b3-0d7b077d0401-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"91580ce8-a4ad-44bd-99b3-0d7b077d0401\") " pod="openstack/cloudkitty-api-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.824448 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91580ce8-a4ad-44bd-99b3-0d7b077d0401-scripts\") pod \"cloudkitty-api-0\" (UID: \"91580ce8-a4ad-44bd-99b3-0d7b077d0401\") " pod="openstack/cloudkitty-api-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.824480 4910 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91580ce8-a4ad-44bd-99b3-0d7b077d0401-logs\") pod \"cloudkitty-api-0\" (UID: \"91580ce8-a4ad-44bd-99b3-0d7b077d0401\") " pod="openstack/cloudkitty-api-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.824524 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91580ce8-a4ad-44bd-99b3-0d7b077d0401-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"91580ce8-a4ad-44bd-99b3-0d7b077d0401\") " pod="openstack/cloudkitty-api-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.824541 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91580ce8-a4ad-44bd-99b3-0d7b077d0401-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"91580ce8-a4ad-44bd-99b3-0d7b077d0401\") " pod="openstack/cloudkitty-api-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.824770 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91580ce8-a4ad-44bd-99b3-0d7b077d0401-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"91580ce8-a4ad-44bd-99b3-0d7b077d0401\") " pod="openstack/cloudkitty-api-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.824909 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91580ce8-a4ad-44bd-99b3-0d7b077d0401-config-data\") pod \"cloudkitty-api-0\" (UID: \"91580ce8-a4ad-44bd-99b3-0d7b077d0401\") " pod="openstack/cloudkitty-api-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.825002 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4scbg\" (UniqueName: \"kubernetes.io/projected/91580ce8-a4ad-44bd-99b3-0d7b077d0401-kube-api-access-4scbg\") pod \"cloudkitty-api-0\" (UID: \"91580ce8-a4ad-44bd-99b3-0d7b077d0401\") " pod="openstack/cloudkitty-api-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.927403 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91580ce8-a4ad-44bd-99b3-0d7b077d0401-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"91580ce8-a4ad-44bd-99b3-0d7b077d0401\") " pod="openstack/cloudkitty-api-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.927457 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91580ce8-a4ad-44bd-99b3-0d7b077d0401-scripts\") pod \"cloudkitty-api-0\" (UID: \"91580ce8-a4ad-44bd-99b3-0d7b077d0401\") " pod="openstack/cloudkitty-api-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.927492 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91580ce8-a4ad-44bd-99b3-0d7b077d0401-logs\") pod \"cloudkitty-api-0\" (UID: \"91580ce8-a4ad-44bd-99b3-0d7b077d0401\") " pod="openstack/cloudkitty-api-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.927543 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91580ce8-a4ad-44bd-99b3-0d7b077d0401-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"91580ce8-a4ad-44bd-99b3-0d7b077d0401\") " pod="openstack/cloudkitty-api-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.927562 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91580ce8-a4ad-44bd-99b3-0d7b077d0401-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"91580ce8-a4ad-44bd-99b3-0d7b077d0401\") 
" pod="openstack/cloudkitty-api-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.927605 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91580ce8-a4ad-44bd-99b3-0d7b077d0401-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"91580ce8-a4ad-44bd-99b3-0d7b077d0401\") " pod="openstack/cloudkitty-api-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.927638 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91580ce8-a4ad-44bd-99b3-0d7b077d0401-config-data\") pod \"cloudkitty-api-0\" (UID: \"91580ce8-a4ad-44bd-99b3-0d7b077d0401\") " pod="openstack/cloudkitty-api-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.927666 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4scbg\" (UniqueName: \"kubernetes.io/projected/91580ce8-a4ad-44bd-99b3-0d7b077d0401-kube-api-access-4scbg\") pod \"cloudkitty-api-0\" (UID: \"91580ce8-a4ad-44bd-99b3-0d7b077d0401\") " pod="openstack/cloudkitty-api-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.927698 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/91580ce8-a4ad-44bd-99b3-0d7b077d0401-certs\") pod \"cloudkitty-api-0\" (UID: \"91580ce8-a4ad-44bd-99b3-0d7b077d0401\") " pod="openstack/cloudkitty-api-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.928722 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91580ce8-a4ad-44bd-99b3-0d7b077d0401-logs\") pod \"cloudkitty-api-0\" (UID: \"91580ce8-a4ad-44bd-99b3-0d7b077d0401\") " pod="openstack/cloudkitty-api-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.931555 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: 
\"kubernetes.io/projected/91580ce8-a4ad-44bd-99b3-0d7b077d0401-certs\") pod \"cloudkitty-api-0\" (UID: \"91580ce8-a4ad-44bd-99b3-0d7b077d0401\") " pod="openstack/cloudkitty-api-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.932028 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91580ce8-a4ad-44bd-99b3-0d7b077d0401-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"91580ce8-a4ad-44bd-99b3-0d7b077d0401\") " pod="openstack/cloudkitty-api-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.932260 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91580ce8-a4ad-44bd-99b3-0d7b077d0401-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"91580ce8-a4ad-44bd-99b3-0d7b077d0401\") " pod="openstack/cloudkitty-api-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.934044 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91580ce8-a4ad-44bd-99b3-0d7b077d0401-scripts\") pod \"cloudkitty-api-0\" (UID: \"91580ce8-a4ad-44bd-99b3-0d7b077d0401\") " pod="openstack/cloudkitty-api-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.934474 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91580ce8-a4ad-44bd-99b3-0d7b077d0401-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"91580ce8-a4ad-44bd-99b3-0d7b077d0401\") " pod="openstack/cloudkitty-api-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.934778 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91580ce8-a4ad-44bd-99b3-0d7b077d0401-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"91580ce8-a4ad-44bd-99b3-0d7b077d0401\") " pod="openstack/cloudkitty-api-0" Feb 26 22:21:16 crc 
kubenswrapper[4910]: I0226 22:21:16.941477 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91580ce8-a4ad-44bd-99b3-0d7b077d0401-config-data\") pod \"cloudkitty-api-0\" (UID: \"91580ce8-a4ad-44bd-99b3-0d7b077d0401\") " pod="openstack/cloudkitty-api-0" Feb 26 22:21:16 crc kubenswrapper[4910]: I0226 22:21:16.947259 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4scbg\" (UniqueName: \"kubernetes.io/projected/91580ce8-a4ad-44bd-99b3-0d7b077d0401-kube-api-access-4scbg\") pod \"cloudkitty-api-0\" (UID: \"91580ce8-a4ad-44bd-99b3-0d7b077d0401\") " pod="openstack/cloudkitty-api-0" Feb 26 22:21:17 crc kubenswrapper[4910]: I0226 22:21:17.076950 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 26 22:21:17 crc kubenswrapper[4910]: I0226 22:21:17.461015 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d43ea280-40c0-430e-8d12-41a3522f4f29","Type":"ContainerStarted","Data":"2f18805f188d2ea6b8f2573dba8fbd4eb3a39160e08dfaedf19a2e8aa45c39dd"} Feb 26 22:21:17 crc kubenswrapper[4910]: I0226 22:21:17.463403 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"16b7305a-e063-4e97-b224-61fe0116227f","Type":"ContainerStarted","Data":"d447996fc22e529e421d9faf0d7965a88d95631e8f5984dd6b5861ff1112ad21"} Feb 26 22:21:17 crc kubenswrapper[4910]: I0226 22:21:17.463432 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"16b7305a-e063-4e97-b224-61fe0116227f","Type":"ContainerStarted","Data":"cd376f191e074c00ff9c3ef27b76b07ac987068b47c11cbb062ffabac2121974"} Feb 26 22:21:17 crc kubenswrapper[4910]: I0226 22:21:17.468322 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"1267de00-e6b5-4340-b2e4-5614288011dc","Type":"ContainerStarted","Data":"84da54014d4f3a8e0a094c9f3f37d510463d17d13b49f082105950f321f4c48d"} Feb 26 22:21:17 crc kubenswrapper[4910]: I0226 22:21:17.475978 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62dbf00f-5cf4-4400-8eb3-f861fadda173","Type":"ContainerStarted","Data":"ed812c9d4462c058b99c9ddd9c48c8dbb8aeffb11ecf66d4f631c099a6441a1b"} Feb 26 22:21:17 crc kubenswrapper[4910]: I0226 22:21:17.476201 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 22:21:17 crc kubenswrapper[4910]: I0226 22:21:17.584092 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 22:21:17 crc kubenswrapper[4910]: I0226 22:21:17.593676 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.096057855 podStartE2EDuration="9.593652812s" podCreationTimestamp="2026-02-26 22:21:08 +0000 UTC" firstStartedPulling="2026-02-26 22:21:09.117359324 +0000 UTC m=+1554.196849875" lastFinishedPulling="2026-02-26 22:21:16.614954281 +0000 UTC m=+1561.694444832" observedRunningTime="2026-02-26 22:21:17.566843171 +0000 UTC m=+1562.646333722" watchObservedRunningTime="2026-02-26 22:21:17.593652812 +0000 UTC m=+1562.673143353" Feb 26 22:21:17 crc kubenswrapper[4910]: I0226 22:21:17.598314 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.453485011 podStartE2EDuration="2.598298838s" podCreationTimestamp="2026-02-26 22:21:15 +0000 UTC" firstStartedPulling="2026-02-26 22:21:16.653033959 +0000 UTC m=+1561.732524500" lastFinishedPulling="2026-02-26 22:21:16.797847786 +0000 UTC m=+1561.877338327" observedRunningTime="2026-02-26 22:21:17.592246714 +0000 UTC m=+1562.671737255" watchObservedRunningTime="2026-02-26 22:21:17.598298838 +0000 UTC m=+1562.677789379" Feb 26 
22:21:17 crc kubenswrapper[4910]: I0226 22:21:17.915507 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d7bbc64-bf64-4c75-beb1-ce50a75b3724" path="/var/lib/kubelet/pods/1d7bbc64-bf64-4c75-beb1-ce50a75b3724/volumes" Feb 26 22:21:18 crc kubenswrapper[4910]: I0226 22:21:18.495316 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"91580ce8-a4ad-44bd-99b3-0d7b077d0401","Type":"ContainerStarted","Data":"5183339fe28f491dc90bde629b6f5815a1cb3b42a635339216bfcb620238e17c"} Feb 26 22:21:18 crc kubenswrapper[4910]: I0226 22:21:18.496754 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Feb 26 22:21:18 crc kubenswrapper[4910]: I0226 22:21:18.496847 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"91580ce8-a4ad-44bd-99b3-0d7b077d0401","Type":"ContainerStarted","Data":"af9e322118b207a416a9be984c2bf2f372a13360f93f56d97d7a1db683465422"} Feb 26 22:21:18 crc kubenswrapper[4910]: I0226 22:21:18.496952 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"91580ce8-a4ad-44bd-99b3-0d7b077d0401","Type":"ContainerStarted","Data":"3e2021825c15d8fde7078460c425943006e60c941f88fa0d8e4d9fb3d8f5f430"} Feb 26 22:21:18 crc kubenswrapper[4910]: I0226 22:21:18.515798 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=2.51578201 podStartE2EDuration="2.51578201s" podCreationTimestamp="2026-02-26 22:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:21:18.511528064 +0000 UTC m=+1563.591018605" watchObservedRunningTime="2026-02-26 22:21:18.51578201 +0000 UTC m=+1563.595272561" Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.020550 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.120286 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-krxqd"] Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.121097 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54dd998c-krxqd" podUID="0da5d2b0-ad22-4e31-8b86-50314d9a58e5" containerName="dnsmasq-dns" containerID="cri-o://7d6cf3902342d809dd31797bac884bf38c3e64e5ea5d2a63f804de4388e997f9" gracePeriod=10 Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.305490 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c4b758ff5-hlnkc"] Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.307634 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c4b758ff5-hlnkc" Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.320573 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c4b758ff5-hlnkc"] Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.408634 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/401df5b4-c175-4e2d-8174-81ece52943ba-ovsdbserver-nb\") pod \"dnsmasq-dns-c4b758ff5-hlnkc\" (UID: \"401df5b4-c175-4e2d-8174-81ece52943ba\") " pod="openstack/dnsmasq-dns-c4b758ff5-hlnkc" Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.408721 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/401df5b4-c175-4e2d-8174-81ece52943ba-config\") pod \"dnsmasq-dns-c4b758ff5-hlnkc\" (UID: \"401df5b4-c175-4e2d-8174-81ece52943ba\") " pod="openstack/dnsmasq-dns-c4b758ff5-hlnkc" Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.408842 4910 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/401df5b4-c175-4e2d-8174-81ece52943ba-ovsdbserver-sb\") pod \"dnsmasq-dns-c4b758ff5-hlnkc\" (UID: \"401df5b4-c175-4e2d-8174-81ece52943ba\") " pod="openstack/dnsmasq-dns-c4b758ff5-hlnkc" Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.408925 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r47mb\" (UniqueName: \"kubernetes.io/projected/401df5b4-c175-4e2d-8174-81ece52943ba-kube-api-access-r47mb\") pod \"dnsmasq-dns-c4b758ff5-hlnkc\" (UID: \"401df5b4-c175-4e2d-8174-81ece52943ba\") " pod="openstack/dnsmasq-dns-c4b758ff5-hlnkc" Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.409252 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/401df5b4-c175-4e2d-8174-81ece52943ba-dns-swift-storage-0\") pod \"dnsmasq-dns-c4b758ff5-hlnkc\" (UID: \"401df5b4-c175-4e2d-8174-81ece52943ba\") " pod="openstack/dnsmasq-dns-c4b758ff5-hlnkc" Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.409330 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/401df5b4-c175-4e2d-8174-81ece52943ba-dns-svc\") pod \"dnsmasq-dns-c4b758ff5-hlnkc\" (UID: \"401df5b4-c175-4e2d-8174-81ece52943ba\") " pod="openstack/dnsmasq-dns-c4b758ff5-hlnkc" Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.409359 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/401df5b4-c175-4e2d-8174-81ece52943ba-openstack-edpm-ipam\") pod \"dnsmasq-dns-c4b758ff5-hlnkc\" (UID: \"401df5b4-c175-4e2d-8174-81ece52943ba\") " pod="openstack/dnsmasq-dns-c4b758ff5-hlnkc" Feb 26 22:21:24 crc kubenswrapper[4910]: 
I0226 22:21:24.512189 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r47mb\" (UniqueName: \"kubernetes.io/projected/401df5b4-c175-4e2d-8174-81ece52943ba-kube-api-access-r47mb\") pod \"dnsmasq-dns-c4b758ff5-hlnkc\" (UID: \"401df5b4-c175-4e2d-8174-81ece52943ba\") " pod="openstack/dnsmasq-dns-c4b758ff5-hlnkc" Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.512289 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/401df5b4-c175-4e2d-8174-81ece52943ba-dns-swift-storage-0\") pod \"dnsmasq-dns-c4b758ff5-hlnkc\" (UID: \"401df5b4-c175-4e2d-8174-81ece52943ba\") " pod="openstack/dnsmasq-dns-c4b758ff5-hlnkc" Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.512321 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/401df5b4-c175-4e2d-8174-81ece52943ba-dns-svc\") pod \"dnsmasq-dns-c4b758ff5-hlnkc\" (UID: \"401df5b4-c175-4e2d-8174-81ece52943ba\") " pod="openstack/dnsmasq-dns-c4b758ff5-hlnkc" Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.512343 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/401df5b4-c175-4e2d-8174-81ece52943ba-openstack-edpm-ipam\") pod \"dnsmasq-dns-c4b758ff5-hlnkc\" (UID: \"401df5b4-c175-4e2d-8174-81ece52943ba\") " pod="openstack/dnsmasq-dns-c4b758ff5-hlnkc" Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.512366 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/401df5b4-c175-4e2d-8174-81ece52943ba-ovsdbserver-nb\") pod \"dnsmasq-dns-c4b758ff5-hlnkc\" (UID: \"401df5b4-c175-4e2d-8174-81ece52943ba\") " pod="openstack/dnsmasq-dns-c4b758ff5-hlnkc" Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.512393 4910 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/401df5b4-c175-4e2d-8174-81ece52943ba-config\") pod \"dnsmasq-dns-c4b758ff5-hlnkc\" (UID: \"401df5b4-c175-4e2d-8174-81ece52943ba\") " pod="openstack/dnsmasq-dns-c4b758ff5-hlnkc" Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.512477 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/401df5b4-c175-4e2d-8174-81ece52943ba-ovsdbserver-sb\") pod \"dnsmasq-dns-c4b758ff5-hlnkc\" (UID: \"401df5b4-c175-4e2d-8174-81ece52943ba\") " pod="openstack/dnsmasq-dns-c4b758ff5-hlnkc" Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.513911 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/401df5b4-c175-4e2d-8174-81ece52943ba-dns-svc\") pod \"dnsmasq-dns-c4b758ff5-hlnkc\" (UID: \"401df5b4-c175-4e2d-8174-81ece52943ba\") " pod="openstack/dnsmasq-dns-c4b758ff5-hlnkc" Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.514241 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/401df5b4-c175-4e2d-8174-81ece52943ba-openstack-edpm-ipam\") pod \"dnsmasq-dns-c4b758ff5-hlnkc\" (UID: \"401df5b4-c175-4e2d-8174-81ece52943ba\") " pod="openstack/dnsmasq-dns-c4b758ff5-hlnkc" Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.514923 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/401df5b4-c175-4e2d-8174-81ece52943ba-dns-swift-storage-0\") pod \"dnsmasq-dns-c4b758ff5-hlnkc\" (UID: \"401df5b4-c175-4e2d-8174-81ece52943ba\") " pod="openstack/dnsmasq-dns-c4b758ff5-hlnkc" Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.515379 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/401df5b4-c175-4e2d-8174-81ece52943ba-ovsdbserver-sb\") pod \"dnsmasq-dns-c4b758ff5-hlnkc\" (UID: \"401df5b4-c175-4e2d-8174-81ece52943ba\") " pod="openstack/dnsmasq-dns-c4b758ff5-hlnkc" Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.515543 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/401df5b4-c175-4e2d-8174-81ece52943ba-ovsdbserver-nb\") pod \"dnsmasq-dns-c4b758ff5-hlnkc\" (UID: \"401df5b4-c175-4e2d-8174-81ece52943ba\") " pod="openstack/dnsmasq-dns-c4b758ff5-hlnkc" Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.515895 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/401df5b4-c175-4e2d-8174-81ece52943ba-config\") pod \"dnsmasq-dns-c4b758ff5-hlnkc\" (UID: \"401df5b4-c175-4e2d-8174-81ece52943ba\") " pod="openstack/dnsmasq-dns-c4b758ff5-hlnkc" Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.537644 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r47mb\" (UniqueName: \"kubernetes.io/projected/401df5b4-c175-4e2d-8174-81ece52943ba-kube-api-access-r47mb\") pod \"dnsmasq-dns-c4b758ff5-hlnkc\" (UID: \"401df5b4-c175-4e2d-8174-81ece52943ba\") " pod="openstack/dnsmasq-dns-c4b758ff5-hlnkc" Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.563097 4910 generic.go:334] "Generic (PLEG): container finished" podID="0da5d2b0-ad22-4e31-8b86-50314d9a58e5" containerID="7d6cf3902342d809dd31797bac884bf38c3e64e5ea5d2a63f804de4388e997f9" exitCode=0 Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.563154 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-krxqd" event={"ID":"0da5d2b0-ad22-4e31-8b86-50314d9a58e5","Type":"ContainerDied","Data":"7d6cf3902342d809dd31797bac884bf38c3e64e5ea5d2a63f804de4388e997f9"} Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.641314 4910 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c4b758ff5-hlnkc" Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.818087 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54dd998c-krxqd" Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.918887 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-ovsdbserver-sb\") pod \"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\" (UID: \"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\") " Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.918971 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsmmh\" (UniqueName: \"kubernetes.io/projected/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-kube-api-access-vsmmh\") pod \"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\" (UID: \"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\") " Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.918997 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-ovsdbserver-nb\") pod \"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\" (UID: \"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\") " Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.919163 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-dns-swift-storage-0\") pod \"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\" (UID: \"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\") " Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.919281 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-dns-svc\") pod 
\"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\" (UID: \"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\") " Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.919355 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-config\") pod \"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\" (UID: \"0da5d2b0-ad22-4e31-8b86-50314d9a58e5\") " Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.925812 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-kube-api-access-vsmmh" (OuterVolumeSpecName: "kube-api-access-vsmmh") pod "0da5d2b0-ad22-4e31-8b86-50314d9a58e5" (UID: "0da5d2b0-ad22-4e31-8b86-50314d9a58e5"). InnerVolumeSpecName "kube-api-access-vsmmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:21:24 crc kubenswrapper[4910]: I0226 22:21:24.996758 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-config" (OuterVolumeSpecName: "config") pod "0da5d2b0-ad22-4e31-8b86-50314d9a58e5" (UID: "0da5d2b0-ad22-4e31-8b86-50314d9a58e5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:21:25 crc kubenswrapper[4910]: I0226 22:21:25.002467 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0da5d2b0-ad22-4e31-8b86-50314d9a58e5" (UID: "0da5d2b0-ad22-4e31-8b86-50314d9a58e5"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:21:25 crc kubenswrapper[4910]: I0226 22:21:25.002585 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0da5d2b0-ad22-4e31-8b86-50314d9a58e5" (UID: "0da5d2b0-ad22-4e31-8b86-50314d9a58e5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:21:25 crc kubenswrapper[4910]: I0226 22:21:25.002465 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0da5d2b0-ad22-4e31-8b86-50314d9a58e5" (UID: "0da5d2b0-ad22-4e31-8b86-50314d9a58e5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:21:25 crc kubenswrapper[4910]: I0226 22:21:25.004890 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0da5d2b0-ad22-4e31-8b86-50314d9a58e5" (UID: "0da5d2b0-ad22-4e31-8b86-50314d9a58e5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:21:25 crc kubenswrapper[4910]: I0226 22:21:25.023799 4910 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:25 crc kubenswrapper[4910]: I0226 22:21:25.023828 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:25 crc kubenswrapper[4910]: I0226 22:21:25.023838 4910 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:25 crc kubenswrapper[4910]: I0226 22:21:25.023849 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsmmh\" (UniqueName: \"kubernetes.io/projected/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-kube-api-access-vsmmh\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:25 crc kubenswrapper[4910]: I0226 22:21:25.023857 4910 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:25 crc kubenswrapper[4910]: I0226 22:21:25.023866 4910 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0da5d2b0-ad22-4e31-8b86-50314d9a58e5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:25 crc kubenswrapper[4910]: I0226 22:21:25.116444 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c4b758ff5-hlnkc"] Feb 26 22:21:25 crc kubenswrapper[4910]: W0226 22:21:25.116703 4910 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod401df5b4_c175_4e2d_8174_81ece52943ba.slice/crio-0216459db62f54f6b7b14a6c98ec08a802e8f054fb7cf58a6b1a540e5aee3116 WatchSource:0}: Error finding container 0216459db62f54f6b7b14a6c98ec08a802e8f054fb7cf58a6b1a540e5aee3116: Status 404 returned error can't find the container with id 0216459db62f54f6b7b14a6c98ec08a802e8f054fb7cf58a6b1a540e5aee3116 Feb 26 22:21:25 crc kubenswrapper[4910]: I0226 22:21:25.576647 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-krxqd" event={"ID":"0da5d2b0-ad22-4e31-8b86-50314d9a58e5","Type":"ContainerDied","Data":"05b7d915a1f192c47e9764736356ca699d837fa8eb5496b7094bf07bcb482bfb"} Feb 26 22:21:25 crc kubenswrapper[4910]: I0226 22:21:25.576736 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54dd998c-krxqd" Feb 26 22:21:25 crc kubenswrapper[4910]: I0226 22:21:25.577032 4910 scope.go:117] "RemoveContainer" containerID="7d6cf3902342d809dd31797bac884bf38c3e64e5ea5d2a63f804de4388e997f9" Feb 26 22:21:25 crc kubenswrapper[4910]: I0226 22:21:25.581428 4910 generic.go:334] "Generic (PLEG): container finished" podID="401df5b4-c175-4e2d-8174-81ece52943ba" containerID="7a69b8a1f62a20f2e3ecf8454c7df211a1dd1b70469ca6503384baef7d715fa4" exitCode=0 Feb 26 22:21:25 crc kubenswrapper[4910]: I0226 22:21:25.581471 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4b758ff5-hlnkc" event={"ID":"401df5b4-c175-4e2d-8174-81ece52943ba","Type":"ContainerDied","Data":"7a69b8a1f62a20f2e3ecf8454c7df211a1dd1b70469ca6503384baef7d715fa4"} Feb 26 22:21:25 crc kubenswrapper[4910]: I0226 22:21:25.581499 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4b758ff5-hlnkc" event={"ID":"401df5b4-c175-4e2d-8174-81ece52943ba","Type":"ContainerStarted","Data":"0216459db62f54f6b7b14a6c98ec08a802e8f054fb7cf58a6b1a540e5aee3116"} Feb 26 22:21:25 crc 
kubenswrapper[4910]: I0226 22:21:25.617784 4910 scope.go:117] "RemoveContainer" containerID="3c7cb016847185639a54181751c9db96c494b458d740cc801895794da62c7eb4" Feb 26 22:21:25 crc kubenswrapper[4910]: I0226 22:21:25.727507 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:21:25 crc kubenswrapper[4910]: I0226 22:21:25.727563 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:21:25 crc kubenswrapper[4910]: I0226 22:21:25.727600 4910 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" Feb 26 22:21:25 crc kubenswrapper[4910]: I0226 22:21:25.728330 4910 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c9e7e9afe0afc45cb3107605182e65bb0e883988c1f2cfa35e317e7033cca07c"} pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 22:21:25 crc kubenswrapper[4910]: I0226 22:21:25.728388 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" containerID="cri-o://c9e7e9afe0afc45cb3107605182e65bb0e883988c1f2cfa35e317e7033cca07c" gracePeriod=600 Feb 26 22:21:25 crc kubenswrapper[4910]: I0226 
22:21:25.842704 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-krxqd"] Feb 26 22:21:25 crc kubenswrapper[4910]: I0226 22:21:25.854821 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-krxqd"] Feb 26 22:21:25 crc kubenswrapper[4910]: I0226 22:21:25.913688 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0da5d2b0-ad22-4e31-8b86-50314d9a58e5" path="/var/lib/kubelet/pods/0da5d2b0-ad22-4e31-8b86-50314d9a58e5/volumes" Feb 26 22:21:26 crc kubenswrapper[4910]: I0226 22:21:26.596512 4910 generic.go:334] "Generic (PLEG): container finished" podID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerID="c9e7e9afe0afc45cb3107605182e65bb0e883988c1f2cfa35e317e7033cca07c" exitCode=0 Feb 26 22:21:26 crc kubenswrapper[4910]: I0226 22:21:26.596556 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerDied","Data":"c9e7e9afe0afc45cb3107605182e65bb0e883988c1f2cfa35e317e7033cca07c"} Feb 26 22:21:26 crc kubenswrapper[4910]: I0226 22:21:26.596944 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerStarted","Data":"a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6"} Feb 26 22:21:26 crc kubenswrapper[4910]: I0226 22:21:26.596963 4910 scope.go:117] "RemoveContainer" containerID="86111bdf5fb42a19cad2fb6eff7efddfcb0bd79e217fa1c7fe5451bfc269072f" Feb 26 22:21:26 crc kubenswrapper[4910]: I0226 22:21:26.599287 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4b758ff5-hlnkc" event={"ID":"401df5b4-c175-4e2d-8174-81ece52943ba","Type":"ContainerStarted","Data":"e76d927f16449cc0faf81b35ddea0e2b51c95736935d7093cf2e9a08d64ba24d"} Feb 26 22:21:26 crc kubenswrapper[4910]: I0226 
22:21:26.599983 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c4b758ff5-hlnkc" Feb 26 22:21:26 crc kubenswrapper[4910]: I0226 22:21:26.640882 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c4b758ff5-hlnkc" podStartSLOduration=2.6408626330000002 podStartE2EDuration="2.640862633s" podCreationTimestamp="2026-02-26 22:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:21:26.635456286 +0000 UTC m=+1571.714946847" watchObservedRunningTime="2026-02-26 22:21:26.640862633 +0000 UTC m=+1571.720353184" Feb 26 22:21:34 crc kubenswrapper[4910]: I0226 22:21:34.643314 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c4b758ff5-hlnkc" Feb 26 22:21:34 crc kubenswrapper[4910]: I0226 22:21:34.725207 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-29l2r"] Feb 26 22:21:34 crc kubenswrapper[4910]: I0226 22:21:34.725519 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" podUID="86c9ac9a-e5f3-44c9-b009-c174b942eb90" containerName="dnsmasq-dns" containerID="cri-o://1dae9e80db527ec5f29b37528f8ffb3740b72c850fcee12846e8bd6ed25cce34" gracePeriod=10 Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.299927 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.413390 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-config\") pod \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\" (UID: \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\") " Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.413781 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-ovsdbserver-sb\") pod \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\" (UID: \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\") " Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.413914 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm5dt\" (UniqueName: \"kubernetes.io/projected/86c9ac9a-e5f3-44c9-b009-c174b942eb90-kube-api-access-wm5dt\") pod \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\" (UID: \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\") " Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.414049 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-ovsdbserver-nb\") pod \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\" (UID: \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\") " Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.414175 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-dns-swift-storage-0\") pod \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\" (UID: \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\") " Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.414257 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-openstack-edpm-ipam\") pod \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\" (UID: \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\") " Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.414328 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-dns-svc\") pod \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\" (UID: \"86c9ac9a-e5f3-44c9-b009-c174b942eb90\") " Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.424373 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86c9ac9a-e5f3-44c9-b009-c174b942eb90-kube-api-access-wm5dt" (OuterVolumeSpecName: "kube-api-access-wm5dt") pod "86c9ac9a-e5f3-44c9-b009-c174b942eb90" (UID: "86c9ac9a-e5f3-44c9-b009-c174b942eb90"). InnerVolumeSpecName "kube-api-access-wm5dt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.473350 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "86c9ac9a-e5f3-44c9-b009-c174b942eb90" (UID: "86c9ac9a-e5f3-44c9-b009-c174b942eb90"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.499336 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "86c9ac9a-e5f3-44c9-b009-c174b942eb90" (UID: "86c9ac9a-e5f3-44c9-b009-c174b942eb90"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.501779 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "86c9ac9a-e5f3-44c9-b009-c174b942eb90" (UID: "86c9ac9a-e5f3-44c9-b009-c174b942eb90"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.501982 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "86c9ac9a-e5f3-44c9-b009-c174b942eb90" (UID: "86c9ac9a-e5f3-44c9-b009-c174b942eb90"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.507052 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-config" (OuterVolumeSpecName: "config") pod "86c9ac9a-e5f3-44c9-b009-c174b942eb90" (UID: "86c9ac9a-e5f3-44c9-b009-c174b942eb90"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.517478 4910 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.517517 4910 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.517529 4910 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.517543 4910 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.517556 4910 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.517567 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm5dt\" (UniqueName: \"kubernetes.io/projected/86c9ac9a-e5f3-44c9-b009-c174b942eb90-kube-api-access-wm5dt\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.519622 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "86c9ac9a-e5f3-44c9-b009-c174b942eb90" (UID: 
"86c9ac9a-e5f3-44c9-b009-c174b942eb90"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.619193 4910 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86c9ac9a-e5f3-44c9-b009-c174b942eb90-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.722568 4910 generic.go:334] "Generic (PLEG): container finished" podID="86c9ac9a-e5f3-44c9-b009-c174b942eb90" containerID="1dae9e80db527ec5f29b37528f8ffb3740b72c850fcee12846e8bd6ed25cce34" exitCode=0 Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.722613 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" event={"ID":"86c9ac9a-e5f3-44c9-b009-c174b942eb90","Type":"ContainerDied","Data":"1dae9e80db527ec5f29b37528f8ffb3740b72c850fcee12846e8bd6ed25cce34"} Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.722641 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" event={"ID":"86c9ac9a-e5f3-44c9-b009-c174b942eb90","Type":"ContainerDied","Data":"8aa36aeeea1e3e48bad645b7a410e2ad3fc17ddd4b577dca1861c82ad74dd225"} Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.722657 4910 scope.go:117] "RemoveContainer" containerID="1dae9e80db527ec5f29b37528f8ffb3740b72c850fcee12846e8bd6ed25cce34" Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.722847 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc7c944bf-29l2r" Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.775764 4910 scope.go:117] "RemoveContainer" containerID="9a070ebe844b1d8692b6747fc3ff20403ec88c15e9cf280bacd1f9af5b780557" Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.779765 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-29l2r"] Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.793691 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-29l2r"] Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.801137 4910 scope.go:117] "RemoveContainer" containerID="1dae9e80db527ec5f29b37528f8ffb3740b72c850fcee12846e8bd6ed25cce34" Feb 26 22:21:35 crc kubenswrapper[4910]: E0226 22:21:35.801576 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dae9e80db527ec5f29b37528f8ffb3740b72c850fcee12846e8bd6ed25cce34\": container with ID starting with 1dae9e80db527ec5f29b37528f8ffb3740b72c850fcee12846e8bd6ed25cce34 not found: ID does not exist" containerID="1dae9e80db527ec5f29b37528f8ffb3740b72c850fcee12846e8bd6ed25cce34" Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.801619 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dae9e80db527ec5f29b37528f8ffb3740b72c850fcee12846e8bd6ed25cce34"} err="failed to get container status \"1dae9e80db527ec5f29b37528f8ffb3740b72c850fcee12846e8bd6ed25cce34\": rpc error: code = NotFound desc = could not find container \"1dae9e80db527ec5f29b37528f8ffb3740b72c850fcee12846e8bd6ed25cce34\": container with ID starting with 1dae9e80db527ec5f29b37528f8ffb3740b72c850fcee12846e8bd6ed25cce34 not found: ID does not exist" Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.801657 4910 scope.go:117] "RemoveContainer" containerID="9a070ebe844b1d8692b6747fc3ff20403ec88c15e9cf280bacd1f9af5b780557" Feb 26 
22:21:35 crc kubenswrapper[4910]: E0226 22:21:35.802011 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a070ebe844b1d8692b6747fc3ff20403ec88c15e9cf280bacd1f9af5b780557\": container with ID starting with 9a070ebe844b1d8692b6747fc3ff20403ec88c15e9cf280bacd1f9af5b780557 not found: ID does not exist" containerID="9a070ebe844b1d8692b6747fc3ff20403ec88c15e9cf280bacd1f9af5b780557" Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.802043 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a070ebe844b1d8692b6747fc3ff20403ec88c15e9cf280bacd1f9af5b780557"} err="failed to get container status \"9a070ebe844b1d8692b6747fc3ff20403ec88c15e9cf280bacd1f9af5b780557\": rpc error: code = NotFound desc = could not find container \"9a070ebe844b1d8692b6747fc3ff20403ec88c15e9cf280bacd1f9af5b780557\": container with ID starting with 9a070ebe844b1d8692b6747fc3ff20403ec88c15e9cf280bacd1f9af5b780557 not found: ID does not exist" Feb 26 22:21:35 crc kubenswrapper[4910]: I0226 22:21:35.921695 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86c9ac9a-e5f3-44c9-b009-c174b942eb90" path="/var/lib/kubelet/pods/86c9ac9a-e5f3-44c9-b009-c174b942eb90/volumes" Feb 26 22:21:38 crc kubenswrapper[4910]: I0226 22:21:38.648261 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 26 22:21:47 crc kubenswrapper[4910]: I0226 22:21:47.930305 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl"] Feb 26 22:21:47 crc kubenswrapper[4910]: E0226 22:21:47.931023 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0da5d2b0-ad22-4e31-8b86-50314d9a58e5" containerName="init" Feb 26 22:21:47 crc kubenswrapper[4910]: I0226 22:21:47.931033 4910 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0da5d2b0-ad22-4e31-8b86-50314d9a58e5" containerName="init" Feb 26 22:21:47 crc kubenswrapper[4910]: E0226 22:21:47.931051 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0da5d2b0-ad22-4e31-8b86-50314d9a58e5" containerName="dnsmasq-dns" Feb 26 22:21:47 crc kubenswrapper[4910]: I0226 22:21:47.931057 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="0da5d2b0-ad22-4e31-8b86-50314d9a58e5" containerName="dnsmasq-dns" Feb 26 22:21:47 crc kubenswrapper[4910]: E0226 22:21:47.931078 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86c9ac9a-e5f3-44c9-b009-c174b942eb90" containerName="dnsmasq-dns" Feb 26 22:21:47 crc kubenswrapper[4910]: I0226 22:21:47.931083 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="86c9ac9a-e5f3-44c9-b009-c174b942eb90" containerName="dnsmasq-dns" Feb 26 22:21:47 crc kubenswrapper[4910]: E0226 22:21:47.931097 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86c9ac9a-e5f3-44c9-b009-c174b942eb90" containerName="init" Feb 26 22:21:47 crc kubenswrapper[4910]: I0226 22:21:47.931102 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="86c9ac9a-e5f3-44c9-b009-c174b942eb90" containerName="init" Feb 26 22:21:47 crc kubenswrapper[4910]: I0226 22:21:47.931302 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="0da5d2b0-ad22-4e31-8b86-50314d9a58e5" containerName="dnsmasq-dns" Feb 26 22:21:47 crc kubenswrapper[4910]: I0226 22:21:47.931315 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="86c9ac9a-e5f3-44c9-b009-c174b942eb90" containerName="dnsmasq-dns" Feb 26 22:21:47 crc kubenswrapper[4910]: I0226 22:21:47.931992 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl" Feb 26 22:21:47 crc kubenswrapper[4910]: I0226 22:21:47.937287 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 22:21:47 crc kubenswrapper[4910]: I0226 22:21:47.937407 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ktmgl" Feb 26 22:21:47 crc kubenswrapper[4910]: I0226 22:21:47.937458 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 22:21:47 crc kubenswrapper[4910]: I0226 22:21:47.937599 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 22:21:47 crc kubenswrapper[4910]: I0226 22:21:47.939007 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl"] Feb 26 22:21:48 crc kubenswrapper[4910]: I0226 22:21:48.010909 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34a46ff4-b5ae-4012-bbec-7601cd6a6a5a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl\" (UID: \"34a46ff4-b5ae-4012-bbec-7601cd6a6a5a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl" Feb 26 22:21:48 crc kubenswrapper[4910]: I0226 22:21:48.011216 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a46ff4-b5ae-4012-bbec-7601cd6a6a5a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl\" (UID: \"34a46ff4-b5ae-4012-bbec-7601cd6a6a5a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl" Feb 26 22:21:48 crc kubenswrapper[4910]: I0226 22:21:48.011444 4910 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5rwb\" (UniqueName: \"kubernetes.io/projected/34a46ff4-b5ae-4012-bbec-7601cd6a6a5a-kube-api-access-z5rwb\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl\" (UID: \"34a46ff4-b5ae-4012-bbec-7601cd6a6a5a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl" Feb 26 22:21:48 crc kubenswrapper[4910]: I0226 22:21:48.011553 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34a46ff4-b5ae-4012-bbec-7601cd6a6a5a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl\" (UID: \"34a46ff4-b5ae-4012-bbec-7601cd6a6a5a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl" Feb 26 22:21:48 crc kubenswrapper[4910]: I0226 22:21:48.113248 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a46ff4-b5ae-4012-bbec-7601cd6a6a5a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl\" (UID: \"34a46ff4-b5ae-4012-bbec-7601cd6a6a5a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl" Feb 26 22:21:48 crc kubenswrapper[4910]: I0226 22:21:48.113412 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5rwb\" (UniqueName: \"kubernetes.io/projected/34a46ff4-b5ae-4012-bbec-7601cd6a6a5a-kube-api-access-z5rwb\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl\" (UID: \"34a46ff4-b5ae-4012-bbec-7601cd6a6a5a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl" Feb 26 22:21:48 crc kubenswrapper[4910]: I0226 22:21:48.113504 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/34a46ff4-b5ae-4012-bbec-7601cd6a6a5a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl\" (UID: \"34a46ff4-b5ae-4012-bbec-7601cd6a6a5a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl" Feb 26 22:21:48 crc kubenswrapper[4910]: I0226 22:21:48.113619 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34a46ff4-b5ae-4012-bbec-7601cd6a6a5a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl\" (UID: \"34a46ff4-b5ae-4012-bbec-7601cd6a6a5a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl" Feb 26 22:21:48 crc kubenswrapper[4910]: I0226 22:21:48.124486 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a46ff4-b5ae-4012-bbec-7601cd6a6a5a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl\" (UID: \"34a46ff4-b5ae-4012-bbec-7601cd6a6a5a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl" Feb 26 22:21:48 crc kubenswrapper[4910]: I0226 22:21:48.125536 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34a46ff4-b5ae-4012-bbec-7601cd6a6a5a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl\" (UID: \"34a46ff4-b5ae-4012-bbec-7601cd6a6a5a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl" Feb 26 22:21:48 crc kubenswrapper[4910]: I0226 22:21:48.125549 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34a46ff4-b5ae-4012-bbec-7601cd6a6a5a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl\" (UID: \"34a46ff4-b5ae-4012-bbec-7601cd6a6a5a\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl" Feb 26 22:21:48 crc kubenswrapper[4910]: I0226 22:21:48.137513 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5rwb\" (UniqueName: \"kubernetes.io/projected/34a46ff4-b5ae-4012-bbec-7601cd6a6a5a-kube-api-access-z5rwb\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl\" (UID: \"34a46ff4-b5ae-4012-bbec-7601cd6a6a5a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl" Feb 26 22:21:48 crc kubenswrapper[4910]: I0226 22:21:48.265946 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl" Feb 26 22:21:48 crc kubenswrapper[4910]: I0226 22:21:48.883566 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl"] Feb 26 22:21:48 crc kubenswrapper[4910]: I0226 22:21:48.897488 4910 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 22:21:49 crc kubenswrapper[4910]: I0226 22:21:49.905994 4910 generic.go:334] "Generic (PLEG): container finished" podID="d43ea280-40c0-430e-8d12-41a3522f4f29" containerID="2f18805f188d2ea6b8f2573dba8fbd4eb3a39160e08dfaedf19a2e8aa45c39dd" exitCode=0 Feb 26 22:21:49 crc kubenswrapper[4910]: I0226 22:21:49.909497 4910 generic.go:334] "Generic (PLEG): container finished" podID="1267de00-e6b5-4340-b2e4-5614288011dc" containerID="84da54014d4f3a8e0a094c9f3f37d510463d17d13b49f082105950f321f4c48d" exitCode=0 Feb 26 22:21:49 crc kubenswrapper[4910]: I0226 22:21:49.914817 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl" event={"ID":"34a46ff4-b5ae-4012-bbec-7601cd6a6a5a","Type":"ContainerStarted","Data":"e4109f2c025f913c693fc3f60edf53bcc0475eda0d24f1055a39c7eabdb18aae"} Feb 26 22:21:49 crc kubenswrapper[4910]: I0226 22:21:49.914857 4910 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d43ea280-40c0-430e-8d12-41a3522f4f29","Type":"ContainerDied","Data":"2f18805f188d2ea6b8f2573dba8fbd4eb3a39160e08dfaedf19a2e8aa45c39dd"} Feb 26 22:21:49 crc kubenswrapper[4910]: I0226 22:21:49.914887 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1267de00-e6b5-4340-b2e4-5614288011dc","Type":"ContainerDied","Data":"84da54014d4f3a8e0a094c9f3f37d510463d17d13b49f082105950f321f4c48d"} Feb 26 22:21:50 crc kubenswrapper[4910]: I0226 22:21:50.938422 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d43ea280-40c0-430e-8d12-41a3522f4f29","Type":"ContainerStarted","Data":"98821dad1ed0d486e584ee486adc098c699e4f841de9a5f98c7c6491882f6f77"} Feb 26 22:21:50 crc kubenswrapper[4910]: I0226 22:21:50.939632 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 26 22:21:50 crc kubenswrapper[4910]: I0226 22:21:50.944036 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1267de00-e6b5-4340-b2e4-5614288011dc","Type":"ContainerStarted","Data":"ffef48dda9c305b9ad81d837f08b0922b2111f71e0601814e2d8da09b1674476"} Feb 26 22:21:50 crc kubenswrapper[4910]: I0226 22:21:50.944565 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:21:50 crc kubenswrapper[4910]: I0226 22:21:50.969385 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.969368768 podStartE2EDuration="37.969368768s" podCreationTimestamp="2026-02-26 22:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:21:50.966057928 +0000 UTC m=+1596.045548479" 
watchObservedRunningTime="2026-02-26 22:21:50.969368768 +0000 UTC m=+1596.048859309" Feb 26 22:21:50 crc kubenswrapper[4910]: I0226 22:21:50.992993 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.992977141 podStartE2EDuration="37.992977141s" podCreationTimestamp="2026-02-26 22:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:21:50.985667572 +0000 UTC m=+1596.065158123" watchObservedRunningTime="2026-02-26 22:21:50.992977141 +0000 UTC m=+1596.072467682" Feb 26 22:21:53 crc kubenswrapper[4910]: I0226 22:21:53.734196 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-api-0" Feb 26 22:21:59 crc kubenswrapper[4910]: I0226 22:21:59.001080 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k2qkq"] Feb 26 22:21:59 crc kubenswrapper[4910]: I0226 22:21:59.004308 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k2qkq" Feb 26 22:21:59 crc kubenswrapper[4910]: I0226 22:21:59.018347 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k2qkq"] Feb 26 22:21:59 crc kubenswrapper[4910]: I0226 22:21:59.055370 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e054fd7-034c-43f0-a193-712292073151-utilities\") pod \"certified-operators-k2qkq\" (UID: \"6e054fd7-034c-43f0-a193-712292073151\") " pod="openshift-marketplace/certified-operators-k2qkq" Feb 26 22:21:59 crc kubenswrapper[4910]: I0226 22:21:59.055753 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghvqb\" (UniqueName: \"kubernetes.io/projected/6e054fd7-034c-43f0-a193-712292073151-kube-api-access-ghvqb\") pod \"certified-operators-k2qkq\" (UID: \"6e054fd7-034c-43f0-a193-712292073151\") " pod="openshift-marketplace/certified-operators-k2qkq" Feb 26 22:21:59 crc kubenswrapper[4910]: I0226 22:21:59.055806 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e054fd7-034c-43f0-a193-712292073151-catalog-content\") pod \"certified-operators-k2qkq\" (UID: \"6e054fd7-034c-43f0-a193-712292073151\") " pod="openshift-marketplace/certified-operators-k2qkq" Feb 26 22:21:59 crc kubenswrapper[4910]: I0226 22:21:59.158015 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e054fd7-034c-43f0-a193-712292073151-utilities\") pod \"certified-operators-k2qkq\" (UID: \"6e054fd7-034c-43f0-a193-712292073151\") " pod="openshift-marketplace/certified-operators-k2qkq" Feb 26 22:21:59 crc kubenswrapper[4910]: I0226 22:21:59.158154 4910 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ghvqb\" (UniqueName: \"kubernetes.io/projected/6e054fd7-034c-43f0-a193-712292073151-kube-api-access-ghvqb\") pod \"certified-operators-k2qkq\" (UID: \"6e054fd7-034c-43f0-a193-712292073151\") " pod="openshift-marketplace/certified-operators-k2qkq" Feb 26 22:21:59 crc kubenswrapper[4910]: I0226 22:21:59.158209 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e054fd7-034c-43f0-a193-712292073151-catalog-content\") pod \"certified-operators-k2qkq\" (UID: \"6e054fd7-034c-43f0-a193-712292073151\") " pod="openshift-marketplace/certified-operators-k2qkq" Feb 26 22:21:59 crc kubenswrapper[4910]: I0226 22:21:59.158694 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e054fd7-034c-43f0-a193-712292073151-catalog-content\") pod \"certified-operators-k2qkq\" (UID: \"6e054fd7-034c-43f0-a193-712292073151\") " pod="openshift-marketplace/certified-operators-k2qkq" Feb 26 22:21:59 crc kubenswrapper[4910]: I0226 22:21:59.158956 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e054fd7-034c-43f0-a193-712292073151-utilities\") pod \"certified-operators-k2qkq\" (UID: \"6e054fd7-034c-43f0-a193-712292073151\") " pod="openshift-marketplace/certified-operators-k2qkq" Feb 26 22:21:59 crc kubenswrapper[4910]: I0226 22:21:59.183102 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghvqb\" (UniqueName: \"kubernetes.io/projected/6e054fd7-034c-43f0-a193-712292073151-kube-api-access-ghvqb\") pod \"certified-operators-k2qkq\" (UID: \"6e054fd7-034c-43f0-a193-712292073151\") " pod="openshift-marketplace/certified-operators-k2qkq" Feb 26 22:21:59 crc kubenswrapper[4910]: I0226 22:21:59.372317 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k2qkq" Feb 26 22:21:59 crc kubenswrapper[4910]: I0226 22:21:59.894706 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k2qkq"] Feb 26 22:22:00 crc kubenswrapper[4910]: I0226 22:22:00.072712 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl" event={"ID":"34a46ff4-b5ae-4012-bbec-7601cd6a6a5a","Type":"ContainerStarted","Data":"7afe256a8954a3ddf14dc6da859b43e6cf7bb1e0f8b38afa0dc5f0d39262426d"} Feb 26 22:22:00 crc kubenswrapper[4910]: I0226 22:22:00.074936 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k2qkq" event={"ID":"6e054fd7-034c-43f0-a193-712292073151","Type":"ContainerStarted","Data":"e0ebefbf8e00806875a6ba4b7d981f18b4ad96d019bb0020d7e7fc21fd8a0759"} Feb 26 22:22:00 crc kubenswrapper[4910]: I0226 22:22:00.123957 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl" podStartSLOduration=4.17562544 podStartE2EDuration="13.123936936s" podCreationTimestamp="2026-02-26 22:21:47 +0000 UTC" firstStartedPulling="2026-02-26 22:21:48.89694196 +0000 UTC m=+1593.976432531" lastFinishedPulling="2026-02-26 22:21:57.845253486 +0000 UTC m=+1602.924744027" observedRunningTime="2026-02-26 22:22:00.121910712 +0000 UTC m=+1605.201401253" watchObservedRunningTime="2026-02-26 22:22:00.123936936 +0000 UTC m=+1605.203427477" Feb 26 22:22:00 crc kubenswrapper[4910]: I0226 22:22:00.155168 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535742-trflz"] Feb 26 22:22:00 crc kubenswrapper[4910]: I0226 22:22:00.156673 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535742-trflz" Feb 26 22:22:00 crc kubenswrapper[4910]: I0226 22:22:00.159676 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 22:22:00 crc kubenswrapper[4910]: I0226 22:22:00.159852 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 22:22:00 crc kubenswrapper[4910]: I0226 22:22:00.159963 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 22:22:00 crc kubenswrapper[4910]: I0226 22:22:00.172868 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535742-trflz"] Feb 26 22:22:00 crc kubenswrapper[4910]: I0226 22:22:00.223722 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgrnj\" (UniqueName: \"kubernetes.io/projected/463a94d8-3449-488b-97e1-80044ca36e5a-kube-api-access-mgrnj\") pod \"auto-csr-approver-29535742-trflz\" (UID: \"463a94d8-3449-488b-97e1-80044ca36e5a\") " pod="openshift-infra/auto-csr-approver-29535742-trflz" Feb 26 22:22:00 crc kubenswrapper[4910]: I0226 22:22:00.325565 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgrnj\" (UniqueName: \"kubernetes.io/projected/463a94d8-3449-488b-97e1-80044ca36e5a-kube-api-access-mgrnj\") pod \"auto-csr-approver-29535742-trflz\" (UID: \"463a94d8-3449-488b-97e1-80044ca36e5a\") " pod="openshift-infra/auto-csr-approver-29535742-trflz" Feb 26 22:22:00 crc kubenswrapper[4910]: I0226 22:22:00.358323 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgrnj\" (UniqueName: \"kubernetes.io/projected/463a94d8-3449-488b-97e1-80044ca36e5a-kube-api-access-mgrnj\") pod \"auto-csr-approver-29535742-trflz\" (UID: \"463a94d8-3449-488b-97e1-80044ca36e5a\") " 
pod="openshift-infra/auto-csr-approver-29535742-trflz" Feb 26 22:22:00 crc kubenswrapper[4910]: I0226 22:22:00.477251 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535742-trflz" Feb 26 22:22:00 crc kubenswrapper[4910]: I0226 22:22:00.995804 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535742-trflz"] Feb 26 22:22:01 crc kubenswrapper[4910]: I0226 22:22:01.089644 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k2qkq" event={"ID":"6e054fd7-034c-43f0-a193-712292073151","Type":"ContainerDied","Data":"e2e9ec570b650c461874522dc3d9fcbbe320685fc6e2feb21ff0e610f91c3d32"} Feb 26 22:22:01 crc kubenswrapper[4910]: I0226 22:22:01.089656 4910 generic.go:334] "Generic (PLEG): container finished" podID="6e054fd7-034c-43f0-a193-712292073151" containerID="e2e9ec570b650c461874522dc3d9fcbbe320685fc6e2feb21ff0e610f91c3d32" exitCode=0 Feb 26 22:22:01 crc kubenswrapper[4910]: I0226 22:22:01.091179 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535742-trflz" event={"ID":"463a94d8-3449-488b-97e1-80044ca36e5a","Type":"ContainerStarted","Data":"32304d54314394bebdd0ae99576c816f6dabbf1d4e7796de9c0abf2791efbf30"} Feb 26 22:22:02 crc kubenswrapper[4910]: I0226 22:22:02.105046 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k2qkq" event={"ID":"6e054fd7-034c-43f0-a193-712292073151","Type":"ContainerStarted","Data":"9afa8fc30f13de52a60b51e4e4bff083b81560bf147e83f1c641af1286efb865"} Feb 26 22:22:03 crc kubenswrapper[4910]: I0226 22:22:03.120220 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535742-trflz" event={"ID":"463a94d8-3449-488b-97e1-80044ca36e5a","Type":"ContainerStarted","Data":"ce0f9b7cd90d804dd9d769cec11222c53820d0652e3854cad4a0c9e5a5733757"} Feb 26 22:22:03 crc 
kubenswrapper[4910]: I0226 22:22:03.138653 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535742-trflz" podStartSLOduration=2.291383895 podStartE2EDuration="3.138621422s" podCreationTimestamp="2026-02-26 22:22:00 +0000 UTC" firstStartedPulling="2026-02-26 22:22:00.988875166 +0000 UTC m=+1606.068365747" lastFinishedPulling="2026-02-26 22:22:01.836112733 +0000 UTC m=+1606.915603274" observedRunningTime="2026-02-26 22:22:03.136859024 +0000 UTC m=+1608.216349645" watchObservedRunningTime="2026-02-26 22:22:03.138621422 +0000 UTC m=+1608.218112003" Feb 26 22:22:04 crc kubenswrapper[4910]: I0226 22:22:04.133066 4910 generic.go:334] "Generic (PLEG): container finished" podID="463a94d8-3449-488b-97e1-80044ca36e5a" containerID="ce0f9b7cd90d804dd9d769cec11222c53820d0652e3854cad4a0c9e5a5733757" exitCode=0 Feb 26 22:22:04 crc kubenswrapper[4910]: I0226 22:22:04.133207 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535742-trflz" event={"ID":"463a94d8-3449-488b-97e1-80044ca36e5a","Type":"ContainerDied","Data":"ce0f9b7cd90d804dd9d769cec11222c53820d0652e3854cad4a0c9e5a5733757"} Feb 26 22:22:04 crc kubenswrapper[4910]: I0226 22:22:04.170401 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 26 22:22:04 crc kubenswrapper[4910]: I0226 22:22:04.515315 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 26 22:22:05 crc kubenswrapper[4910]: I0226 22:22:05.575354 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535742-trflz" Feb 26 22:22:05 crc kubenswrapper[4910]: I0226 22:22:05.649857 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgrnj\" (UniqueName: \"kubernetes.io/projected/463a94d8-3449-488b-97e1-80044ca36e5a-kube-api-access-mgrnj\") pod \"463a94d8-3449-488b-97e1-80044ca36e5a\" (UID: \"463a94d8-3449-488b-97e1-80044ca36e5a\") " Feb 26 22:22:05 crc kubenswrapper[4910]: I0226 22:22:05.660588 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/463a94d8-3449-488b-97e1-80044ca36e5a-kube-api-access-mgrnj" (OuterVolumeSpecName: "kube-api-access-mgrnj") pod "463a94d8-3449-488b-97e1-80044ca36e5a" (UID: "463a94d8-3449-488b-97e1-80044ca36e5a"). InnerVolumeSpecName "kube-api-access-mgrnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:22:05 crc kubenswrapper[4910]: I0226 22:22:05.752123 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgrnj\" (UniqueName: \"kubernetes.io/projected/463a94d8-3449-488b-97e1-80044ca36e5a-kube-api-access-mgrnj\") on node \"crc\" DevicePath \"\"" Feb 26 22:22:06 crc kubenswrapper[4910]: I0226 22:22:06.156867 4910 generic.go:334] "Generic (PLEG): container finished" podID="6e054fd7-034c-43f0-a193-712292073151" containerID="9afa8fc30f13de52a60b51e4e4bff083b81560bf147e83f1c641af1286efb865" exitCode=0 Feb 26 22:22:06 crc kubenswrapper[4910]: I0226 22:22:06.156928 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k2qkq" event={"ID":"6e054fd7-034c-43f0-a193-712292073151","Type":"ContainerDied","Data":"9afa8fc30f13de52a60b51e4e4bff083b81560bf147e83f1c641af1286efb865"} Feb 26 22:22:06 crc kubenswrapper[4910]: I0226 22:22:06.163216 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535742-trflz" Feb 26 22:22:06 crc kubenswrapper[4910]: I0226 22:22:06.163156 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535742-trflz" event={"ID":"463a94d8-3449-488b-97e1-80044ca36e5a","Type":"ContainerDied","Data":"32304d54314394bebdd0ae99576c816f6dabbf1d4e7796de9c0abf2791efbf30"} Feb 26 22:22:06 crc kubenswrapper[4910]: I0226 22:22:06.163310 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32304d54314394bebdd0ae99576c816f6dabbf1d4e7796de9c0abf2791efbf30" Feb 26 22:22:06 crc kubenswrapper[4910]: I0226 22:22:06.242966 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535736-jb2nz"] Feb 26 22:22:06 crc kubenswrapper[4910]: I0226 22:22:06.272356 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535736-jb2nz"] Feb 26 22:22:07 crc kubenswrapper[4910]: I0226 22:22:07.175968 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k2qkq" event={"ID":"6e054fd7-034c-43f0-a193-712292073151","Type":"ContainerStarted","Data":"6c0415c2bf13e7dd15506bea76504a589b9fa9a5977030ebed6a91fde80f8bed"} Feb 26 22:22:07 crc kubenswrapper[4910]: I0226 22:22:07.204534 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k2qkq" podStartSLOduration=3.745619316 podStartE2EDuration="9.204514034s" podCreationTimestamp="2026-02-26 22:21:58 +0000 UTC" firstStartedPulling="2026-02-26 22:22:01.091453883 +0000 UTC m=+1606.170944414" lastFinishedPulling="2026-02-26 22:22:06.550348561 +0000 UTC m=+1611.629839132" observedRunningTime="2026-02-26 22:22:07.194968515 +0000 UTC m=+1612.274459086" watchObservedRunningTime="2026-02-26 22:22:07.204514034 +0000 UTC m=+1612.284004575" Feb 26 22:22:07 crc kubenswrapper[4910]: I0226 22:22:07.933405 4910 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99d069cf-0b32-4927-ba6c-367ce8cfa0c5" path="/var/lib/kubelet/pods/99d069cf-0b32-4927-ba6c-367ce8cfa0c5/volumes" Feb 26 22:22:09 crc kubenswrapper[4910]: I0226 22:22:09.373761 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k2qkq" Feb 26 22:22:09 crc kubenswrapper[4910]: I0226 22:22:09.374137 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k2qkq" Feb 26 22:22:10 crc kubenswrapper[4910]: I0226 22:22:10.255796 4910 generic.go:334] "Generic (PLEG): container finished" podID="34a46ff4-b5ae-4012-bbec-7601cd6a6a5a" containerID="7afe256a8954a3ddf14dc6da859b43e6cf7bb1e0f8b38afa0dc5f0d39262426d" exitCode=0 Feb 26 22:22:10 crc kubenswrapper[4910]: I0226 22:22:10.255910 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl" event={"ID":"34a46ff4-b5ae-4012-bbec-7601cd6a6a5a","Type":"ContainerDied","Data":"7afe256a8954a3ddf14dc6da859b43e6cf7bb1e0f8b38afa0dc5f0d39262426d"} Feb 26 22:22:10 crc kubenswrapper[4910]: I0226 22:22:10.460974 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-k2qkq" podUID="6e054fd7-034c-43f0-a193-712292073151" containerName="registry-server" probeResult="failure" output=< Feb 26 22:22:10 crc kubenswrapper[4910]: timeout: failed to connect service ":50051" within 1s Feb 26 22:22:10 crc kubenswrapper[4910]: > Feb 26 22:22:11 crc kubenswrapper[4910]: I0226 22:22:11.821885 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl" Feb 26 22:22:11 crc kubenswrapper[4910]: I0226 22:22:11.900181 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rwb\" (UniqueName: \"kubernetes.io/projected/34a46ff4-b5ae-4012-bbec-7601cd6a6a5a-kube-api-access-z5rwb\") pod \"34a46ff4-b5ae-4012-bbec-7601cd6a6a5a\" (UID: \"34a46ff4-b5ae-4012-bbec-7601cd6a6a5a\") " Feb 26 22:22:11 crc kubenswrapper[4910]: I0226 22:22:11.900264 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34a46ff4-b5ae-4012-bbec-7601cd6a6a5a-inventory\") pod \"34a46ff4-b5ae-4012-bbec-7601cd6a6a5a\" (UID: \"34a46ff4-b5ae-4012-bbec-7601cd6a6a5a\") " Feb 26 22:22:11 crc kubenswrapper[4910]: I0226 22:22:11.900420 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34a46ff4-b5ae-4012-bbec-7601cd6a6a5a-ssh-key-openstack-edpm-ipam\") pod \"34a46ff4-b5ae-4012-bbec-7601cd6a6a5a\" (UID: \"34a46ff4-b5ae-4012-bbec-7601cd6a6a5a\") " Feb 26 22:22:11 crc kubenswrapper[4910]: I0226 22:22:11.900533 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a46ff4-b5ae-4012-bbec-7601cd6a6a5a-repo-setup-combined-ca-bundle\") pod \"34a46ff4-b5ae-4012-bbec-7601cd6a6a5a\" (UID: \"34a46ff4-b5ae-4012-bbec-7601cd6a6a5a\") " Feb 26 22:22:11 crc kubenswrapper[4910]: I0226 22:22:11.905757 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a46ff4-b5ae-4012-bbec-7601cd6a6a5a-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "34a46ff4-b5ae-4012-bbec-7601cd6a6a5a" (UID: "34a46ff4-b5ae-4012-bbec-7601cd6a6a5a"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:22:11 crc kubenswrapper[4910]: I0226 22:22:11.906439 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34a46ff4-b5ae-4012-bbec-7601cd6a6a5a-kube-api-access-z5rwb" (OuterVolumeSpecName: "kube-api-access-z5rwb") pod "34a46ff4-b5ae-4012-bbec-7601cd6a6a5a" (UID: "34a46ff4-b5ae-4012-bbec-7601cd6a6a5a"). InnerVolumeSpecName "kube-api-access-z5rwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:22:11 crc kubenswrapper[4910]: I0226 22:22:11.940277 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a46ff4-b5ae-4012-bbec-7601cd6a6a5a-inventory" (OuterVolumeSpecName: "inventory") pod "34a46ff4-b5ae-4012-bbec-7601cd6a6a5a" (UID: "34a46ff4-b5ae-4012-bbec-7601cd6a6a5a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:22:11 crc kubenswrapper[4910]: I0226 22:22:11.947540 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a46ff4-b5ae-4012-bbec-7601cd6a6a5a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "34a46ff4-b5ae-4012-bbec-7601cd6a6a5a" (UID: "34a46ff4-b5ae-4012-bbec-7601cd6a6a5a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:22:12 crc kubenswrapper[4910]: I0226 22:22:12.005042 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5rwb\" (UniqueName: \"kubernetes.io/projected/34a46ff4-b5ae-4012-bbec-7601cd6a6a5a-kube-api-access-z5rwb\") on node \"crc\" DevicePath \"\"" Feb 26 22:22:12 crc kubenswrapper[4910]: I0226 22:22:12.005082 4910 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34a46ff4-b5ae-4012-bbec-7601cd6a6a5a-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 22:22:12 crc kubenswrapper[4910]: I0226 22:22:12.005097 4910 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34a46ff4-b5ae-4012-bbec-7601cd6a6a5a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 22:22:12 crc kubenswrapper[4910]: I0226 22:22:12.005109 4910 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a46ff4-b5ae-4012-bbec-7601cd6a6a5a-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:22:12 crc kubenswrapper[4910]: I0226 22:22:12.282308 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl" event={"ID":"34a46ff4-b5ae-4012-bbec-7601cd6a6a5a","Type":"ContainerDied","Data":"e4109f2c025f913c693fc3f60edf53bcc0475eda0d24f1055a39c7eabdb18aae"} Feb 26 22:22:12 crc kubenswrapper[4910]: I0226 22:22:12.282367 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4109f2c025f913c693fc3f60edf53bcc0475eda0d24f1055a39c7eabdb18aae" Feb 26 22:22:12 crc kubenswrapper[4910]: I0226 22:22:12.282412 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl" Feb 26 22:22:12 crc kubenswrapper[4910]: I0226 22:22:12.392733 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2cgvf"] Feb 26 22:22:12 crc kubenswrapper[4910]: E0226 22:22:12.393492 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="463a94d8-3449-488b-97e1-80044ca36e5a" containerName="oc" Feb 26 22:22:12 crc kubenswrapper[4910]: I0226 22:22:12.393524 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="463a94d8-3449-488b-97e1-80044ca36e5a" containerName="oc" Feb 26 22:22:12 crc kubenswrapper[4910]: E0226 22:22:12.393585 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a46ff4-b5ae-4012-bbec-7601cd6a6a5a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 26 22:22:12 crc kubenswrapper[4910]: I0226 22:22:12.393599 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a46ff4-b5ae-4012-bbec-7601cd6a6a5a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 26 22:22:12 crc kubenswrapper[4910]: I0226 22:22:12.394016 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="34a46ff4-b5ae-4012-bbec-7601cd6a6a5a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 26 22:22:12 crc kubenswrapper[4910]: I0226 22:22:12.394075 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="463a94d8-3449-488b-97e1-80044ca36e5a" containerName="oc" Feb 26 22:22:12 crc kubenswrapper[4910]: I0226 22:22:12.395101 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2cgvf" Feb 26 22:22:12 crc kubenswrapper[4910]: I0226 22:22:12.397091 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ktmgl" Feb 26 22:22:12 crc kubenswrapper[4910]: I0226 22:22:12.397290 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 22:22:12 crc kubenswrapper[4910]: I0226 22:22:12.398620 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 22:22:12 crc kubenswrapper[4910]: I0226 22:22:12.398991 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 22:22:12 crc kubenswrapper[4910]: I0226 22:22:12.417571 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2cgvf"] Feb 26 22:22:12 crc kubenswrapper[4910]: I0226 22:22:12.515801 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b7jk\" (UniqueName: \"kubernetes.io/projected/0cf7ce56-e96f-47ae-ae77-8e32396de8e4-kube-api-access-2b7jk\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2cgvf\" (UID: \"0cf7ce56-e96f-47ae-ae77-8e32396de8e4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2cgvf" Feb 26 22:22:12 crc kubenswrapper[4910]: I0226 22:22:12.516172 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0cf7ce56-e96f-47ae-ae77-8e32396de8e4-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2cgvf\" (UID: \"0cf7ce56-e96f-47ae-ae77-8e32396de8e4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2cgvf" Feb 26 22:22:12 crc kubenswrapper[4910]: I0226 22:22:12.516267 4910 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cf7ce56-e96f-47ae-ae77-8e32396de8e4-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2cgvf\" (UID: \"0cf7ce56-e96f-47ae-ae77-8e32396de8e4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2cgvf" Feb 26 22:22:12 crc kubenswrapper[4910]: I0226 22:22:12.618060 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cf7ce56-e96f-47ae-ae77-8e32396de8e4-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2cgvf\" (UID: \"0cf7ce56-e96f-47ae-ae77-8e32396de8e4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2cgvf" Feb 26 22:22:12 crc kubenswrapper[4910]: I0226 22:22:12.618997 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b7jk\" (UniqueName: \"kubernetes.io/projected/0cf7ce56-e96f-47ae-ae77-8e32396de8e4-kube-api-access-2b7jk\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2cgvf\" (UID: \"0cf7ce56-e96f-47ae-ae77-8e32396de8e4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2cgvf" Feb 26 22:22:12 crc kubenswrapper[4910]: I0226 22:22:12.619435 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0cf7ce56-e96f-47ae-ae77-8e32396de8e4-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2cgvf\" (UID: \"0cf7ce56-e96f-47ae-ae77-8e32396de8e4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2cgvf" Feb 26 22:22:12 crc kubenswrapper[4910]: I0226 22:22:12.623424 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cf7ce56-e96f-47ae-ae77-8e32396de8e4-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2cgvf\" (UID: 
\"0cf7ce56-e96f-47ae-ae77-8e32396de8e4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2cgvf" Feb 26 22:22:12 crc kubenswrapper[4910]: I0226 22:22:12.623849 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0cf7ce56-e96f-47ae-ae77-8e32396de8e4-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2cgvf\" (UID: \"0cf7ce56-e96f-47ae-ae77-8e32396de8e4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2cgvf" Feb 26 22:22:12 crc kubenswrapper[4910]: I0226 22:22:12.637272 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b7jk\" (UniqueName: \"kubernetes.io/projected/0cf7ce56-e96f-47ae-ae77-8e32396de8e4-kube-api-access-2b7jk\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2cgvf\" (UID: \"0cf7ce56-e96f-47ae-ae77-8e32396de8e4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2cgvf" Feb 26 22:22:12 crc kubenswrapper[4910]: I0226 22:22:12.717563 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2cgvf" Feb 26 22:22:13 crc kubenswrapper[4910]: I0226 22:22:13.263183 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2cgvf"] Feb 26 22:22:13 crc kubenswrapper[4910]: W0226 22:22:13.270442 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0cf7ce56_e96f_47ae_ae77_8e32396de8e4.slice/crio-8d2f68d4b48add3401b28c9b3c665e6fdc2061c869d7b1e758899a102eff4002 WatchSource:0}: Error finding container 8d2f68d4b48add3401b28c9b3c665e6fdc2061c869d7b1e758899a102eff4002: Status 404 returned error can't find the container with id 8d2f68d4b48add3401b28c9b3c665e6fdc2061c869d7b1e758899a102eff4002 Feb 26 22:22:13 crc kubenswrapper[4910]: I0226 22:22:13.295637 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2cgvf" event={"ID":"0cf7ce56-e96f-47ae-ae77-8e32396de8e4","Type":"ContainerStarted","Data":"8d2f68d4b48add3401b28c9b3c665e6fdc2061c869d7b1e758899a102eff4002"} Feb 26 22:22:14 crc kubenswrapper[4910]: I0226 22:22:14.314499 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2cgvf" event={"ID":"0cf7ce56-e96f-47ae-ae77-8e32396de8e4","Type":"ContainerStarted","Data":"191bf82d4261cad6c0c474be75cc81e450cc04af83a6a58165d90fd1b7ad389a"} Feb 26 22:22:14 crc kubenswrapper[4910]: I0226 22:22:14.342848 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2cgvf" podStartSLOduration=1.9215594230000002 podStartE2EDuration="2.342827274s" podCreationTimestamp="2026-02-26 22:22:12 +0000 UTC" firstStartedPulling="2026-02-26 22:22:13.27335377 +0000 UTC m=+1618.352844311" lastFinishedPulling="2026-02-26 22:22:13.694621601 +0000 UTC m=+1618.774112162" 
observedRunningTime="2026-02-26 22:22:14.334779975 +0000 UTC m=+1619.414270546" watchObservedRunningTime="2026-02-26 22:22:14.342827274 +0000 UTC m=+1619.422317815" Feb 26 22:22:17 crc kubenswrapper[4910]: I0226 22:22:17.354568 4910 generic.go:334] "Generic (PLEG): container finished" podID="0cf7ce56-e96f-47ae-ae77-8e32396de8e4" containerID="191bf82d4261cad6c0c474be75cc81e450cc04af83a6a58165d90fd1b7ad389a" exitCode=0 Feb 26 22:22:17 crc kubenswrapper[4910]: I0226 22:22:17.354637 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2cgvf" event={"ID":"0cf7ce56-e96f-47ae-ae77-8e32396de8e4","Type":"ContainerDied","Data":"191bf82d4261cad6c0c474be75cc81e450cc04af83a6a58165d90fd1b7ad389a"} Feb 26 22:22:18 crc kubenswrapper[4910]: I0226 22:22:18.941777 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2cgvf" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.069336 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0cf7ce56-e96f-47ae-ae77-8e32396de8e4-ssh-key-openstack-edpm-ipam\") pod \"0cf7ce56-e96f-47ae-ae77-8e32396de8e4\" (UID: \"0cf7ce56-e96f-47ae-ae77-8e32396de8e4\") " Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.069556 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cf7ce56-e96f-47ae-ae77-8e32396de8e4-inventory\") pod \"0cf7ce56-e96f-47ae-ae77-8e32396de8e4\" (UID: \"0cf7ce56-e96f-47ae-ae77-8e32396de8e4\") " Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.069734 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b7jk\" (UniqueName: \"kubernetes.io/projected/0cf7ce56-e96f-47ae-ae77-8e32396de8e4-kube-api-access-2b7jk\") pod \"0cf7ce56-e96f-47ae-ae77-8e32396de8e4\" 
(UID: \"0cf7ce56-e96f-47ae-ae77-8e32396de8e4\") " Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.075959 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cf7ce56-e96f-47ae-ae77-8e32396de8e4-kube-api-access-2b7jk" (OuterVolumeSpecName: "kube-api-access-2b7jk") pod "0cf7ce56-e96f-47ae-ae77-8e32396de8e4" (UID: "0cf7ce56-e96f-47ae-ae77-8e32396de8e4"). InnerVolumeSpecName "kube-api-access-2b7jk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.113449 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf7ce56-e96f-47ae-ae77-8e32396de8e4-inventory" (OuterVolumeSpecName: "inventory") pod "0cf7ce56-e96f-47ae-ae77-8e32396de8e4" (UID: "0cf7ce56-e96f-47ae-ae77-8e32396de8e4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.122511 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf7ce56-e96f-47ae-ae77-8e32396de8e4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0cf7ce56-e96f-47ae-ae77-8e32396de8e4" (UID: "0cf7ce56-e96f-47ae-ae77-8e32396de8e4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.172264 4910 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cf7ce56-e96f-47ae-ae77-8e32396de8e4-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.172300 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2b7jk\" (UniqueName: \"kubernetes.io/projected/0cf7ce56-e96f-47ae-ae77-8e32396de8e4-kube-api-access-2b7jk\") on node \"crc\" DevicePath \"\"" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.172310 4910 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0cf7ce56-e96f-47ae-ae77-8e32396de8e4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.383825 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2cgvf" event={"ID":"0cf7ce56-e96f-47ae-ae77-8e32396de8e4","Type":"ContainerDied","Data":"8d2f68d4b48add3401b28c9b3c665e6fdc2061c869d7b1e758899a102eff4002"} Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.383880 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d2f68d4b48add3401b28c9b3c665e6fdc2061c869d7b1e758899a102eff4002" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.383908 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2cgvf" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.475188 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m"] Feb 26 22:22:19 crc kubenswrapper[4910]: E0226 22:22:19.477921 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf7ce56-e96f-47ae-ae77-8e32396de8e4" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.477943 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf7ce56-e96f-47ae-ae77-8e32396de8e4" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.478224 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf7ce56-e96f-47ae-ae77-8e32396de8e4" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.478991 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.480881 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.480943 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.481979 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ktmgl" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.484132 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.507953 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m"] Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.517698 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k2qkq" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.561797 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k2qkq" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.581575 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4m6s\" (UniqueName: \"kubernetes.io/projected/dbad4d26-7d58-4969-a25f-6b67dc18b9e9-kube-api-access-b4m6s\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m\" (UID: \"dbad4d26-7d58-4969-a25f-6b67dc18b9e9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.581754 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbad4d26-7d58-4969-a25f-6b67dc18b9e9-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m\" (UID: \"dbad4d26-7d58-4969-a25f-6b67dc18b9e9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.581782 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbad4d26-7d58-4969-a25f-6b67dc18b9e9-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m\" (UID: \"dbad4d26-7d58-4969-a25f-6b67dc18b9e9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.581815 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbad4d26-7d58-4969-a25f-6b67dc18b9e9-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m\" (UID: \"dbad4d26-7d58-4969-a25f-6b67dc18b9e9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.683688 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4m6s\" (UniqueName: \"kubernetes.io/projected/dbad4d26-7d58-4969-a25f-6b67dc18b9e9-kube-api-access-b4m6s\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m\" (UID: \"dbad4d26-7d58-4969-a25f-6b67dc18b9e9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.683801 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbad4d26-7d58-4969-a25f-6b67dc18b9e9-ssh-key-openstack-edpm-ipam\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m\" (UID: \"dbad4d26-7d58-4969-a25f-6b67dc18b9e9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.683827 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbad4d26-7d58-4969-a25f-6b67dc18b9e9-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m\" (UID: \"dbad4d26-7d58-4969-a25f-6b67dc18b9e9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.683856 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbad4d26-7d58-4969-a25f-6b67dc18b9e9-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m\" (UID: \"dbad4d26-7d58-4969-a25f-6b67dc18b9e9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.688502 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbad4d26-7d58-4969-a25f-6b67dc18b9e9-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m\" (UID: \"dbad4d26-7d58-4969-a25f-6b67dc18b9e9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.689448 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbad4d26-7d58-4969-a25f-6b67dc18b9e9-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m\" (UID: \"dbad4d26-7d58-4969-a25f-6b67dc18b9e9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.689907 4910 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbad4d26-7d58-4969-a25f-6b67dc18b9e9-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m\" (UID: \"dbad4d26-7d58-4969-a25f-6b67dc18b9e9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.701122 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4m6s\" (UniqueName: \"kubernetes.io/projected/dbad4d26-7d58-4969-a25f-6b67dc18b9e9-kube-api-access-b4m6s\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m\" (UID: \"dbad4d26-7d58-4969-a25f-6b67dc18b9e9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m" Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.757733 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k2qkq"] Feb 26 22:22:19 crc kubenswrapper[4910]: I0226 22:22:19.797525 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m" Feb 26 22:22:20 crc kubenswrapper[4910]: W0226 22:22:20.361744 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbad4d26_7d58_4969_a25f_6b67dc18b9e9.slice/crio-9f6f023592c42a20aeec84302c6e8c9ba48b334e7a6b73ee6d98ea38b5f89aff WatchSource:0}: Error finding container 9f6f023592c42a20aeec84302c6e8c9ba48b334e7a6b73ee6d98ea38b5f89aff: Status 404 returned error can't find the container with id 9f6f023592c42a20aeec84302c6e8c9ba48b334e7a6b73ee6d98ea38b5f89aff Feb 26 22:22:20 crc kubenswrapper[4910]: I0226 22:22:20.362003 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m"] Feb 26 22:22:20 crc kubenswrapper[4910]: I0226 22:22:20.399947 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m" event={"ID":"dbad4d26-7d58-4969-a25f-6b67dc18b9e9","Type":"ContainerStarted","Data":"9f6f023592c42a20aeec84302c6e8c9ba48b334e7a6b73ee6d98ea38b5f89aff"} Feb 26 22:22:21 crc kubenswrapper[4910]: I0226 22:22:21.412268 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k2qkq" podUID="6e054fd7-034c-43f0-a193-712292073151" containerName="registry-server" containerID="cri-o://6c0415c2bf13e7dd15506bea76504a589b9fa9a5977030ebed6a91fde80f8bed" gracePeriod=2 Feb 26 22:22:21 crc kubenswrapper[4910]: I0226 22:22:21.416843 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m" event={"ID":"dbad4d26-7d58-4969-a25f-6b67dc18b9e9","Type":"ContainerStarted","Data":"b50806cd2c9dc7dc2b32a261d36a9fe902673f3d810c7ee2a6ac8d5b1c178449"} Feb 26 22:22:21 crc kubenswrapper[4910]: I0226 22:22:21.454633 4910 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m" podStartSLOduration=2.043984443 podStartE2EDuration="2.454603883s" podCreationTimestamp="2026-02-26 22:22:19 +0000 UTC" firstStartedPulling="2026-02-26 22:22:20.366429343 +0000 UTC m=+1625.445919884" lastFinishedPulling="2026-02-26 22:22:20.777048773 +0000 UTC m=+1625.856539324" observedRunningTime="2026-02-26 22:22:21.439253326 +0000 UTC m=+1626.518743877" watchObservedRunningTime="2026-02-26 22:22:21.454603883 +0000 UTC m=+1626.534094434" Feb 26 22:22:21 crc kubenswrapper[4910]: I0226 22:22:21.961452 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k2qkq" Feb 26 22:22:22 crc kubenswrapper[4910]: I0226 22:22:22.058704 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e054fd7-034c-43f0-a193-712292073151-utilities\") pod \"6e054fd7-034c-43f0-a193-712292073151\" (UID: \"6e054fd7-034c-43f0-a193-712292073151\") " Feb 26 22:22:22 crc kubenswrapper[4910]: I0226 22:22:22.058792 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e054fd7-034c-43f0-a193-712292073151-catalog-content\") pod \"6e054fd7-034c-43f0-a193-712292073151\" (UID: \"6e054fd7-034c-43f0-a193-712292073151\") " Feb 26 22:22:22 crc kubenswrapper[4910]: I0226 22:22:22.058970 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghvqb\" (UniqueName: \"kubernetes.io/projected/6e054fd7-034c-43f0-a193-712292073151-kube-api-access-ghvqb\") pod \"6e054fd7-034c-43f0-a193-712292073151\" (UID: \"6e054fd7-034c-43f0-a193-712292073151\") " Feb 26 22:22:22 crc kubenswrapper[4910]: I0226 22:22:22.059931 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/6e054fd7-034c-43f0-a193-712292073151-utilities" (OuterVolumeSpecName: "utilities") pod "6e054fd7-034c-43f0-a193-712292073151" (UID: "6e054fd7-034c-43f0-a193-712292073151"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:22:22 crc kubenswrapper[4910]: I0226 22:22:22.064578 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e054fd7-034c-43f0-a193-712292073151-kube-api-access-ghvqb" (OuterVolumeSpecName: "kube-api-access-ghvqb") pod "6e054fd7-034c-43f0-a193-712292073151" (UID: "6e054fd7-034c-43f0-a193-712292073151"). InnerVolumeSpecName "kube-api-access-ghvqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:22:22 crc kubenswrapper[4910]: I0226 22:22:22.111354 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e054fd7-034c-43f0-a193-712292073151-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e054fd7-034c-43f0-a193-712292073151" (UID: "6e054fd7-034c-43f0-a193-712292073151"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:22:22 crc kubenswrapper[4910]: I0226 22:22:22.161741 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghvqb\" (UniqueName: \"kubernetes.io/projected/6e054fd7-034c-43f0-a193-712292073151-kube-api-access-ghvqb\") on node \"crc\" DevicePath \"\"" Feb 26 22:22:22 crc kubenswrapper[4910]: I0226 22:22:22.161791 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e054fd7-034c-43f0-a193-712292073151-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 22:22:22 crc kubenswrapper[4910]: I0226 22:22:22.161810 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e054fd7-034c-43f0-a193-712292073151-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 22:22:22 crc kubenswrapper[4910]: I0226 22:22:22.430519 4910 generic.go:334] "Generic (PLEG): container finished" podID="6e054fd7-034c-43f0-a193-712292073151" containerID="6c0415c2bf13e7dd15506bea76504a589b9fa9a5977030ebed6a91fde80f8bed" exitCode=0 Feb 26 22:22:22 crc kubenswrapper[4910]: I0226 22:22:22.430631 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k2qkq" event={"ID":"6e054fd7-034c-43f0-a193-712292073151","Type":"ContainerDied","Data":"6c0415c2bf13e7dd15506bea76504a589b9fa9a5977030ebed6a91fde80f8bed"} Feb 26 22:22:22 crc kubenswrapper[4910]: I0226 22:22:22.430695 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k2qkq" event={"ID":"6e054fd7-034c-43f0-a193-712292073151","Type":"ContainerDied","Data":"e0ebefbf8e00806875a6ba4b7d981f18b4ad96d019bb0020d7e7fc21fd8a0759"} Feb 26 22:22:22 crc kubenswrapper[4910]: I0226 22:22:22.430713 4910 scope.go:117] "RemoveContainer" containerID="6c0415c2bf13e7dd15506bea76504a589b9fa9a5977030ebed6a91fde80f8bed" Feb 26 22:22:22 crc kubenswrapper[4910]: I0226 
22:22:22.433542 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k2qkq" Feb 26 22:22:22 crc kubenswrapper[4910]: I0226 22:22:22.488950 4910 scope.go:117] "RemoveContainer" containerID="9afa8fc30f13de52a60b51e4e4bff083b81560bf147e83f1c641af1286efb865" Feb 26 22:22:22 crc kubenswrapper[4910]: I0226 22:22:22.497888 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k2qkq"] Feb 26 22:22:22 crc kubenswrapper[4910]: I0226 22:22:22.510919 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k2qkq"] Feb 26 22:22:22 crc kubenswrapper[4910]: I0226 22:22:22.527940 4910 scope.go:117] "RemoveContainer" containerID="e2e9ec570b650c461874522dc3d9fcbbe320685fc6e2feb21ff0e610f91c3d32" Feb 26 22:22:22 crc kubenswrapper[4910]: I0226 22:22:22.595940 4910 scope.go:117] "RemoveContainer" containerID="6c0415c2bf13e7dd15506bea76504a589b9fa9a5977030ebed6a91fde80f8bed" Feb 26 22:22:22 crc kubenswrapper[4910]: E0226 22:22:22.596447 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c0415c2bf13e7dd15506bea76504a589b9fa9a5977030ebed6a91fde80f8bed\": container with ID starting with 6c0415c2bf13e7dd15506bea76504a589b9fa9a5977030ebed6a91fde80f8bed not found: ID does not exist" containerID="6c0415c2bf13e7dd15506bea76504a589b9fa9a5977030ebed6a91fde80f8bed" Feb 26 22:22:22 crc kubenswrapper[4910]: I0226 22:22:22.596490 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c0415c2bf13e7dd15506bea76504a589b9fa9a5977030ebed6a91fde80f8bed"} err="failed to get container status \"6c0415c2bf13e7dd15506bea76504a589b9fa9a5977030ebed6a91fde80f8bed\": rpc error: code = NotFound desc = could not find container \"6c0415c2bf13e7dd15506bea76504a589b9fa9a5977030ebed6a91fde80f8bed\": container with ID starting with 
6c0415c2bf13e7dd15506bea76504a589b9fa9a5977030ebed6a91fde80f8bed not found: ID does not exist" Feb 26 22:22:22 crc kubenswrapper[4910]: I0226 22:22:22.596517 4910 scope.go:117] "RemoveContainer" containerID="9afa8fc30f13de52a60b51e4e4bff083b81560bf147e83f1c641af1286efb865" Feb 26 22:22:22 crc kubenswrapper[4910]: E0226 22:22:22.596738 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9afa8fc30f13de52a60b51e4e4bff083b81560bf147e83f1c641af1286efb865\": container with ID starting with 9afa8fc30f13de52a60b51e4e4bff083b81560bf147e83f1c641af1286efb865 not found: ID does not exist" containerID="9afa8fc30f13de52a60b51e4e4bff083b81560bf147e83f1c641af1286efb865" Feb 26 22:22:22 crc kubenswrapper[4910]: I0226 22:22:22.596769 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9afa8fc30f13de52a60b51e4e4bff083b81560bf147e83f1c641af1286efb865"} err="failed to get container status \"9afa8fc30f13de52a60b51e4e4bff083b81560bf147e83f1c641af1286efb865\": rpc error: code = NotFound desc = could not find container \"9afa8fc30f13de52a60b51e4e4bff083b81560bf147e83f1c641af1286efb865\": container with ID starting with 9afa8fc30f13de52a60b51e4e4bff083b81560bf147e83f1c641af1286efb865 not found: ID does not exist" Feb 26 22:22:22 crc kubenswrapper[4910]: I0226 22:22:22.596788 4910 scope.go:117] "RemoveContainer" containerID="e2e9ec570b650c461874522dc3d9fcbbe320685fc6e2feb21ff0e610f91c3d32" Feb 26 22:22:22 crc kubenswrapper[4910]: E0226 22:22:22.597008 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2e9ec570b650c461874522dc3d9fcbbe320685fc6e2feb21ff0e610f91c3d32\": container with ID starting with e2e9ec570b650c461874522dc3d9fcbbe320685fc6e2feb21ff0e610f91c3d32 not found: ID does not exist" containerID="e2e9ec570b650c461874522dc3d9fcbbe320685fc6e2feb21ff0e610f91c3d32" Feb 26 22:22:22 crc 
kubenswrapper[4910]: I0226 22:22:22.597037 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2e9ec570b650c461874522dc3d9fcbbe320685fc6e2feb21ff0e610f91c3d32"} err="failed to get container status \"e2e9ec570b650c461874522dc3d9fcbbe320685fc6e2feb21ff0e610f91c3d32\": rpc error: code = NotFound desc = could not find container \"e2e9ec570b650c461874522dc3d9fcbbe320685fc6e2feb21ff0e610f91c3d32\": container with ID starting with e2e9ec570b650c461874522dc3d9fcbbe320685fc6e2feb21ff0e610f91c3d32 not found: ID does not exist" Feb 26 22:22:23 crc kubenswrapper[4910]: I0226 22:22:23.920015 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e054fd7-034c-43f0-a193-712292073151" path="/var/lib/kubelet/pods/6e054fd7-034c-43f0-a193-712292073151/volumes" Feb 26 22:22:25 crc kubenswrapper[4910]: I0226 22:22:25.021914 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l2qn4"] Feb 26 22:22:25 crc kubenswrapper[4910]: E0226 22:22:25.022999 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e054fd7-034c-43f0-a193-712292073151" containerName="extract-utilities" Feb 26 22:22:25 crc kubenswrapper[4910]: I0226 22:22:25.023023 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e054fd7-034c-43f0-a193-712292073151" containerName="extract-utilities" Feb 26 22:22:25 crc kubenswrapper[4910]: E0226 22:22:25.023087 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e054fd7-034c-43f0-a193-712292073151" containerName="registry-server" Feb 26 22:22:25 crc kubenswrapper[4910]: I0226 22:22:25.023097 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e054fd7-034c-43f0-a193-712292073151" containerName="registry-server" Feb 26 22:22:25 crc kubenswrapper[4910]: E0226 22:22:25.023142 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e054fd7-034c-43f0-a193-712292073151" containerName="extract-content" Feb 
26 22:22:25 crc kubenswrapper[4910]: I0226 22:22:25.023150 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e054fd7-034c-43f0-a193-712292073151" containerName="extract-content" Feb 26 22:22:25 crc kubenswrapper[4910]: I0226 22:22:25.023925 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e054fd7-034c-43f0-a193-712292073151" containerName="registry-server" Feb 26 22:22:25 crc kubenswrapper[4910]: I0226 22:22:25.027572 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l2qn4" Feb 26 22:22:25 crc kubenswrapper[4910]: I0226 22:22:25.038309 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l2qn4"] Feb 26 22:22:25 crc kubenswrapper[4910]: I0226 22:22:25.138454 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbjf9\" (UniqueName: \"kubernetes.io/projected/98f06109-326e-4c8c-9fa6-b2e49e593731-kube-api-access-qbjf9\") pod \"redhat-marketplace-l2qn4\" (UID: \"98f06109-326e-4c8c-9fa6-b2e49e593731\") " pod="openshift-marketplace/redhat-marketplace-l2qn4" Feb 26 22:22:25 crc kubenswrapper[4910]: I0226 22:22:25.138563 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98f06109-326e-4c8c-9fa6-b2e49e593731-catalog-content\") pod \"redhat-marketplace-l2qn4\" (UID: \"98f06109-326e-4c8c-9fa6-b2e49e593731\") " pod="openshift-marketplace/redhat-marketplace-l2qn4" Feb 26 22:22:25 crc kubenswrapper[4910]: I0226 22:22:25.138899 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98f06109-326e-4c8c-9fa6-b2e49e593731-utilities\") pod \"redhat-marketplace-l2qn4\" (UID: \"98f06109-326e-4c8c-9fa6-b2e49e593731\") " pod="openshift-marketplace/redhat-marketplace-l2qn4" 
Feb 26 22:22:25 crc kubenswrapper[4910]: I0226 22:22:25.240983 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98f06109-326e-4c8c-9fa6-b2e49e593731-catalog-content\") pod \"redhat-marketplace-l2qn4\" (UID: \"98f06109-326e-4c8c-9fa6-b2e49e593731\") " pod="openshift-marketplace/redhat-marketplace-l2qn4" Feb 26 22:22:25 crc kubenswrapper[4910]: I0226 22:22:25.241201 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98f06109-326e-4c8c-9fa6-b2e49e593731-utilities\") pod \"redhat-marketplace-l2qn4\" (UID: \"98f06109-326e-4c8c-9fa6-b2e49e593731\") " pod="openshift-marketplace/redhat-marketplace-l2qn4" Feb 26 22:22:25 crc kubenswrapper[4910]: I0226 22:22:25.241279 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbjf9\" (UniqueName: \"kubernetes.io/projected/98f06109-326e-4c8c-9fa6-b2e49e593731-kube-api-access-qbjf9\") pod \"redhat-marketplace-l2qn4\" (UID: \"98f06109-326e-4c8c-9fa6-b2e49e593731\") " pod="openshift-marketplace/redhat-marketplace-l2qn4" Feb 26 22:22:25 crc kubenswrapper[4910]: I0226 22:22:25.241973 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98f06109-326e-4c8c-9fa6-b2e49e593731-utilities\") pod \"redhat-marketplace-l2qn4\" (UID: \"98f06109-326e-4c8c-9fa6-b2e49e593731\") " pod="openshift-marketplace/redhat-marketplace-l2qn4" Feb 26 22:22:25 crc kubenswrapper[4910]: I0226 22:22:25.242059 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98f06109-326e-4c8c-9fa6-b2e49e593731-catalog-content\") pod \"redhat-marketplace-l2qn4\" (UID: \"98f06109-326e-4c8c-9fa6-b2e49e593731\") " pod="openshift-marketplace/redhat-marketplace-l2qn4" Feb 26 22:22:25 crc kubenswrapper[4910]: I0226 
22:22:25.263051 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbjf9\" (UniqueName: \"kubernetes.io/projected/98f06109-326e-4c8c-9fa6-b2e49e593731-kube-api-access-qbjf9\") pod \"redhat-marketplace-l2qn4\" (UID: \"98f06109-326e-4c8c-9fa6-b2e49e593731\") " pod="openshift-marketplace/redhat-marketplace-l2qn4" Feb 26 22:22:25 crc kubenswrapper[4910]: I0226 22:22:25.349363 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l2qn4" Feb 26 22:22:25 crc kubenswrapper[4910]: I0226 22:22:25.815969 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l2qn4"] Feb 26 22:22:25 crc kubenswrapper[4910]: W0226 22:22:25.819041 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98f06109_326e_4c8c_9fa6_b2e49e593731.slice/crio-96bfc8ee31aea687bcaccd2166b78989ebbc32dd107c95920faf934e0347d0db WatchSource:0}: Error finding container 96bfc8ee31aea687bcaccd2166b78989ebbc32dd107c95920faf934e0347d0db: Status 404 returned error can't find the container with id 96bfc8ee31aea687bcaccd2166b78989ebbc32dd107c95920faf934e0347d0db Feb 26 22:22:26 crc kubenswrapper[4910]: I0226 22:22:26.490099 4910 generic.go:334] "Generic (PLEG): container finished" podID="98f06109-326e-4c8c-9fa6-b2e49e593731" containerID="9b5e8c41c8b7589ce4c0c12651cc4fa4e9b0d3f7362984b12456d6978e592d0b" exitCode=0 Feb 26 22:22:26 crc kubenswrapper[4910]: I0226 22:22:26.490304 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l2qn4" event={"ID":"98f06109-326e-4c8c-9fa6-b2e49e593731","Type":"ContainerDied","Data":"9b5e8c41c8b7589ce4c0c12651cc4fa4e9b0d3f7362984b12456d6978e592d0b"} Feb 26 22:22:26 crc kubenswrapper[4910]: I0226 22:22:26.491643 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-l2qn4" event={"ID":"98f06109-326e-4c8c-9fa6-b2e49e593731","Type":"ContainerStarted","Data":"96bfc8ee31aea687bcaccd2166b78989ebbc32dd107c95920faf934e0347d0db"} Feb 26 22:22:28 crc kubenswrapper[4910]: I0226 22:22:28.521498 4910 generic.go:334] "Generic (PLEG): container finished" podID="98f06109-326e-4c8c-9fa6-b2e49e593731" containerID="eddd22afc0b72e24cec4b55e3753a3e826176deda68232d7bc056e6dd9425dc4" exitCode=0 Feb 26 22:22:28 crc kubenswrapper[4910]: I0226 22:22:28.521590 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l2qn4" event={"ID":"98f06109-326e-4c8c-9fa6-b2e49e593731","Type":"ContainerDied","Data":"eddd22afc0b72e24cec4b55e3753a3e826176deda68232d7bc056e6dd9425dc4"} Feb 26 22:22:29 crc kubenswrapper[4910]: I0226 22:22:29.533945 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l2qn4" event={"ID":"98f06109-326e-4c8c-9fa6-b2e49e593731","Type":"ContainerStarted","Data":"7ee9f6b0fa800dacd5e6e980481db352fc225701ab7efe683d1c3a0eb1c50b84"} Feb 26 22:22:29 crc kubenswrapper[4910]: I0226 22:22:29.559228 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l2qn4" podStartSLOduration=3.157038741 podStartE2EDuration="5.559210975s" podCreationTimestamp="2026-02-26 22:22:24 +0000 UTC" firstStartedPulling="2026-02-26 22:22:26.493213824 +0000 UTC m=+1631.572704365" lastFinishedPulling="2026-02-26 22:22:28.895386068 +0000 UTC m=+1633.974876599" observedRunningTime="2026-02-26 22:22:29.554683642 +0000 UTC m=+1634.634174193" watchObservedRunningTime="2026-02-26 22:22:29.559210975 +0000 UTC m=+1634.638701516" Feb 26 22:22:30 crc kubenswrapper[4910]: I0226 22:22:30.936644 4910 scope.go:117] "RemoveContainer" containerID="a7be97dc228f66ff13403d63e2d6838104a25b05e8ad17cff0b27145d2968f62" Feb 26 22:22:30 crc kubenswrapper[4910]: I0226 22:22:30.976431 4910 scope.go:117] 
"RemoveContainer" containerID="87000a89b959093e720699295fffbff1d763fe99a0f9c28ab10c20d241ef10b9" Feb 26 22:22:35 crc kubenswrapper[4910]: I0226 22:22:35.350495 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l2qn4" Feb 26 22:22:35 crc kubenswrapper[4910]: I0226 22:22:35.351118 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l2qn4" Feb 26 22:22:35 crc kubenswrapper[4910]: I0226 22:22:35.440957 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l2qn4" Feb 26 22:22:35 crc kubenswrapper[4910]: I0226 22:22:35.687888 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l2qn4" Feb 26 22:22:35 crc kubenswrapper[4910]: I0226 22:22:35.755557 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l2qn4"] Feb 26 22:22:37 crc kubenswrapper[4910]: I0226 22:22:37.633415 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l2qn4" podUID="98f06109-326e-4c8c-9fa6-b2e49e593731" containerName="registry-server" containerID="cri-o://7ee9f6b0fa800dacd5e6e980481db352fc225701ab7efe683d1c3a0eb1c50b84" gracePeriod=2 Feb 26 22:22:38 crc kubenswrapper[4910]: I0226 22:22:38.286122 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l2qn4" Feb 26 22:22:38 crc kubenswrapper[4910]: I0226 22:22:38.360534 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98f06109-326e-4c8c-9fa6-b2e49e593731-utilities\") pod \"98f06109-326e-4c8c-9fa6-b2e49e593731\" (UID: \"98f06109-326e-4c8c-9fa6-b2e49e593731\") " Feb 26 22:22:38 crc kubenswrapper[4910]: I0226 22:22:38.360598 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98f06109-326e-4c8c-9fa6-b2e49e593731-catalog-content\") pod \"98f06109-326e-4c8c-9fa6-b2e49e593731\" (UID: \"98f06109-326e-4c8c-9fa6-b2e49e593731\") " Feb 26 22:22:38 crc kubenswrapper[4910]: I0226 22:22:38.360697 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbjf9\" (UniqueName: \"kubernetes.io/projected/98f06109-326e-4c8c-9fa6-b2e49e593731-kube-api-access-qbjf9\") pod \"98f06109-326e-4c8c-9fa6-b2e49e593731\" (UID: \"98f06109-326e-4c8c-9fa6-b2e49e593731\") " Feb 26 22:22:38 crc kubenswrapper[4910]: I0226 22:22:38.361444 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98f06109-326e-4c8c-9fa6-b2e49e593731-utilities" (OuterVolumeSpecName: "utilities") pod "98f06109-326e-4c8c-9fa6-b2e49e593731" (UID: "98f06109-326e-4c8c-9fa6-b2e49e593731"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:22:38 crc kubenswrapper[4910]: I0226 22:22:38.365984 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98f06109-326e-4c8c-9fa6-b2e49e593731-kube-api-access-qbjf9" (OuterVolumeSpecName: "kube-api-access-qbjf9") pod "98f06109-326e-4c8c-9fa6-b2e49e593731" (UID: "98f06109-326e-4c8c-9fa6-b2e49e593731"). InnerVolumeSpecName "kube-api-access-qbjf9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:22:38 crc kubenswrapper[4910]: I0226 22:22:38.383119 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98f06109-326e-4c8c-9fa6-b2e49e593731-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98f06109-326e-4c8c-9fa6-b2e49e593731" (UID: "98f06109-326e-4c8c-9fa6-b2e49e593731"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:22:38 crc kubenswrapper[4910]: I0226 22:22:38.463433 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbjf9\" (UniqueName: \"kubernetes.io/projected/98f06109-326e-4c8c-9fa6-b2e49e593731-kube-api-access-qbjf9\") on node \"crc\" DevicePath \"\"" Feb 26 22:22:38 crc kubenswrapper[4910]: I0226 22:22:38.463465 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98f06109-326e-4c8c-9fa6-b2e49e593731-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 22:22:38 crc kubenswrapper[4910]: I0226 22:22:38.463476 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98f06109-326e-4c8c-9fa6-b2e49e593731-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 22:22:38 crc kubenswrapper[4910]: I0226 22:22:38.645858 4910 generic.go:334] "Generic (PLEG): container finished" podID="98f06109-326e-4c8c-9fa6-b2e49e593731" containerID="7ee9f6b0fa800dacd5e6e980481db352fc225701ab7efe683d1c3a0eb1c50b84" exitCode=0 Feb 26 22:22:38 crc kubenswrapper[4910]: I0226 22:22:38.645907 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l2qn4" event={"ID":"98f06109-326e-4c8c-9fa6-b2e49e593731","Type":"ContainerDied","Data":"7ee9f6b0fa800dacd5e6e980481db352fc225701ab7efe683d1c3a0eb1c50b84"} Feb 26 22:22:38 crc kubenswrapper[4910]: I0226 22:22:38.645939 4910 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-l2qn4" event={"ID":"98f06109-326e-4c8c-9fa6-b2e49e593731","Type":"ContainerDied","Data":"96bfc8ee31aea687bcaccd2166b78989ebbc32dd107c95920faf934e0347d0db"} Feb 26 22:22:38 crc kubenswrapper[4910]: I0226 22:22:38.645938 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l2qn4" Feb 26 22:22:38 crc kubenswrapper[4910]: I0226 22:22:38.645958 4910 scope.go:117] "RemoveContainer" containerID="7ee9f6b0fa800dacd5e6e980481db352fc225701ab7efe683d1c3a0eb1c50b84" Feb 26 22:22:38 crc kubenswrapper[4910]: I0226 22:22:38.673835 4910 scope.go:117] "RemoveContainer" containerID="eddd22afc0b72e24cec4b55e3753a3e826176deda68232d7bc056e6dd9425dc4" Feb 26 22:22:38 crc kubenswrapper[4910]: I0226 22:22:38.695495 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l2qn4"] Feb 26 22:22:38 crc kubenswrapper[4910]: I0226 22:22:38.714713 4910 scope.go:117] "RemoveContainer" containerID="9b5e8c41c8b7589ce4c0c12651cc4fa4e9b0d3f7362984b12456d6978e592d0b" Feb 26 22:22:38 crc kubenswrapper[4910]: I0226 22:22:38.715629 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l2qn4"] Feb 26 22:22:38 crc kubenswrapper[4910]: I0226 22:22:38.780547 4910 scope.go:117] "RemoveContainer" containerID="7ee9f6b0fa800dacd5e6e980481db352fc225701ab7efe683d1c3a0eb1c50b84" Feb 26 22:22:38 crc kubenswrapper[4910]: E0226 22:22:38.781050 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ee9f6b0fa800dacd5e6e980481db352fc225701ab7efe683d1c3a0eb1c50b84\": container with ID starting with 7ee9f6b0fa800dacd5e6e980481db352fc225701ab7efe683d1c3a0eb1c50b84 not found: ID does not exist" containerID="7ee9f6b0fa800dacd5e6e980481db352fc225701ab7efe683d1c3a0eb1c50b84" Feb 26 22:22:38 crc kubenswrapper[4910]: I0226 22:22:38.781121 4910 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ee9f6b0fa800dacd5e6e980481db352fc225701ab7efe683d1c3a0eb1c50b84"} err="failed to get container status \"7ee9f6b0fa800dacd5e6e980481db352fc225701ab7efe683d1c3a0eb1c50b84\": rpc error: code = NotFound desc = could not find container \"7ee9f6b0fa800dacd5e6e980481db352fc225701ab7efe683d1c3a0eb1c50b84\": container with ID starting with 7ee9f6b0fa800dacd5e6e980481db352fc225701ab7efe683d1c3a0eb1c50b84 not found: ID does not exist" Feb 26 22:22:38 crc kubenswrapper[4910]: I0226 22:22:38.781179 4910 scope.go:117] "RemoveContainer" containerID="eddd22afc0b72e24cec4b55e3753a3e826176deda68232d7bc056e6dd9425dc4" Feb 26 22:22:38 crc kubenswrapper[4910]: E0226 22:22:38.785497 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eddd22afc0b72e24cec4b55e3753a3e826176deda68232d7bc056e6dd9425dc4\": container with ID starting with eddd22afc0b72e24cec4b55e3753a3e826176deda68232d7bc056e6dd9425dc4 not found: ID does not exist" containerID="eddd22afc0b72e24cec4b55e3753a3e826176deda68232d7bc056e6dd9425dc4" Feb 26 22:22:38 crc kubenswrapper[4910]: I0226 22:22:38.785538 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eddd22afc0b72e24cec4b55e3753a3e826176deda68232d7bc056e6dd9425dc4"} err="failed to get container status \"eddd22afc0b72e24cec4b55e3753a3e826176deda68232d7bc056e6dd9425dc4\": rpc error: code = NotFound desc = could not find container \"eddd22afc0b72e24cec4b55e3753a3e826176deda68232d7bc056e6dd9425dc4\": container with ID starting with eddd22afc0b72e24cec4b55e3753a3e826176deda68232d7bc056e6dd9425dc4 not found: ID does not exist" Feb 26 22:22:38 crc kubenswrapper[4910]: I0226 22:22:38.785563 4910 scope.go:117] "RemoveContainer" containerID="9b5e8c41c8b7589ce4c0c12651cc4fa4e9b0d3f7362984b12456d6978e592d0b" Feb 26 22:22:38 crc kubenswrapper[4910]: E0226 
22:22:38.785917 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b5e8c41c8b7589ce4c0c12651cc4fa4e9b0d3f7362984b12456d6978e592d0b\": container with ID starting with 9b5e8c41c8b7589ce4c0c12651cc4fa4e9b0d3f7362984b12456d6978e592d0b not found: ID does not exist" containerID="9b5e8c41c8b7589ce4c0c12651cc4fa4e9b0d3f7362984b12456d6978e592d0b" Feb 26 22:22:38 crc kubenswrapper[4910]: I0226 22:22:38.785950 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b5e8c41c8b7589ce4c0c12651cc4fa4e9b0d3f7362984b12456d6978e592d0b"} err="failed to get container status \"9b5e8c41c8b7589ce4c0c12651cc4fa4e9b0d3f7362984b12456d6978e592d0b\": rpc error: code = NotFound desc = could not find container \"9b5e8c41c8b7589ce4c0c12651cc4fa4e9b0d3f7362984b12456d6978e592d0b\": container with ID starting with 9b5e8c41c8b7589ce4c0c12651cc4fa4e9b0d3f7362984b12456d6978e592d0b not found: ID does not exist" Feb 26 22:22:39 crc kubenswrapper[4910]: I0226 22:22:39.915147 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98f06109-326e-4c8c-9fa6-b2e49e593731" path="/var/lib/kubelet/pods/98f06109-326e-4c8c-9fa6-b2e49e593731/volumes" Feb 26 22:23:31 crc kubenswrapper[4910]: I0226 22:23:31.223840 4910 scope.go:117] "RemoveContainer" containerID="bd68f29d52f32a5d4c24e680695a8aebb59a743998602c5a4f288558399a49af" Feb 26 22:23:31 crc kubenswrapper[4910]: I0226 22:23:31.269610 4910 scope.go:117] "RemoveContainer" containerID="c5c8fc95e5d919a36830541cfe3dbe1a28cd76416cc1c384d7175ddc6a0c0653" Feb 26 22:23:31 crc kubenswrapper[4910]: I0226 22:23:31.439605 4910 scope.go:117] "RemoveContainer" containerID="23561c0500be695516e100d5358f0b566e186eaa98c0706a936d2f051fc12b77" Feb 26 22:23:55 crc kubenswrapper[4910]: I0226 22:23:55.727034 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:23:55 crc kubenswrapper[4910]: I0226 22:23:55.727746 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:24:00 crc kubenswrapper[4910]: I0226 22:24:00.150593 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535744-47czd"] Feb 26 22:24:00 crc kubenswrapper[4910]: E0226 22:24:00.151684 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f06109-326e-4c8c-9fa6-b2e49e593731" containerName="registry-server" Feb 26 22:24:00 crc kubenswrapper[4910]: I0226 22:24:00.151705 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f06109-326e-4c8c-9fa6-b2e49e593731" containerName="registry-server" Feb 26 22:24:00 crc kubenswrapper[4910]: E0226 22:24:00.151736 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f06109-326e-4c8c-9fa6-b2e49e593731" containerName="extract-content" Feb 26 22:24:00 crc kubenswrapper[4910]: I0226 22:24:00.151749 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f06109-326e-4c8c-9fa6-b2e49e593731" containerName="extract-content" Feb 26 22:24:00 crc kubenswrapper[4910]: E0226 22:24:00.151769 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f06109-326e-4c8c-9fa6-b2e49e593731" containerName="extract-utilities" Feb 26 22:24:00 crc kubenswrapper[4910]: I0226 22:24:00.151782 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f06109-326e-4c8c-9fa6-b2e49e593731" containerName="extract-utilities" Feb 26 22:24:00 crc kubenswrapper[4910]: I0226 
22:24:00.152127 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="98f06109-326e-4c8c-9fa6-b2e49e593731" containerName="registry-server" Feb 26 22:24:00 crc kubenswrapper[4910]: I0226 22:24:00.153428 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535744-47czd" Feb 26 22:24:00 crc kubenswrapper[4910]: I0226 22:24:00.157105 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 22:24:00 crc kubenswrapper[4910]: I0226 22:24:00.157884 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 22:24:00 crc kubenswrapper[4910]: I0226 22:24:00.158115 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 22:24:00 crc kubenswrapper[4910]: I0226 22:24:00.171462 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535744-47czd"] Feb 26 22:24:00 crc kubenswrapper[4910]: I0226 22:24:00.258930 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j5l9\" (UniqueName: \"kubernetes.io/projected/a2cf5166-5c36-4301-80c2-774c31060096-kube-api-access-4j5l9\") pod \"auto-csr-approver-29535744-47czd\" (UID: \"a2cf5166-5c36-4301-80c2-774c31060096\") " pod="openshift-infra/auto-csr-approver-29535744-47czd" Feb 26 22:24:00 crc kubenswrapper[4910]: I0226 22:24:00.361263 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j5l9\" (UniqueName: \"kubernetes.io/projected/a2cf5166-5c36-4301-80c2-774c31060096-kube-api-access-4j5l9\") pod \"auto-csr-approver-29535744-47czd\" (UID: \"a2cf5166-5c36-4301-80c2-774c31060096\") " pod="openshift-infra/auto-csr-approver-29535744-47czd" Feb 26 22:24:00 crc kubenswrapper[4910]: I0226 22:24:00.389556 4910 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j5l9\" (UniqueName: \"kubernetes.io/projected/a2cf5166-5c36-4301-80c2-774c31060096-kube-api-access-4j5l9\") pod \"auto-csr-approver-29535744-47czd\" (UID: \"a2cf5166-5c36-4301-80c2-774c31060096\") " pod="openshift-infra/auto-csr-approver-29535744-47czd" Feb 26 22:24:00 crc kubenswrapper[4910]: I0226 22:24:00.474042 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535744-47czd" Feb 26 22:24:00 crc kubenswrapper[4910]: I0226 22:24:00.978529 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535744-47czd"] Feb 26 22:24:01 crc kubenswrapper[4910]: I0226 22:24:01.626491 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535744-47czd" event={"ID":"a2cf5166-5c36-4301-80c2-774c31060096","Type":"ContainerStarted","Data":"bd073b06461358e4fa5ef49987b0f70b5262ff69e6470407ef19f8d586f24358"} Feb 26 22:24:02 crc kubenswrapper[4910]: I0226 22:24:02.642510 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535744-47czd" event={"ID":"a2cf5166-5c36-4301-80c2-774c31060096","Type":"ContainerStarted","Data":"0f7481aa2295319bdc4e7f4799721d0afb5711dc6afdd102e63ed4f6e9c0471b"} Feb 26 22:24:02 crc kubenswrapper[4910]: I0226 22:24:02.669743 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535744-47czd" podStartSLOduration=1.71820487 podStartE2EDuration="2.6697161s" podCreationTimestamp="2026-02-26 22:24:00 +0000 UTC" firstStartedPulling="2026-02-26 22:24:00.989737318 +0000 UTC m=+1726.069227859" lastFinishedPulling="2026-02-26 22:24:01.941248538 +0000 UTC m=+1727.020739089" observedRunningTime="2026-02-26 22:24:02.665780143 +0000 UTC m=+1727.745270694" watchObservedRunningTime="2026-02-26 22:24:02.6697161 +0000 UTC m=+1727.749206681" Feb 26 
22:24:03 crc kubenswrapper[4910]: I0226 22:24:03.657194 4910 generic.go:334] "Generic (PLEG): container finished" podID="a2cf5166-5c36-4301-80c2-774c31060096" containerID="0f7481aa2295319bdc4e7f4799721d0afb5711dc6afdd102e63ed4f6e9c0471b" exitCode=0 Feb 26 22:24:03 crc kubenswrapper[4910]: I0226 22:24:03.657263 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535744-47czd" event={"ID":"a2cf5166-5c36-4301-80c2-774c31060096","Type":"ContainerDied","Data":"0f7481aa2295319bdc4e7f4799721d0afb5711dc6afdd102e63ed4f6e9c0471b"} Feb 26 22:24:05 crc kubenswrapper[4910]: I0226 22:24:05.240615 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535744-47czd" Feb 26 22:24:05 crc kubenswrapper[4910]: I0226 22:24:05.386613 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j5l9\" (UniqueName: \"kubernetes.io/projected/a2cf5166-5c36-4301-80c2-774c31060096-kube-api-access-4j5l9\") pod \"a2cf5166-5c36-4301-80c2-774c31060096\" (UID: \"a2cf5166-5c36-4301-80c2-774c31060096\") " Feb 26 22:24:05 crc kubenswrapper[4910]: I0226 22:24:05.394673 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2cf5166-5c36-4301-80c2-774c31060096-kube-api-access-4j5l9" (OuterVolumeSpecName: "kube-api-access-4j5l9") pod "a2cf5166-5c36-4301-80c2-774c31060096" (UID: "a2cf5166-5c36-4301-80c2-774c31060096"). InnerVolumeSpecName "kube-api-access-4j5l9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:24:05 crc kubenswrapper[4910]: I0226 22:24:05.490900 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j5l9\" (UniqueName: \"kubernetes.io/projected/a2cf5166-5c36-4301-80c2-774c31060096-kube-api-access-4j5l9\") on node \"crc\" DevicePath \"\"" Feb 26 22:24:05 crc kubenswrapper[4910]: I0226 22:24:05.689684 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535744-47czd" event={"ID":"a2cf5166-5c36-4301-80c2-774c31060096","Type":"ContainerDied","Data":"bd073b06461358e4fa5ef49987b0f70b5262ff69e6470407ef19f8d586f24358"} Feb 26 22:24:05 crc kubenswrapper[4910]: I0226 22:24:05.689760 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd073b06461358e4fa5ef49987b0f70b5262ff69e6470407ef19f8d586f24358" Feb 26 22:24:05 crc kubenswrapper[4910]: I0226 22:24:05.689872 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535744-47czd" Feb 26 22:24:05 crc kubenswrapper[4910]: I0226 22:24:05.778925 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535738-725f8"] Feb 26 22:24:05 crc kubenswrapper[4910]: I0226 22:24:05.791116 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535738-725f8"] Feb 26 22:24:05 crc kubenswrapper[4910]: I0226 22:24:05.916515 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="427eef92-e6a1-48a4-99e1-98a78c269555" path="/var/lib/kubelet/pods/427eef92-e6a1-48a4-99e1-98a78c269555/volumes" Feb 26 22:24:25 crc kubenswrapper[4910]: I0226 22:24:25.727755 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 26 22:24:25 crc kubenswrapper[4910]: I0226 22:24:25.728368 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:24:31 crc kubenswrapper[4910]: I0226 22:24:31.664594 4910 scope.go:117] "RemoveContainer" containerID="58fb336d56abeada7aa3988e36e57a60f87372e8fc0fbefdedc16e65e5bd2be9" Feb 26 22:24:31 crc kubenswrapper[4910]: I0226 22:24:31.735533 4910 scope.go:117] "RemoveContainer" containerID="f6d077132a0e46fb64a9ec3b4e3d969c97022d5ab74f6cc64d48f0817755c4b1" Feb 26 22:24:31 crc kubenswrapper[4910]: I0226 22:24:31.773033 4910 scope.go:117] "RemoveContainer" containerID="0430a4d83fcac8ffb1e4618fe687cba42e610f121aaea2f823b9ccda1a479579" Feb 26 22:24:31 crc kubenswrapper[4910]: I0226 22:24:31.823548 4910 scope.go:117] "RemoveContainer" containerID="fbfb5c31dbc628fe128d7b1dc0d8026139c0d7c3f1dbb5f9339215a8a3f15e0f" Feb 26 22:24:31 crc kubenswrapper[4910]: I0226 22:24:31.882406 4910 scope.go:117] "RemoveContainer" containerID="a9faa3beeb89e54f6198f575b430ba30d8029de2e9e2367d3dfc1f745b400cfc" Feb 26 22:24:31 crc kubenswrapper[4910]: I0226 22:24:31.907616 4910 scope.go:117] "RemoveContainer" containerID="998266ec0d2d40dabae2acdf29d95fabda8d4512cc367b4fdc9eee06bdbd4aee" Feb 26 22:24:31 crc kubenswrapper[4910]: I0226 22:24:31.928580 4910 scope.go:117] "RemoveContainer" containerID="3cacdacff8e1199ddf5f9d3d2c14aea84c9ae3197afd95e78c2492f2cc925c05" Feb 26 22:24:55 crc kubenswrapper[4910]: I0226 22:24:55.727683 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 26 22:24:55 crc kubenswrapper[4910]: I0226 22:24:55.728332 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:24:55 crc kubenswrapper[4910]: I0226 22:24:55.728388 4910 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" Feb 26 22:24:55 crc kubenswrapper[4910]: I0226 22:24:55.729308 4910 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6"} pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 22:24:55 crc kubenswrapper[4910]: I0226 22:24:55.729400 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" containerID="cri-o://a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6" gracePeriod=600 Feb 26 22:24:55 crc kubenswrapper[4910]: E0226 22:24:55.850682 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:24:56 crc kubenswrapper[4910]: 
I0226 22:24:56.303307 4910 generic.go:334] "Generic (PLEG): container finished" podID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerID="a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6" exitCode=0 Feb 26 22:24:56 crc kubenswrapper[4910]: I0226 22:24:56.303405 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerDied","Data":"a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6"} Feb 26 22:24:56 crc kubenswrapper[4910]: I0226 22:24:56.303780 4910 scope.go:117] "RemoveContainer" containerID="c9e7e9afe0afc45cb3107605182e65bb0e883988c1f2cfa35e317e7033cca07c" Feb 26 22:24:56 crc kubenswrapper[4910]: I0226 22:24:56.304815 4910 scope.go:117] "RemoveContainer" containerID="a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6" Feb 26 22:24:56 crc kubenswrapper[4910]: E0226 22:24:56.305369 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:25:10 crc kubenswrapper[4910]: I0226 22:25:10.902088 4910 scope.go:117] "RemoveContainer" containerID="a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6" Feb 26 22:25:10 crc kubenswrapper[4910]: E0226 22:25:10.903067 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:25:25 crc kubenswrapper[4910]: I0226 22:25:25.648739 4910 generic.go:334] "Generic (PLEG): container finished" podID="dbad4d26-7d58-4969-a25f-6b67dc18b9e9" containerID="b50806cd2c9dc7dc2b32a261d36a9fe902673f3d810c7ee2a6ac8d5b1c178449" exitCode=0 Feb 26 22:25:25 crc kubenswrapper[4910]: I0226 22:25:25.648826 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m" event={"ID":"dbad4d26-7d58-4969-a25f-6b67dc18b9e9","Type":"ContainerDied","Data":"b50806cd2c9dc7dc2b32a261d36a9fe902673f3d810c7ee2a6ac8d5b1c178449"} Feb 26 22:25:25 crc kubenswrapper[4910]: I0226 22:25:25.911369 4910 scope.go:117] "RemoveContainer" containerID="a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6" Feb 26 22:25:25 crc kubenswrapper[4910]: E0226 22:25:25.912189 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:25:27 crc kubenswrapper[4910]: I0226 22:25:27.188188 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m" Feb 26 22:25:27 crc kubenswrapper[4910]: I0226 22:25:27.287988 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbad4d26-7d58-4969-a25f-6b67dc18b9e9-inventory\") pod \"dbad4d26-7d58-4969-a25f-6b67dc18b9e9\" (UID: \"dbad4d26-7d58-4969-a25f-6b67dc18b9e9\") " Feb 26 22:25:27 crc kubenswrapper[4910]: I0226 22:25:27.288270 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbad4d26-7d58-4969-a25f-6b67dc18b9e9-ssh-key-openstack-edpm-ipam\") pod \"dbad4d26-7d58-4969-a25f-6b67dc18b9e9\" (UID: \"dbad4d26-7d58-4969-a25f-6b67dc18b9e9\") " Feb 26 22:25:27 crc kubenswrapper[4910]: I0226 22:25:27.288332 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4m6s\" (UniqueName: \"kubernetes.io/projected/dbad4d26-7d58-4969-a25f-6b67dc18b9e9-kube-api-access-b4m6s\") pod \"dbad4d26-7d58-4969-a25f-6b67dc18b9e9\" (UID: \"dbad4d26-7d58-4969-a25f-6b67dc18b9e9\") " Feb 26 22:25:27 crc kubenswrapper[4910]: I0226 22:25:27.288380 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbad4d26-7d58-4969-a25f-6b67dc18b9e9-bootstrap-combined-ca-bundle\") pod \"dbad4d26-7d58-4969-a25f-6b67dc18b9e9\" (UID: \"dbad4d26-7d58-4969-a25f-6b67dc18b9e9\") " Feb 26 22:25:27 crc kubenswrapper[4910]: I0226 22:25:27.298423 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbad4d26-7d58-4969-a25f-6b67dc18b9e9-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "dbad4d26-7d58-4969-a25f-6b67dc18b9e9" (UID: "dbad4d26-7d58-4969-a25f-6b67dc18b9e9"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:25:27 crc kubenswrapper[4910]: I0226 22:25:27.298528 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbad4d26-7d58-4969-a25f-6b67dc18b9e9-kube-api-access-b4m6s" (OuterVolumeSpecName: "kube-api-access-b4m6s") pod "dbad4d26-7d58-4969-a25f-6b67dc18b9e9" (UID: "dbad4d26-7d58-4969-a25f-6b67dc18b9e9"). InnerVolumeSpecName "kube-api-access-b4m6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:25:27 crc kubenswrapper[4910]: I0226 22:25:27.325468 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbad4d26-7d58-4969-a25f-6b67dc18b9e9-inventory" (OuterVolumeSpecName: "inventory") pod "dbad4d26-7d58-4969-a25f-6b67dc18b9e9" (UID: "dbad4d26-7d58-4969-a25f-6b67dc18b9e9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:25:27 crc kubenswrapper[4910]: I0226 22:25:27.336609 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbad4d26-7d58-4969-a25f-6b67dc18b9e9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dbad4d26-7d58-4969-a25f-6b67dc18b9e9" (UID: "dbad4d26-7d58-4969-a25f-6b67dc18b9e9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:25:27 crc kubenswrapper[4910]: I0226 22:25:27.398219 4910 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbad4d26-7d58-4969-a25f-6b67dc18b9e9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 22:25:27 crc kubenswrapper[4910]: I0226 22:25:27.398269 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4m6s\" (UniqueName: \"kubernetes.io/projected/dbad4d26-7d58-4969-a25f-6b67dc18b9e9-kube-api-access-b4m6s\") on node \"crc\" DevicePath \"\"" Feb 26 22:25:27 crc kubenswrapper[4910]: I0226 22:25:27.398283 4910 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbad4d26-7d58-4969-a25f-6b67dc18b9e9-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:25:27 crc kubenswrapper[4910]: I0226 22:25:27.398296 4910 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbad4d26-7d58-4969-a25f-6b67dc18b9e9-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 22:25:27 crc kubenswrapper[4910]: I0226 22:25:27.671570 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m" event={"ID":"dbad4d26-7d58-4969-a25f-6b67dc18b9e9","Type":"ContainerDied","Data":"9f6f023592c42a20aeec84302c6e8c9ba48b334e7a6b73ee6d98ea38b5f89aff"} Feb 26 22:25:27 crc kubenswrapper[4910]: I0226 22:25:27.671615 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m" Feb 26 22:25:27 crc kubenswrapper[4910]: I0226 22:25:27.671618 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f6f023592c42a20aeec84302c6e8c9ba48b334e7a6b73ee6d98ea38b5f89aff" Feb 26 22:25:27 crc kubenswrapper[4910]: I0226 22:25:27.771219 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt"] Feb 26 22:25:27 crc kubenswrapper[4910]: E0226 22:25:27.771612 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbad4d26-7d58-4969-a25f-6b67dc18b9e9" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 26 22:25:27 crc kubenswrapper[4910]: I0226 22:25:27.771627 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbad4d26-7d58-4969-a25f-6b67dc18b9e9" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 26 22:25:27 crc kubenswrapper[4910]: E0226 22:25:27.771669 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2cf5166-5c36-4301-80c2-774c31060096" containerName="oc" Feb 26 22:25:27 crc kubenswrapper[4910]: I0226 22:25:27.771676 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2cf5166-5c36-4301-80c2-774c31060096" containerName="oc" Feb 26 22:25:27 crc kubenswrapper[4910]: I0226 22:25:27.771852 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2cf5166-5c36-4301-80c2-774c31060096" containerName="oc" Feb 26 22:25:27 crc kubenswrapper[4910]: I0226 22:25:27.771875 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbad4d26-7d58-4969-a25f-6b67dc18b9e9" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 26 22:25:27 crc kubenswrapper[4910]: I0226 22:25:27.772591 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt" Feb 26 22:25:27 crc kubenswrapper[4910]: I0226 22:25:27.774252 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 22:25:27 crc kubenswrapper[4910]: I0226 22:25:27.774259 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 22:25:27 crc kubenswrapper[4910]: I0226 22:25:27.774431 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 22:25:27 crc kubenswrapper[4910]: I0226 22:25:27.778826 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ktmgl" Feb 26 22:25:27 crc kubenswrapper[4910]: I0226 22:25:27.786703 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt"] Feb 26 22:25:27 crc kubenswrapper[4910]: I0226 22:25:27.907948 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90647bce-161d-4a56-86bc-662a69916664-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt\" (UID: \"90647bce-161d-4a56-86bc-662a69916664\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt" Feb 26 22:25:27 crc kubenswrapper[4910]: I0226 22:25:27.908043 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz4dx\" (UniqueName: \"kubernetes.io/projected/90647bce-161d-4a56-86bc-662a69916664-kube-api-access-kz4dx\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt\" (UID: \"90647bce-161d-4a56-86bc-662a69916664\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt" Feb 26 22:25:27 crc 
kubenswrapper[4910]: I0226 22:25:27.908118 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90647bce-161d-4a56-86bc-662a69916664-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt\" (UID: \"90647bce-161d-4a56-86bc-662a69916664\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt" Feb 26 22:25:28 crc kubenswrapper[4910]: I0226 22:25:28.010538 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90647bce-161d-4a56-86bc-662a69916664-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt\" (UID: \"90647bce-161d-4a56-86bc-662a69916664\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt" Feb 26 22:25:28 crc kubenswrapper[4910]: I0226 22:25:28.010648 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz4dx\" (UniqueName: \"kubernetes.io/projected/90647bce-161d-4a56-86bc-662a69916664-kube-api-access-kz4dx\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt\" (UID: \"90647bce-161d-4a56-86bc-662a69916664\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt" Feb 26 22:25:28 crc kubenswrapper[4910]: I0226 22:25:28.010749 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90647bce-161d-4a56-86bc-662a69916664-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt\" (UID: \"90647bce-161d-4a56-86bc-662a69916664\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt" Feb 26 22:25:28 crc kubenswrapper[4910]: I0226 22:25:28.015270 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/90647bce-161d-4a56-86bc-662a69916664-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt\" (UID: \"90647bce-161d-4a56-86bc-662a69916664\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt" Feb 26 22:25:28 crc kubenswrapper[4910]: I0226 22:25:28.015829 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90647bce-161d-4a56-86bc-662a69916664-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt\" (UID: \"90647bce-161d-4a56-86bc-662a69916664\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt" Feb 26 22:25:28 crc kubenswrapper[4910]: I0226 22:25:28.030234 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz4dx\" (UniqueName: \"kubernetes.io/projected/90647bce-161d-4a56-86bc-662a69916664-kube-api-access-kz4dx\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt\" (UID: \"90647bce-161d-4a56-86bc-662a69916664\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt" Feb 26 22:25:28 crc kubenswrapper[4910]: I0226 22:25:28.088923 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt" Feb 26 22:25:28 crc kubenswrapper[4910]: I0226 22:25:28.694617 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt"] Feb 26 22:25:29 crc kubenswrapper[4910]: I0226 22:25:29.698640 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt" event={"ID":"90647bce-161d-4a56-86bc-662a69916664","Type":"ContainerStarted","Data":"5c1aad427c4c85b820b3fbb840f910d55a4d9cc28f66a5337829a6ac6a40502e"} Feb 26 22:25:29 crc kubenswrapper[4910]: I0226 22:25:29.699017 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt" event={"ID":"90647bce-161d-4a56-86bc-662a69916664","Type":"ContainerStarted","Data":"66e815d0be8238201af19271a2d8097037493145661004f3ac1b054c92dd4e3d"} Feb 26 22:25:29 crc kubenswrapper[4910]: I0226 22:25:29.721849 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt" podStartSLOduration=2.29684714 podStartE2EDuration="2.721831902s" podCreationTimestamp="2026-02-26 22:25:27 +0000 UTC" firstStartedPulling="2026-02-26 22:25:28.700473485 +0000 UTC m=+1813.779964046" lastFinishedPulling="2026-02-26 22:25:29.125458277 +0000 UTC m=+1814.204948808" observedRunningTime="2026-02-26 22:25:29.716363804 +0000 UTC m=+1814.795854385" watchObservedRunningTime="2026-02-26 22:25:29.721831902 +0000 UTC m=+1814.801322443" Feb 26 22:25:32 crc kubenswrapper[4910]: I0226 22:25:32.076319 4910 scope.go:117] "RemoveContainer" containerID="18d69e7098ec56c72b16ab1ed1eb56be02ee031957db05722e7bfea4291dcc62" Feb 26 22:25:36 crc kubenswrapper[4910]: I0226 22:25:36.901175 4910 scope.go:117] "RemoveContainer" containerID="a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6" Feb 26 
22:25:36 crc kubenswrapper[4910]: E0226 22:25:36.901923 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:25:51 crc kubenswrapper[4910]: I0226 22:25:51.902211 4910 scope.go:117] "RemoveContainer" containerID="a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6" Feb 26 22:25:51 crc kubenswrapper[4910]: E0226 22:25:51.902998 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:26:00 crc kubenswrapper[4910]: I0226 22:26:00.169496 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535746-j488b"] Feb 26 22:26:00 crc kubenswrapper[4910]: I0226 22:26:00.172403 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535746-j488b" Feb 26 22:26:00 crc kubenswrapper[4910]: I0226 22:26:00.175208 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 22:26:00 crc kubenswrapper[4910]: I0226 22:26:00.175392 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 22:26:00 crc kubenswrapper[4910]: I0226 22:26:00.175653 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 22:26:00 crc kubenswrapper[4910]: I0226 22:26:00.197834 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535746-j488b"] Feb 26 22:26:00 crc kubenswrapper[4910]: I0226 22:26:00.292797 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnwz6\" (UniqueName: \"kubernetes.io/projected/01704631-8b12-4b09-932f-fd922af31259-kube-api-access-bnwz6\") pod \"auto-csr-approver-29535746-j488b\" (UID: \"01704631-8b12-4b09-932f-fd922af31259\") " pod="openshift-infra/auto-csr-approver-29535746-j488b" Feb 26 22:26:00 crc kubenswrapper[4910]: I0226 22:26:00.395472 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnwz6\" (UniqueName: \"kubernetes.io/projected/01704631-8b12-4b09-932f-fd922af31259-kube-api-access-bnwz6\") pod \"auto-csr-approver-29535746-j488b\" (UID: \"01704631-8b12-4b09-932f-fd922af31259\") " pod="openshift-infra/auto-csr-approver-29535746-j488b" Feb 26 22:26:00 crc kubenswrapper[4910]: I0226 22:26:00.427941 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnwz6\" (UniqueName: \"kubernetes.io/projected/01704631-8b12-4b09-932f-fd922af31259-kube-api-access-bnwz6\") pod \"auto-csr-approver-29535746-j488b\" (UID: \"01704631-8b12-4b09-932f-fd922af31259\") " 
pod="openshift-infra/auto-csr-approver-29535746-j488b" Feb 26 22:26:00 crc kubenswrapper[4910]: I0226 22:26:00.495520 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535746-j488b" Feb 26 22:26:01 crc kubenswrapper[4910]: I0226 22:26:01.005981 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535746-j488b"] Feb 26 22:26:01 crc kubenswrapper[4910]: I0226 22:26:01.090981 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535746-j488b" event={"ID":"01704631-8b12-4b09-932f-fd922af31259","Type":"ContainerStarted","Data":"c07b8ec8ed2835aa285580e4dee85c50ff808d5469792abc7d39d0b504e34e32"} Feb 26 22:26:03 crc kubenswrapper[4910]: I0226 22:26:03.132698 4910 generic.go:334] "Generic (PLEG): container finished" podID="01704631-8b12-4b09-932f-fd922af31259" containerID="6fe2febede6855a498f91797447efdd4452ed4eaa7b76963b937feccfe0189cc" exitCode=0 Feb 26 22:26:03 crc kubenswrapper[4910]: I0226 22:26:03.132771 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535746-j488b" event={"ID":"01704631-8b12-4b09-932f-fd922af31259","Type":"ContainerDied","Data":"6fe2febede6855a498f91797447efdd4452ed4eaa7b76963b937feccfe0189cc"} Feb 26 22:26:04 crc kubenswrapper[4910]: I0226 22:26:04.623351 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535746-j488b" Feb 26 22:26:04 crc kubenswrapper[4910]: I0226 22:26:04.707610 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnwz6\" (UniqueName: \"kubernetes.io/projected/01704631-8b12-4b09-932f-fd922af31259-kube-api-access-bnwz6\") pod \"01704631-8b12-4b09-932f-fd922af31259\" (UID: \"01704631-8b12-4b09-932f-fd922af31259\") " Feb 26 22:26:04 crc kubenswrapper[4910]: I0226 22:26:04.714570 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01704631-8b12-4b09-932f-fd922af31259-kube-api-access-bnwz6" (OuterVolumeSpecName: "kube-api-access-bnwz6") pod "01704631-8b12-4b09-932f-fd922af31259" (UID: "01704631-8b12-4b09-932f-fd922af31259"). InnerVolumeSpecName "kube-api-access-bnwz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:26:04 crc kubenswrapper[4910]: I0226 22:26:04.810859 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnwz6\" (UniqueName: \"kubernetes.io/projected/01704631-8b12-4b09-932f-fd922af31259-kube-api-access-bnwz6\") on node \"crc\" DevicePath \"\"" Feb 26 22:26:05 crc kubenswrapper[4910]: I0226 22:26:05.153472 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535746-j488b" event={"ID":"01704631-8b12-4b09-932f-fd922af31259","Type":"ContainerDied","Data":"c07b8ec8ed2835aa285580e4dee85c50ff808d5469792abc7d39d0b504e34e32"} Feb 26 22:26:05 crc kubenswrapper[4910]: I0226 22:26:05.153512 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c07b8ec8ed2835aa285580e4dee85c50ff808d5469792abc7d39d0b504e34e32" Feb 26 22:26:05 crc kubenswrapper[4910]: I0226 22:26:05.153534 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535746-j488b" Feb 26 22:26:05 crc kubenswrapper[4910]: I0226 22:26:05.749105 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535740-7mkh8"] Feb 26 22:26:05 crc kubenswrapper[4910]: I0226 22:26:05.770967 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535740-7mkh8"] Feb 26 22:26:05 crc kubenswrapper[4910]: I0226 22:26:05.909565 4910 scope.go:117] "RemoveContainer" containerID="a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6" Feb 26 22:26:05 crc kubenswrapper[4910]: E0226 22:26:05.909954 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:26:05 crc kubenswrapper[4910]: I0226 22:26:05.913791 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01a76a37-081d-4322-afc4-a3ba75ebfabe" path="/var/lib/kubelet/pods/01a76a37-081d-4322-afc4-a3ba75ebfabe/volumes" Feb 26 22:26:08 crc kubenswrapper[4910]: I0226 22:26:08.057707 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-xztls"] Feb 26 22:26:08 crc kubenswrapper[4910]: I0226 22:26:08.073191 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-xztls"] Feb 26 22:26:09 crc kubenswrapper[4910]: I0226 22:26:09.919468 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3e4d9b8-abdb-4f3c-8e33-334917a86288" path="/var/lib/kubelet/pods/d3e4d9b8-abdb-4f3c-8e33-334917a86288/volumes" Feb 26 22:26:10 crc kubenswrapper[4910]: I0226 22:26:10.047950 4910 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-bf28-account-create-update-8q7x7"] Feb 26 22:26:10 crc kubenswrapper[4910]: I0226 22:26:10.062450 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-bf28-account-create-update-8q7x7"] Feb 26 22:26:11 crc kubenswrapper[4910]: I0226 22:26:11.925671 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32ca2c64-13ce-46ee-be2e-a54d05e5a626" path="/var/lib/kubelet/pods/32ca2c64-13ce-46ee-be2e-a54d05e5a626/volumes" Feb 26 22:26:12 crc kubenswrapper[4910]: I0226 22:26:12.042478 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-qt2rz"] Feb 26 22:26:12 crc kubenswrapper[4910]: I0226 22:26:12.051841 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-qt2rz"] Feb 26 22:26:12 crc kubenswrapper[4910]: I0226 22:26:12.060926 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-9792f"] Feb 26 22:26:12 crc kubenswrapper[4910]: I0226 22:26:12.070440 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-78c5-account-create-update-cxn9r"] Feb 26 22:26:12 crc kubenswrapper[4910]: I0226 22:26:12.080796 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6328-account-create-update-r69jh"] Feb 26 22:26:12 crc kubenswrapper[4910]: I0226 22:26:12.092279 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-9792f"] Feb 26 22:26:12 crc kubenswrapper[4910]: I0226 22:26:12.105666 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6328-account-create-update-r69jh"] Feb 26 22:26:12 crc kubenswrapper[4910]: I0226 22:26:12.117967 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-78c5-account-create-update-cxn9r"] Feb 26 22:26:13 crc kubenswrapper[4910]: I0226 22:26:13.928397 4910 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="06a47034-85d8-4a6d-983b-cf69ea88a122" path="/var/lib/kubelet/pods/06a47034-85d8-4a6d-983b-cf69ea88a122/volumes" Feb 26 22:26:13 crc kubenswrapper[4910]: I0226 22:26:13.929610 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ec61180-421b-4df0-8cd0-5cc207b4a179" path="/var/lib/kubelet/pods/0ec61180-421b-4df0-8cd0-5cc207b4a179/volumes" Feb 26 22:26:13 crc kubenswrapper[4910]: I0226 22:26:13.930799 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a08a89c-9d68-445e-bd75-be72757906a6" path="/var/lib/kubelet/pods/2a08a89c-9d68-445e-bd75-be72757906a6/volumes" Feb 26 22:26:13 crc kubenswrapper[4910]: I0226 22:26:13.931952 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0775bf3-efcb-489c-acfb-5bd1ee95391a" path="/var/lib/kubelet/pods/e0775bf3-efcb-489c-acfb-5bd1ee95391a/volumes" Feb 26 22:26:20 crc kubenswrapper[4910]: I0226 22:26:20.903089 4910 scope.go:117] "RemoveContainer" containerID="a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6" Feb 26 22:26:20 crc kubenswrapper[4910]: E0226 22:26:20.904277 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:26:32 crc kubenswrapper[4910]: I0226 22:26:32.170100 4910 scope.go:117] "RemoveContainer" containerID="5eec933bf249a6e0f0fc459720399f2243e9c87f01449546b32af204e08cef34" Feb 26 22:26:32 crc kubenswrapper[4910]: I0226 22:26:32.201497 4910 scope.go:117] "RemoveContainer" containerID="8c551a8eaa71eada9aceefbdce018e9a3105b37d91943ef1c1fbdb3fba511f8a" Feb 26 22:26:32 crc kubenswrapper[4910]: I0226 22:26:32.266905 4910 scope.go:117] 
"RemoveContainer" containerID="4cfc3fec06de1a50a8b91b6b283814165f4cd05033370fbc806b60f1e586810e" Feb 26 22:26:32 crc kubenswrapper[4910]: I0226 22:26:32.311061 4910 scope.go:117] "RemoveContainer" containerID="5e23c150362cd9486c9c8fcf05005bcc3c8f2600335c5f082bd1598985a6cfac" Feb 26 22:26:32 crc kubenswrapper[4910]: I0226 22:26:32.434552 4910 scope.go:117] "RemoveContainer" containerID="eca0c21d5af38389244cca16895996204e1b81bf95c177256a87f37fe0e03d5b" Feb 26 22:26:32 crc kubenswrapper[4910]: I0226 22:26:32.461930 4910 scope.go:117] "RemoveContainer" containerID="8179d6c7541fc46696b54766751182e9dc2c4679515d9e7c306c64de7a6471ec" Feb 26 22:26:32 crc kubenswrapper[4910]: I0226 22:26:32.507456 4910 scope.go:117] "RemoveContainer" containerID="f0ca4eed2eed2a85f5e4f1bac0a3304daaa4aa354181440a9bb63b2489348e85" Feb 26 22:26:32 crc kubenswrapper[4910]: I0226 22:26:32.901771 4910 scope.go:117] "RemoveContainer" containerID="a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6" Feb 26 22:26:32 crc kubenswrapper[4910]: E0226 22:26:32.902476 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:26:41 crc kubenswrapper[4910]: I0226 22:26:41.078870 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-w8qxv"] Feb 26 22:26:41 crc kubenswrapper[4910]: I0226 22:26:41.096068 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-w8qxv"] Feb 26 22:26:41 crc kubenswrapper[4910]: I0226 22:26:41.110760 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-create-d7gr8"] Feb 26 22:26:41 crc 
kubenswrapper[4910]: I0226 22:26:41.121096 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-5c27-account-create-update-xtbjg"] Feb 26 22:26:41 crc kubenswrapper[4910]: I0226 22:26:41.129587 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-dc6mx"] Feb 26 22:26:41 crc kubenswrapper[4910]: I0226 22:26:41.138289 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-711e-account-create-update-dnbzz"] Feb 26 22:26:41 crc kubenswrapper[4910]: I0226 22:26:41.147868 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-k7hrg"] Feb 26 22:26:41 crc kubenswrapper[4910]: I0226 22:26:41.158265 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-0f00-account-create-update-6lkbt"] Feb 26 22:26:41 crc kubenswrapper[4910]: I0226 22:26:41.167795 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-create-d7gr8"] Feb 26 22:26:41 crc kubenswrapper[4910]: I0226 22:26:41.176221 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-k7hrg"] Feb 26 22:26:41 crc kubenswrapper[4910]: I0226 22:26:41.184356 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-0f00-account-create-update-6lkbt"] Feb 26 22:26:41 crc kubenswrapper[4910]: I0226 22:26:41.192669 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-711e-account-create-update-dnbzz"] Feb 26 22:26:41 crc kubenswrapper[4910]: I0226 22:26:41.200484 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-5c27-account-create-update-xtbjg"] Feb 26 22:26:41 crc kubenswrapper[4910]: I0226 22:26:41.209931 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-dc6mx"] Feb 26 22:26:41 crc kubenswrapper[4910]: I0226 22:26:41.218604 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-10a3-account-create-update-slfn2"] Feb 26 22:26:41 crc kubenswrapper[4910]: I0226 22:26:41.226508 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-10a3-account-create-update-slfn2"] Feb 26 22:26:41 crc kubenswrapper[4910]: I0226 22:26:41.925660 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f83ef0b-ac5c-47dd-a763-66b3c7f31391" path="/var/lib/kubelet/pods/3f83ef0b-ac5c-47dd-a763-66b3c7f31391/volumes" Feb 26 22:26:41 crc kubenswrapper[4910]: I0226 22:26:41.926985 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52ce7064-53c4-4861-a60a-996a62f24e55" path="/var/lib/kubelet/pods/52ce7064-53c4-4861-a60a-996a62f24e55/volumes" Feb 26 22:26:41 crc kubenswrapper[4910]: I0226 22:26:41.928601 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87aa985f-3f4a-459d-b6be-6291e21c20c8" path="/var/lib/kubelet/pods/87aa985f-3f4a-459d-b6be-6291e21c20c8/volumes" Feb 26 22:26:41 crc kubenswrapper[4910]: I0226 22:26:41.930224 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa07e8fd-4975-4e27-9b98-ec23e75b271d" path="/var/lib/kubelet/pods/aa07e8fd-4975-4e27-9b98-ec23e75b271d/volumes" Feb 26 22:26:41 crc kubenswrapper[4910]: I0226 22:26:41.933512 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad794ede-dbfc-4a7f-80b9-9742f1eaed3a" path="/var/lib/kubelet/pods/ad794ede-dbfc-4a7f-80b9-9742f1eaed3a/volumes" Feb 26 22:26:41 crc kubenswrapper[4910]: I0226 22:26:41.934915 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1301e33-c3d6-405b-8762-41744119af4d" path="/var/lib/kubelet/pods/c1301e33-c3d6-405b-8762-41744119af4d/volumes" Feb 26 22:26:41 crc kubenswrapper[4910]: I0226 22:26:41.935732 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9c2599a-4a7d-4a05-9ffc-bab5996a139e" path="/var/lib/kubelet/pods/c9c2599a-4a7d-4a05-9ffc-bab5996a139e/volumes" Feb 26 22:26:41 
crc kubenswrapper[4910]: I0226 22:26:41.937203 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cda59548-ac28-4988-88a5-f8770ab9c914" path="/var/lib/kubelet/pods/cda59548-ac28-4988-88a5-f8770ab9c914/volumes" Feb 26 22:26:44 crc kubenswrapper[4910]: I0226 22:26:44.044824 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-pfjh4"] Feb 26 22:26:44 crc kubenswrapper[4910]: I0226 22:26:44.055141 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-pfjh4"] Feb 26 22:26:45 crc kubenswrapper[4910]: I0226 22:26:45.929863 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a10fff0b-5682-4806-82e0-0d19db3deae4" path="/var/lib/kubelet/pods/a10fff0b-5682-4806-82e0-0d19db3deae4/volumes" Feb 26 22:26:46 crc kubenswrapper[4910]: I0226 22:26:46.047287 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-szq7t"] Feb 26 22:26:46 crc kubenswrapper[4910]: I0226 22:26:46.064778 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-szq7t"] Feb 26 22:26:47 crc kubenswrapper[4910]: I0226 22:26:47.901693 4910 scope.go:117] "RemoveContainer" containerID="a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6" Feb 26 22:26:47 crc kubenswrapper[4910]: E0226 22:26:47.902552 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:26:47 crc kubenswrapper[4910]: I0226 22:26:47.925418 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0230e94e-a757-4c5f-afed-0f4d1e769f7a" 
path="/var/lib/kubelet/pods/0230e94e-a757-4c5f-afed-0f4d1e769f7a/volumes" Feb 26 22:26:50 crc kubenswrapper[4910]: I0226 22:26:50.045060 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-t6mjz"] Feb 26 22:26:50 crc kubenswrapper[4910]: I0226 22:26:50.057199 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-t6mjz"] Feb 26 22:26:51 crc kubenswrapper[4910]: I0226 22:26:51.930606 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e42b4f12-ddac-4a7e-8fcf-6ce66da40085" path="/var/lib/kubelet/pods/e42b4f12-ddac-4a7e-8fcf-6ce66da40085/volumes" Feb 26 22:27:00 crc kubenswrapper[4910]: I0226 22:27:00.902051 4910 scope.go:117] "RemoveContainer" containerID="a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6" Feb 26 22:27:00 crc kubenswrapper[4910]: E0226 22:27:00.903251 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:27:13 crc kubenswrapper[4910]: I0226 22:27:13.056951 4910 generic.go:334] "Generic (PLEG): container finished" podID="90647bce-161d-4a56-86bc-662a69916664" containerID="5c1aad427c4c85b820b3fbb840f910d55a4d9cc28f66a5337829a6ac6a40502e" exitCode=0 Feb 26 22:27:13 crc kubenswrapper[4910]: I0226 22:27:13.057058 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt" event={"ID":"90647bce-161d-4a56-86bc-662a69916664","Type":"ContainerDied","Data":"5c1aad427c4c85b820b3fbb840f910d55a4d9cc28f66a5337829a6ac6a40502e"} Feb 26 22:27:14 crc kubenswrapper[4910]: I0226 22:27:14.063771 4910 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-6xmvf"] Feb 26 22:27:14 crc kubenswrapper[4910]: I0226 22:27:14.074844 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-6xmvf"] Feb 26 22:27:14 crc kubenswrapper[4910]: I0226 22:27:14.656881 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt" Feb 26 22:27:14 crc kubenswrapper[4910]: I0226 22:27:14.762882 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90647bce-161d-4a56-86bc-662a69916664-inventory\") pod \"90647bce-161d-4a56-86bc-662a69916664\" (UID: \"90647bce-161d-4a56-86bc-662a69916664\") " Feb 26 22:27:14 crc kubenswrapper[4910]: I0226 22:27:14.763250 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90647bce-161d-4a56-86bc-662a69916664-ssh-key-openstack-edpm-ipam\") pod \"90647bce-161d-4a56-86bc-662a69916664\" (UID: \"90647bce-161d-4a56-86bc-662a69916664\") " Feb 26 22:27:14 crc kubenswrapper[4910]: I0226 22:27:14.763303 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz4dx\" (UniqueName: \"kubernetes.io/projected/90647bce-161d-4a56-86bc-662a69916664-kube-api-access-kz4dx\") pod \"90647bce-161d-4a56-86bc-662a69916664\" (UID: \"90647bce-161d-4a56-86bc-662a69916664\") " Feb 26 22:27:14 crc kubenswrapper[4910]: I0226 22:27:14.768903 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90647bce-161d-4a56-86bc-662a69916664-kube-api-access-kz4dx" (OuterVolumeSpecName: "kube-api-access-kz4dx") pod "90647bce-161d-4a56-86bc-662a69916664" (UID: "90647bce-161d-4a56-86bc-662a69916664"). InnerVolumeSpecName "kube-api-access-kz4dx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:27:14 crc kubenswrapper[4910]: I0226 22:27:14.796813 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90647bce-161d-4a56-86bc-662a69916664-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "90647bce-161d-4a56-86bc-662a69916664" (UID: "90647bce-161d-4a56-86bc-662a69916664"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:27:14 crc kubenswrapper[4910]: I0226 22:27:14.812949 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90647bce-161d-4a56-86bc-662a69916664-inventory" (OuterVolumeSpecName: "inventory") pod "90647bce-161d-4a56-86bc-662a69916664" (UID: "90647bce-161d-4a56-86bc-662a69916664"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:27:14 crc kubenswrapper[4910]: I0226 22:27:14.865622 4910 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90647bce-161d-4a56-86bc-662a69916664-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 22:27:14 crc kubenswrapper[4910]: I0226 22:27:14.865663 4910 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90647bce-161d-4a56-86bc-662a69916664-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 22:27:14 crc kubenswrapper[4910]: I0226 22:27:14.865676 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz4dx\" (UniqueName: \"kubernetes.io/projected/90647bce-161d-4a56-86bc-662a69916664-kube-api-access-kz4dx\") on node \"crc\" DevicePath \"\"" Feb 26 22:27:15 crc kubenswrapper[4910]: I0226 22:27:15.107495 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt" 
event={"ID":"90647bce-161d-4a56-86bc-662a69916664","Type":"ContainerDied","Data":"66e815d0be8238201af19271a2d8097037493145661004f3ac1b054c92dd4e3d"} Feb 26 22:27:15 crc kubenswrapper[4910]: I0226 22:27:15.107754 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66e815d0be8238201af19271a2d8097037493145661004f3ac1b054c92dd4e3d" Feb 26 22:27:15 crc kubenswrapper[4910]: I0226 22:27:15.107552 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt" Feb 26 22:27:15 crc kubenswrapper[4910]: I0226 22:27:15.171414 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx"] Feb 26 22:27:15 crc kubenswrapper[4910]: E0226 22:27:15.171970 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90647bce-161d-4a56-86bc-662a69916664" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 26 22:27:15 crc kubenswrapper[4910]: I0226 22:27:15.171996 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="90647bce-161d-4a56-86bc-662a69916664" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 26 22:27:15 crc kubenswrapper[4910]: E0226 22:27:15.172018 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01704631-8b12-4b09-932f-fd922af31259" containerName="oc" Feb 26 22:27:15 crc kubenswrapper[4910]: I0226 22:27:15.172028 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="01704631-8b12-4b09-932f-fd922af31259" containerName="oc" Feb 26 22:27:15 crc kubenswrapper[4910]: I0226 22:27:15.172296 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="01704631-8b12-4b09-932f-fd922af31259" containerName="oc" Feb 26 22:27:15 crc kubenswrapper[4910]: I0226 22:27:15.172345 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="90647bce-161d-4a56-86bc-662a69916664" 
containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 26 22:27:15 crc kubenswrapper[4910]: I0226 22:27:15.173237 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx" Feb 26 22:27:15 crc kubenswrapper[4910]: I0226 22:27:15.176594 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 22:27:15 crc kubenswrapper[4910]: I0226 22:27:15.177441 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx"] Feb 26 22:27:15 crc kubenswrapper[4910]: I0226 22:27:15.178051 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 22:27:15 crc kubenswrapper[4910]: I0226 22:27:15.178251 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 22:27:15 crc kubenswrapper[4910]: I0226 22:27:15.178429 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ktmgl" Feb 26 22:27:15 crc kubenswrapper[4910]: I0226 22:27:15.274607 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f521e05c-2c07-434a-8e61-40ba33038794-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx\" (UID: \"f521e05c-2c07-434a-8e61-40ba33038794\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx" Feb 26 22:27:15 crc kubenswrapper[4910]: I0226 22:27:15.274668 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2s6s\" (UniqueName: \"kubernetes.io/projected/f521e05c-2c07-434a-8e61-40ba33038794-kube-api-access-m2s6s\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx\" (UID: \"f521e05c-2c07-434a-8e61-40ba33038794\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx" Feb 26 22:27:15 crc kubenswrapper[4910]: I0226 22:27:15.274920 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f521e05c-2c07-434a-8e61-40ba33038794-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx\" (UID: \"f521e05c-2c07-434a-8e61-40ba33038794\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx" Feb 26 22:27:15 crc kubenswrapper[4910]: I0226 22:27:15.376519 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f521e05c-2c07-434a-8e61-40ba33038794-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx\" (UID: \"f521e05c-2c07-434a-8e61-40ba33038794\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx" Feb 26 22:27:15 crc kubenswrapper[4910]: I0226 22:27:15.376584 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2s6s\" (UniqueName: \"kubernetes.io/projected/f521e05c-2c07-434a-8e61-40ba33038794-kube-api-access-m2s6s\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx\" (UID: \"f521e05c-2c07-434a-8e61-40ba33038794\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx" Feb 26 22:27:15 crc kubenswrapper[4910]: I0226 22:27:15.376646 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f521e05c-2c07-434a-8e61-40ba33038794-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx\" (UID: \"f521e05c-2c07-434a-8e61-40ba33038794\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx" Feb 26 22:27:15 crc kubenswrapper[4910]: I0226 22:27:15.381750 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f521e05c-2c07-434a-8e61-40ba33038794-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx\" (UID: \"f521e05c-2c07-434a-8e61-40ba33038794\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx" Feb 26 22:27:15 crc kubenswrapper[4910]: I0226 22:27:15.384868 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f521e05c-2c07-434a-8e61-40ba33038794-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx\" (UID: \"f521e05c-2c07-434a-8e61-40ba33038794\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx" Feb 26 22:27:15 crc kubenswrapper[4910]: I0226 22:27:15.395393 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2s6s\" (UniqueName: \"kubernetes.io/projected/f521e05c-2c07-434a-8e61-40ba33038794-kube-api-access-m2s6s\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx\" (UID: \"f521e05c-2c07-434a-8e61-40ba33038794\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx" Feb 26 22:27:15 crc kubenswrapper[4910]: I0226 22:27:15.495827 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx" Feb 26 22:27:15 crc kubenswrapper[4910]: I0226 22:27:15.920812 4910 scope.go:117] "RemoveContainer" containerID="a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6" Feb 26 22:27:15 crc kubenswrapper[4910]: E0226 22:27:15.921554 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:27:15 crc kubenswrapper[4910]: I0226 22:27:15.932764 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0474fe2b-3094-4fd5-8f3a-1e9124acb82a" path="/var/lib/kubelet/pods/0474fe2b-3094-4fd5-8f3a-1e9124acb82a/volumes" Feb 26 22:27:16 crc kubenswrapper[4910]: I0226 22:27:16.187975 4910 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 22:27:16 crc kubenswrapper[4910]: I0226 22:27:16.192986 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx"] Feb 26 22:27:17 crc kubenswrapper[4910]: I0226 22:27:17.137947 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx" event={"ID":"f521e05c-2c07-434a-8e61-40ba33038794","Type":"ContainerStarted","Data":"4bdcd48089a7b58b3956aaba3fa8e9d338dff597e220469539972fb7bde25f9d"} Feb 26 22:27:17 crc kubenswrapper[4910]: I0226 22:27:17.200944 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 22:27:18 crc kubenswrapper[4910]: I0226 22:27:18.170639 4910 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx" event={"ID":"f521e05c-2c07-434a-8e61-40ba33038794","Type":"ContainerStarted","Data":"c0f852c348eaaba634ca1b34b7be259e003e7f1930cc7233041458fb5cb8520f"} Feb 26 22:27:18 crc kubenswrapper[4910]: I0226 22:27:18.204104 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx" podStartSLOduration=2.195028954 podStartE2EDuration="3.204087503s" podCreationTimestamp="2026-02-26 22:27:15 +0000 UTC" firstStartedPulling="2026-02-26 22:27:16.187540178 +0000 UTC m=+1921.267030729" lastFinishedPulling="2026-02-26 22:27:17.196598737 +0000 UTC m=+1922.276089278" observedRunningTime="2026-02-26 22:27:18.203365113 +0000 UTC m=+1923.282855694" watchObservedRunningTime="2026-02-26 22:27:18.204087503 +0000 UTC m=+1923.283578044" Feb 26 22:27:27 crc kubenswrapper[4910]: I0226 22:27:27.902844 4910 scope.go:117] "RemoveContainer" containerID="a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6" Feb 26 22:27:27 crc kubenswrapper[4910]: E0226 22:27:27.903822 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:27:32 crc kubenswrapper[4910]: I0226 22:27:32.710133 4910 scope.go:117] "RemoveContainer" containerID="4b4e20de794ae8185d7e74f9c004ba3b4be045d2a8e34cb7e2c94859b15cd272" Feb 26 22:27:32 crc kubenswrapper[4910]: I0226 22:27:32.750010 4910 scope.go:117] "RemoveContainer" containerID="9153d35c2b4d119652b5efda1508b437585610285dafefd33403f7caa2b25872" Feb 26 22:27:32 crc kubenswrapper[4910]: I0226 
22:27:32.820752 4910 scope.go:117] "RemoveContainer" containerID="c2509f0feb5a78a50198f375a96870f2e809c20dd901317266094bc9b17dd888" Feb 26 22:27:32 crc kubenswrapper[4910]: I0226 22:27:32.872841 4910 scope.go:117] "RemoveContainer" containerID="4c719b86b40a17409845ae4bab97583caef70962ff8219608829595edb21d6e7" Feb 26 22:27:32 crc kubenswrapper[4910]: I0226 22:27:32.916724 4910 scope.go:117] "RemoveContainer" containerID="1ddc8fe97c6e8d049929c276f532f5e90094b29fb205f36691dbacfec23444e6" Feb 26 22:27:32 crc kubenswrapper[4910]: I0226 22:27:32.958896 4910 scope.go:117] "RemoveContainer" containerID="89fc9b3aa4e08125f994d6b09f72fbb29dbafaab15e6d529088df229db73a802" Feb 26 22:27:33 crc kubenswrapper[4910]: I0226 22:27:33.001993 4910 scope.go:117] "RemoveContainer" containerID="f7d047433e6a305a7f2810f8182e6e30f3993fe0c3ba9853be094bd7ded8835d" Feb 26 22:27:33 crc kubenswrapper[4910]: I0226 22:27:33.045490 4910 scope.go:117] "RemoveContainer" containerID="293fd214ee91f4bc22769fac129adbe53eb861dfd62123477752fdf57a1cdec2" Feb 26 22:27:33 crc kubenswrapper[4910]: I0226 22:27:33.076964 4910 scope.go:117] "RemoveContainer" containerID="ead4b74f299d8d539aed77abf42ac189d2b0388f7802f1f35fab7978780ab19b" Feb 26 22:27:33 crc kubenswrapper[4910]: I0226 22:27:33.099440 4910 scope.go:117] "RemoveContainer" containerID="82983bb1ce663ffcc1cee79ff7d1da673eec49353b7972c7be71c56c2a082cf7" Feb 26 22:27:33 crc kubenswrapper[4910]: I0226 22:27:33.122539 4910 scope.go:117] "RemoveContainer" containerID="13844c402da3269ec4a9b09ca8a1b0440abcd52e0735173e624113f4963b8379" Feb 26 22:27:33 crc kubenswrapper[4910]: I0226 22:27:33.164654 4910 scope.go:117] "RemoveContainer" containerID="6e1ccaf85888b10f9e6db901f2fce1330456bc4d1a111f44ac1cc0bd551d4d40" Feb 26 22:27:39 crc kubenswrapper[4910]: I0226 22:27:39.047598 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-2hp89"] Feb 26 22:27:39 crc kubenswrapper[4910]: I0226 22:27:39.062134 4910 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/barbican-db-sync-lxj26"] Feb 26 22:27:39 crc kubenswrapper[4910]: I0226 22:27:39.071792 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-2hp89"] Feb 26 22:27:39 crc kubenswrapper[4910]: I0226 22:27:39.088441 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-cwz4m"] Feb 26 22:27:39 crc kubenswrapper[4910]: I0226 22:27:39.098836 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-lxj26"] Feb 26 22:27:39 crc kubenswrapper[4910]: I0226 22:27:39.114609 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-cwz4m"] Feb 26 22:27:39 crc kubenswrapper[4910]: I0226 22:27:39.918018 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f" path="/var/lib/kubelet/pods/5232ebb8-e265-4f9a-8c2c-9f31f5dbb46f/volumes" Feb 26 22:27:39 crc kubenswrapper[4910]: I0226 22:27:39.920423 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58f067fa-7653-4dd7-93ee-bef006c01109" path="/var/lib/kubelet/pods/58f067fa-7653-4dd7-93ee-bef006c01109/volumes" Feb 26 22:27:39 crc kubenswrapper[4910]: I0226 22:27:39.921547 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeb12d5b-0ec7-48d5-b1ef-9e378c030b75" path="/var/lib/kubelet/pods/eeb12d5b-0ec7-48d5-b1ef-9e378c030b75/volumes" Feb 26 22:27:40 crc kubenswrapper[4910]: I0226 22:27:40.031809 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-6xcs7"] Feb 26 22:27:40 crc kubenswrapper[4910]: I0226 22:27:40.040920 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-6xcs7"] Feb 26 22:27:40 crc kubenswrapper[4910]: I0226 22:27:40.902056 4910 scope.go:117] "RemoveContainer" containerID="a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6" Feb 26 22:27:40 crc kubenswrapper[4910]: E0226 
22:27:40.902557 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:27:41 crc kubenswrapper[4910]: I0226 22:27:41.924352 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d861622f-ed9a-4709-824c-bb291c4639a5" path="/var/lib/kubelet/pods/d861622f-ed9a-4709-824c-bb291c4639a5/volumes" Feb 26 22:27:54 crc kubenswrapper[4910]: I0226 22:27:54.901054 4910 scope.go:117] "RemoveContainer" containerID="a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6" Feb 26 22:27:54 crc kubenswrapper[4910]: E0226 22:27:54.901945 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:28:00 crc kubenswrapper[4910]: I0226 22:28:00.163308 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535748-rnqqr"] Feb 26 22:28:00 crc kubenswrapper[4910]: I0226 22:28:00.165269 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535748-rnqqr" Feb 26 22:28:00 crc kubenswrapper[4910]: I0226 22:28:00.169235 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 22:28:00 crc kubenswrapper[4910]: I0226 22:28:00.169792 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 22:28:00 crc kubenswrapper[4910]: I0226 22:28:00.173417 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 22:28:00 crc kubenswrapper[4910]: I0226 22:28:00.189459 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535748-rnqqr"] Feb 26 22:28:00 crc kubenswrapper[4910]: I0226 22:28:00.284010 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jqqz\" (UniqueName: \"kubernetes.io/projected/465f6383-fe83-4b80-9719-e17933b4054e-kube-api-access-7jqqz\") pod \"auto-csr-approver-29535748-rnqqr\" (UID: \"465f6383-fe83-4b80-9719-e17933b4054e\") " pod="openshift-infra/auto-csr-approver-29535748-rnqqr" Feb 26 22:28:00 crc kubenswrapper[4910]: I0226 22:28:00.386457 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jqqz\" (UniqueName: \"kubernetes.io/projected/465f6383-fe83-4b80-9719-e17933b4054e-kube-api-access-7jqqz\") pod \"auto-csr-approver-29535748-rnqqr\" (UID: \"465f6383-fe83-4b80-9719-e17933b4054e\") " pod="openshift-infra/auto-csr-approver-29535748-rnqqr" Feb 26 22:28:00 crc kubenswrapper[4910]: I0226 22:28:00.409463 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jqqz\" (UniqueName: \"kubernetes.io/projected/465f6383-fe83-4b80-9719-e17933b4054e-kube-api-access-7jqqz\") pod \"auto-csr-approver-29535748-rnqqr\" (UID: \"465f6383-fe83-4b80-9719-e17933b4054e\") " 
pod="openshift-infra/auto-csr-approver-29535748-rnqqr" Feb 26 22:28:00 crc kubenswrapper[4910]: I0226 22:28:00.486432 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535748-rnqqr" Feb 26 22:28:01 crc kubenswrapper[4910]: I0226 22:28:01.046246 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535748-rnqqr"] Feb 26 22:28:01 crc kubenswrapper[4910]: W0226 22:28:01.057981 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod465f6383_fe83_4b80_9719_e17933b4054e.slice/crio-3d3e1b2bc03dade76de342cf1c280a262f4e562216a73691143f8e52e70e9924 WatchSource:0}: Error finding container 3d3e1b2bc03dade76de342cf1c280a262f4e562216a73691143f8e52e70e9924: Status 404 returned error can't find the container with id 3d3e1b2bc03dade76de342cf1c280a262f4e562216a73691143f8e52e70e9924 Feb 26 22:28:01 crc kubenswrapper[4910]: I0226 22:28:01.671738 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535748-rnqqr" event={"ID":"465f6383-fe83-4b80-9719-e17933b4054e","Type":"ContainerStarted","Data":"3d3e1b2bc03dade76de342cf1c280a262f4e562216a73691143f8e52e70e9924"} Feb 26 22:28:02 crc kubenswrapper[4910]: I0226 22:28:02.685780 4910 generic.go:334] "Generic (PLEG): container finished" podID="465f6383-fe83-4b80-9719-e17933b4054e" containerID="0d0aacb80788c60b42b841b26cb53e6fa314bd2ad4d419473fdc411cecc6a676" exitCode=0 Feb 26 22:28:02 crc kubenswrapper[4910]: I0226 22:28:02.686338 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535748-rnqqr" event={"ID":"465f6383-fe83-4b80-9719-e17933b4054e","Type":"ContainerDied","Data":"0d0aacb80788c60b42b841b26cb53e6fa314bd2ad4d419473fdc411cecc6a676"} Feb 26 22:28:04 crc kubenswrapper[4910]: I0226 22:28:04.154329 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535748-rnqqr" Feb 26 22:28:04 crc kubenswrapper[4910]: I0226 22:28:04.283895 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jqqz\" (UniqueName: \"kubernetes.io/projected/465f6383-fe83-4b80-9719-e17933b4054e-kube-api-access-7jqqz\") pod \"465f6383-fe83-4b80-9719-e17933b4054e\" (UID: \"465f6383-fe83-4b80-9719-e17933b4054e\") " Feb 26 22:28:04 crc kubenswrapper[4910]: I0226 22:28:04.291722 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/465f6383-fe83-4b80-9719-e17933b4054e-kube-api-access-7jqqz" (OuterVolumeSpecName: "kube-api-access-7jqqz") pod "465f6383-fe83-4b80-9719-e17933b4054e" (UID: "465f6383-fe83-4b80-9719-e17933b4054e"). InnerVolumeSpecName "kube-api-access-7jqqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:28:04 crc kubenswrapper[4910]: I0226 22:28:04.388131 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jqqz\" (UniqueName: \"kubernetes.io/projected/465f6383-fe83-4b80-9719-e17933b4054e-kube-api-access-7jqqz\") on node \"crc\" DevicePath \"\"" Feb 26 22:28:04 crc kubenswrapper[4910]: I0226 22:28:04.715486 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535748-rnqqr" event={"ID":"465f6383-fe83-4b80-9719-e17933b4054e","Type":"ContainerDied","Data":"3d3e1b2bc03dade76de342cf1c280a262f4e562216a73691143f8e52e70e9924"} Feb 26 22:28:04 crc kubenswrapper[4910]: I0226 22:28:04.715530 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d3e1b2bc03dade76de342cf1c280a262f4e562216a73691143f8e52e70e9924" Feb 26 22:28:04 crc kubenswrapper[4910]: I0226 22:28:04.715610 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535748-rnqqr" Feb 26 22:28:05 crc kubenswrapper[4910]: I0226 22:28:05.249605 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535742-trflz"] Feb 26 22:28:05 crc kubenswrapper[4910]: I0226 22:28:05.259732 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535742-trflz"] Feb 26 22:28:05 crc kubenswrapper[4910]: I0226 22:28:05.914787 4910 scope.go:117] "RemoveContainer" containerID="a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6" Feb 26 22:28:05 crc kubenswrapper[4910]: E0226 22:28:05.915055 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:28:05 crc kubenswrapper[4910]: I0226 22:28:05.923707 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="463a94d8-3449-488b-97e1-80044ca36e5a" path="/var/lib/kubelet/pods/463a94d8-3449-488b-97e1-80044ca36e5a/volumes" Feb 26 22:28:18 crc kubenswrapper[4910]: I0226 22:28:18.901776 4910 scope.go:117] "RemoveContainer" containerID="a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6" Feb 26 22:28:18 crc kubenswrapper[4910]: E0226 22:28:18.902898 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" 
podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:28:23 crc kubenswrapper[4910]: I0226 22:28:23.035945 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-r8zdl"] Feb 26 22:28:23 crc kubenswrapper[4910]: I0226 22:28:23.047729 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-r8zdl"] Feb 26 22:28:23 crc kubenswrapper[4910]: I0226 22:28:23.917455 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="316f1fec-ce34-4d56-81c2-2500efe83251" path="/var/lib/kubelet/pods/316f1fec-ce34-4d56-81c2-2500efe83251/volumes" Feb 26 22:28:29 crc kubenswrapper[4910]: I0226 22:28:29.072398 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-6vgtc"] Feb 26 22:28:29 crc kubenswrapper[4910]: I0226 22:28:29.085350 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-6vgtc"] Feb 26 22:28:29 crc kubenswrapper[4910]: I0226 22:28:29.094946 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-7tdc4"] Feb 26 22:28:29 crc kubenswrapper[4910]: I0226 22:28:29.107123 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-8cae-account-create-update-48ftl"] Feb 26 22:28:29 crc kubenswrapper[4910]: I0226 22:28:29.115997 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-361a-account-create-update-c844r"] Feb 26 22:28:29 crc kubenswrapper[4910]: I0226 22:28:29.124733 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-09b2-account-create-update-dslwp"] Feb 26 22:28:29 crc kubenswrapper[4910]: I0226 22:28:29.134967 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-09b2-account-create-update-dslwp"] Feb 26 22:28:29 crc kubenswrapper[4910]: I0226 22:28:29.147536 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-8cae-account-create-update-48ftl"] Feb 26 
22:28:29 crc kubenswrapper[4910]: I0226 22:28:29.157271 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-7tdc4"] Feb 26 22:28:29 crc kubenswrapper[4910]: I0226 22:28:29.170052 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-361a-account-create-update-c844r"] Feb 26 22:28:29 crc kubenswrapper[4910]: I0226 22:28:29.918235 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0003bc46-cac8-43e2-af3d-8018cfd4ab2d" path="/var/lib/kubelet/pods/0003bc46-cac8-43e2-af3d-8018cfd4ab2d/volumes" Feb 26 22:28:29 crc kubenswrapper[4910]: I0226 22:28:29.919321 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28366851-2ead-459f-a372-eb59160e674a" path="/var/lib/kubelet/pods/28366851-2ead-459f-a372-eb59160e674a/volumes" Feb 26 22:28:29 crc kubenswrapper[4910]: I0226 22:28:29.920052 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="354e979d-961b-468d-bd22-d3779ddf79e7" path="/var/lib/kubelet/pods/354e979d-961b-468d-bd22-d3779ddf79e7/volumes" Feb 26 22:28:29 crc kubenswrapper[4910]: I0226 22:28:29.920867 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bee5f61-2821-472a-806f-2474c1171a27" path="/var/lib/kubelet/pods/3bee5f61-2821-472a-806f-2474c1171a27/volumes" Feb 26 22:28:29 crc kubenswrapper[4910]: I0226 22:28:29.922302 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed5b19d3-3822-4ad6-bece-dae55bdafa17" path="/var/lib/kubelet/pods/ed5b19d3-3822-4ad6-bece-dae55bdafa17/volumes" Feb 26 22:28:31 crc kubenswrapper[4910]: I0226 22:28:31.031334 4910 generic.go:334] "Generic (PLEG): container finished" podID="f521e05c-2c07-434a-8e61-40ba33038794" containerID="c0f852c348eaaba634ca1b34b7be259e003e7f1930cc7233041458fb5cb8520f" exitCode=0 Feb 26 22:28:31 crc kubenswrapper[4910]: I0226 22:28:31.031391 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx" event={"ID":"f521e05c-2c07-434a-8e61-40ba33038794","Type":"ContainerDied","Data":"c0f852c348eaaba634ca1b34b7be259e003e7f1930cc7233041458fb5cb8520f"} Feb 26 22:28:32 crc kubenswrapper[4910]: I0226 22:28:32.646675 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx" Feb 26 22:28:32 crc kubenswrapper[4910]: I0226 22:28:32.720126 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f521e05c-2c07-434a-8e61-40ba33038794-ssh-key-openstack-edpm-ipam\") pod \"f521e05c-2c07-434a-8e61-40ba33038794\" (UID: \"f521e05c-2c07-434a-8e61-40ba33038794\") " Feb 26 22:28:32 crc kubenswrapper[4910]: I0226 22:28:32.720494 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f521e05c-2c07-434a-8e61-40ba33038794-inventory\") pod \"f521e05c-2c07-434a-8e61-40ba33038794\" (UID: \"f521e05c-2c07-434a-8e61-40ba33038794\") " Feb 26 22:28:32 crc kubenswrapper[4910]: I0226 22:28:32.720566 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2s6s\" (UniqueName: \"kubernetes.io/projected/f521e05c-2c07-434a-8e61-40ba33038794-kube-api-access-m2s6s\") pod \"f521e05c-2c07-434a-8e61-40ba33038794\" (UID: \"f521e05c-2c07-434a-8e61-40ba33038794\") " Feb 26 22:28:32 crc kubenswrapper[4910]: I0226 22:28:32.725598 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f521e05c-2c07-434a-8e61-40ba33038794-kube-api-access-m2s6s" (OuterVolumeSpecName: "kube-api-access-m2s6s") pod "f521e05c-2c07-434a-8e61-40ba33038794" (UID: "f521e05c-2c07-434a-8e61-40ba33038794"). InnerVolumeSpecName "kube-api-access-m2s6s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:28:32 crc kubenswrapper[4910]: I0226 22:28:32.752492 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f521e05c-2c07-434a-8e61-40ba33038794-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f521e05c-2c07-434a-8e61-40ba33038794" (UID: "f521e05c-2c07-434a-8e61-40ba33038794"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:28:32 crc kubenswrapper[4910]: I0226 22:28:32.759334 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f521e05c-2c07-434a-8e61-40ba33038794-inventory" (OuterVolumeSpecName: "inventory") pod "f521e05c-2c07-434a-8e61-40ba33038794" (UID: "f521e05c-2c07-434a-8e61-40ba33038794"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:28:32 crc kubenswrapper[4910]: I0226 22:28:32.823006 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2s6s\" (UniqueName: \"kubernetes.io/projected/f521e05c-2c07-434a-8e61-40ba33038794-kube-api-access-m2s6s\") on node \"crc\" DevicePath \"\"" Feb 26 22:28:32 crc kubenswrapper[4910]: I0226 22:28:32.823054 4910 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f521e05c-2c07-434a-8e61-40ba33038794-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 22:28:32 crc kubenswrapper[4910]: I0226 22:28:32.823076 4910 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f521e05c-2c07-434a-8e61-40ba33038794-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 22:28:32 crc kubenswrapper[4910]: I0226 22:28:32.901864 4910 scope.go:117] "RemoveContainer" containerID="a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6" Feb 26 22:28:32 crc kubenswrapper[4910]: 
E0226 22:28:32.902507 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.059478 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx" event={"ID":"f521e05c-2c07-434a-8e61-40ba33038794","Type":"ContainerDied","Data":"4bdcd48089a7b58b3956aaba3fa8e9d338dff597e220469539972fb7bde25f9d"} Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.059540 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.059552 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bdcd48089a7b58b3956aaba3fa8e9d338dff597e220469539972fb7bde25f9d" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.201703 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zs82n"] Feb 26 22:28:33 crc kubenswrapper[4910]: E0226 22:28:33.202521 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="465f6383-fe83-4b80-9719-e17933b4054e" containerName="oc" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.202560 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="465f6383-fe83-4b80-9719-e17933b4054e" containerName="oc" Feb 26 22:28:33 crc kubenswrapper[4910]: E0226 22:28:33.202590 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f521e05c-2c07-434a-8e61-40ba33038794" 
containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.202608 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="f521e05c-2c07-434a-8e61-40ba33038794" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.203022 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="465f6383-fe83-4b80-9719-e17933b4054e" containerName="oc" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.203088 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="f521e05c-2c07-434a-8e61-40ba33038794" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.204316 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zs82n" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.212827 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ktmgl" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.213152 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.213520 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.213795 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.231836 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zs82n"] Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.336450 4910 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvsgd\" (UniqueName: \"kubernetes.io/projected/0b294c44-bbde-4d8f-bedc-992f4df703e8-kube-api-access-kvsgd\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zs82n\" (UID: \"0b294c44-bbde-4d8f-bedc-992f4df703e8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zs82n" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.336546 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b294c44-bbde-4d8f-bedc-992f4df703e8-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zs82n\" (UID: \"0b294c44-bbde-4d8f-bedc-992f4df703e8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zs82n" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.336627 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b294c44-bbde-4d8f-bedc-992f4df703e8-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zs82n\" (UID: \"0b294c44-bbde-4d8f-bedc-992f4df703e8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zs82n" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.439323 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b294c44-bbde-4d8f-bedc-992f4df703e8-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zs82n\" (UID: \"0b294c44-bbde-4d8f-bedc-992f4df703e8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zs82n" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.439444 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/0b294c44-bbde-4d8f-bedc-992f4df703e8-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zs82n\" (UID: \"0b294c44-bbde-4d8f-bedc-992f4df703e8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zs82n" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.439625 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvsgd\" (UniqueName: \"kubernetes.io/projected/0b294c44-bbde-4d8f-bedc-992f4df703e8-kube-api-access-kvsgd\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zs82n\" (UID: \"0b294c44-bbde-4d8f-bedc-992f4df703e8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zs82n" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.450354 4910 scope.go:117] "RemoveContainer" containerID="fd38f207fbee22a39fa98c596b78d42b75115d5ee133b0aeedf14aa4c2464cd3" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.453854 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b294c44-bbde-4d8f-bedc-992f4df703e8-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zs82n\" (UID: \"0b294c44-bbde-4d8f-bedc-992f4df703e8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zs82n" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.458003 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvsgd\" (UniqueName: \"kubernetes.io/projected/0b294c44-bbde-4d8f-bedc-992f4df703e8-kube-api-access-kvsgd\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zs82n\" (UID: \"0b294c44-bbde-4d8f-bedc-992f4df703e8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zs82n" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.463177 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/0b294c44-bbde-4d8f-bedc-992f4df703e8-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zs82n\" (UID: \"0b294c44-bbde-4d8f-bedc-992f4df703e8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zs82n" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.550707 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zs82n" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.563696 4910 scope.go:117] "RemoveContainer" containerID="19dda5e010d2d61595446b6b0193bc3d9a4506c7feafcb0975dd07191a5637bf" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.699637 4910 scope.go:117] "RemoveContainer" containerID="19bed303ebb4ecb19af5c18cb7029856e4ab71817c80fa9be6c0ddbd504edf8c" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.743455 4910 scope.go:117] "RemoveContainer" containerID="759f7ee2eb99817164d26f3e0cfdc42e43888121aea7bce3e2319bea0d130c4f" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.799571 4910 scope.go:117] "RemoveContainer" containerID="cff52b7f3641cd791bf33a32cc3edaeede0c73e08a41cbffdd8ff855d4a79f75" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.851935 4910 scope.go:117] "RemoveContainer" containerID="04584fc1579d7d9f0eff0239465a96942c1774112a9abd7c8644a2a0c2debb4c" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.874015 4910 scope.go:117] "RemoveContainer" containerID="da0a17571397fe5aa598a3a2cbd6c2b4bf2e699e5748ea9ba6c6e2e48c434358" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.899243 4910 scope.go:117] "RemoveContainer" containerID="84226330c05b1bc831c7c83ef434984cca7b626b02a8b79c806c9c5dc586299c" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.933907 4910 scope.go:117] "RemoveContainer" containerID="abfa55ec3c7063c27c5b19141a258688bc3368d72fae40a169336276f35f6e50" Feb 26 22:28:33 crc kubenswrapper[4910]: I0226 22:28:33.961150 4910 
scope.go:117] "RemoveContainer" containerID="ce0f9b7cd90d804dd9d769cec11222c53820d0652e3854cad4a0c9e5a5733757" Feb 26 22:28:34 crc kubenswrapper[4910]: I0226 22:28:34.004706 4910 scope.go:117] "RemoveContainer" containerID="e5084e2799e64984714eebda9fdfb632754a254d21eca853ae64bf19c293fa06" Feb 26 22:28:34 crc kubenswrapper[4910]: I0226 22:28:34.165585 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zs82n"] Feb 26 22:28:35 crc kubenswrapper[4910]: I0226 22:28:35.128868 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zs82n" event={"ID":"0b294c44-bbde-4d8f-bedc-992f4df703e8","Type":"ContainerStarted","Data":"bfd2a3884e75a698a0936be1ea0de5aeee35d85da9b826765438a48fceefcfae"} Feb 26 22:28:35 crc kubenswrapper[4910]: I0226 22:28:35.129399 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zs82n" event={"ID":"0b294c44-bbde-4d8f-bedc-992f4df703e8","Type":"ContainerStarted","Data":"728a550ffb163c3b7a2e470d9c7ac2ab06558638b189991c9f6002f2dd028a5a"} Feb 26 22:28:35 crc kubenswrapper[4910]: I0226 22:28:35.157791 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zs82n" podStartSLOduration=1.7396963680000002 podStartE2EDuration="2.15776159s" podCreationTimestamp="2026-02-26 22:28:33 +0000 UTC" firstStartedPulling="2026-02-26 22:28:34.188477484 +0000 UTC m=+1999.267968045" lastFinishedPulling="2026-02-26 22:28:34.606542716 +0000 UTC m=+1999.686033267" observedRunningTime="2026-02-26 22:28:35.149736392 +0000 UTC m=+2000.229226983" watchObservedRunningTime="2026-02-26 22:28:35.15776159 +0000 UTC m=+2000.237252141" Feb 26 22:28:40 crc kubenswrapper[4910]: I0226 22:28:40.184514 4910 generic.go:334] "Generic (PLEG): container finished" 
podID="0b294c44-bbde-4d8f-bedc-992f4df703e8" containerID="bfd2a3884e75a698a0936be1ea0de5aeee35d85da9b826765438a48fceefcfae" exitCode=0 Feb 26 22:28:40 crc kubenswrapper[4910]: I0226 22:28:40.184724 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zs82n" event={"ID":"0b294c44-bbde-4d8f-bedc-992f4df703e8","Type":"ContainerDied","Data":"bfd2a3884e75a698a0936be1ea0de5aeee35d85da9b826765438a48fceefcfae"} Feb 26 22:28:41 crc kubenswrapper[4910]: I0226 22:28:41.787352 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zs82n" Feb 26 22:28:41 crc kubenswrapper[4910]: I0226 22:28:41.836033 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b294c44-bbde-4d8f-bedc-992f4df703e8-ssh-key-openstack-edpm-ipam\") pod \"0b294c44-bbde-4d8f-bedc-992f4df703e8\" (UID: \"0b294c44-bbde-4d8f-bedc-992f4df703e8\") " Feb 26 22:28:41 crc kubenswrapper[4910]: I0226 22:28:41.836219 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b294c44-bbde-4d8f-bedc-992f4df703e8-inventory\") pod \"0b294c44-bbde-4d8f-bedc-992f4df703e8\" (UID: \"0b294c44-bbde-4d8f-bedc-992f4df703e8\") " Feb 26 22:28:41 crc kubenswrapper[4910]: I0226 22:28:41.836266 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvsgd\" (UniqueName: \"kubernetes.io/projected/0b294c44-bbde-4d8f-bedc-992f4df703e8-kube-api-access-kvsgd\") pod \"0b294c44-bbde-4d8f-bedc-992f4df703e8\" (UID: \"0b294c44-bbde-4d8f-bedc-992f4df703e8\") " Feb 26 22:28:41 crc kubenswrapper[4910]: I0226 22:28:41.845310 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/0b294c44-bbde-4d8f-bedc-992f4df703e8-kube-api-access-kvsgd" (OuterVolumeSpecName: "kube-api-access-kvsgd") pod "0b294c44-bbde-4d8f-bedc-992f4df703e8" (UID: "0b294c44-bbde-4d8f-bedc-992f4df703e8"). InnerVolumeSpecName "kube-api-access-kvsgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:28:41 crc kubenswrapper[4910]: I0226 22:28:41.878475 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b294c44-bbde-4d8f-bedc-992f4df703e8-inventory" (OuterVolumeSpecName: "inventory") pod "0b294c44-bbde-4d8f-bedc-992f4df703e8" (UID: "0b294c44-bbde-4d8f-bedc-992f4df703e8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:28:41 crc kubenswrapper[4910]: I0226 22:28:41.879340 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b294c44-bbde-4d8f-bedc-992f4df703e8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0b294c44-bbde-4d8f-bedc-992f4df703e8" (UID: "0b294c44-bbde-4d8f-bedc-992f4df703e8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:28:41 crc kubenswrapper[4910]: I0226 22:28:41.939943 4910 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b294c44-bbde-4d8f-bedc-992f4df703e8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 22:28:41 crc kubenswrapper[4910]: I0226 22:28:41.940016 4910 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b294c44-bbde-4d8f-bedc-992f4df703e8-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 22:28:41 crc kubenswrapper[4910]: I0226 22:28:41.940033 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvsgd\" (UniqueName: \"kubernetes.io/projected/0b294c44-bbde-4d8f-bedc-992f4df703e8-kube-api-access-kvsgd\") on node \"crc\" DevicePath \"\"" Feb 26 22:28:42 crc kubenswrapper[4910]: I0226 22:28:42.212080 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zs82n" event={"ID":"0b294c44-bbde-4d8f-bedc-992f4df703e8","Type":"ContainerDied","Data":"728a550ffb163c3b7a2e470d9c7ac2ab06558638b189991c9f6002f2dd028a5a"} Feb 26 22:28:42 crc kubenswrapper[4910]: I0226 22:28:42.212248 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="728a550ffb163c3b7a2e470d9c7ac2ab06558638b189991c9f6002f2dd028a5a" Feb 26 22:28:42 crc kubenswrapper[4910]: I0226 22:28:42.212359 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zs82n" Feb 26 22:28:42 crc kubenswrapper[4910]: I0226 22:28:42.301365 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-q7fhp"] Feb 26 22:28:42 crc kubenswrapper[4910]: E0226 22:28:42.302365 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b294c44-bbde-4d8f-bedc-992f4df703e8" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 26 22:28:42 crc kubenswrapper[4910]: I0226 22:28:42.302417 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b294c44-bbde-4d8f-bedc-992f4df703e8" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 26 22:28:42 crc kubenswrapper[4910]: I0226 22:28:42.303031 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b294c44-bbde-4d8f-bedc-992f4df703e8" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 26 22:28:42 crc kubenswrapper[4910]: I0226 22:28:42.305034 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-q7fhp" Feb 26 22:28:42 crc kubenswrapper[4910]: I0226 22:28:42.309348 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 22:28:42 crc kubenswrapper[4910]: I0226 22:28:42.311272 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 22:28:42 crc kubenswrapper[4910]: I0226 22:28:42.311589 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 22:28:42 crc kubenswrapper[4910]: I0226 22:28:42.311653 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ktmgl" Feb 26 22:28:42 crc kubenswrapper[4910]: I0226 22:28:42.318424 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-q7fhp"] Feb 26 22:28:42 crc kubenswrapper[4910]: I0226 22:28:42.349402 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f308d423-5b01-4866-9ea0-b48746b243b5-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-q7fhp\" (UID: \"f308d423-5b01-4866-9ea0-b48746b243b5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-q7fhp" Feb 26 22:28:42 crc kubenswrapper[4910]: I0226 22:28:42.349674 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f308d423-5b01-4866-9ea0-b48746b243b5-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-q7fhp\" (UID: \"f308d423-5b01-4866-9ea0-b48746b243b5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-q7fhp" Feb 26 22:28:42 crc kubenswrapper[4910]: I0226 22:28:42.349975 4910 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rvxf\" (UniqueName: \"kubernetes.io/projected/f308d423-5b01-4866-9ea0-b48746b243b5-kube-api-access-6rvxf\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-q7fhp\" (UID: \"f308d423-5b01-4866-9ea0-b48746b243b5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-q7fhp" Feb 26 22:28:42 crc kubenswrapper[4910]: I0226 22:28:42.452448 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f308d423-5b01-4866-9ea0-b48746b243b5-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-q7fhp\" (UID: \"f308d423-5b01-4866-9ea0-b48746b243b5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-q7fhp" Feb 26 22:28:42 crc kubenswrapper[4910]: I0226 22:28:42.452565 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rvxf\" (UniqueName: \"kubernetes.io/projected/f308d423-5b01-4866-9ea0-b48746b243b5-kube-api-access-6rvxf\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-q7fhp\" (UID: \"f308d423-5b01-4866-9ea0-b48746b243b5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-q7fhp" Feb 26 22:28:42 crc kubenswrapper[4910]: I0226 22:28:42.452723 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f308d423-5b01-4866-9ea0-b48746b243b5-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-q7fhp\" (UID: \"f308d423-5b01-4866-9ea0-b48746b243b5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-q7fhp" Feb 26 22:28:42 crc kubenswrapper[4910]: I0226 22:28:42.458180 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/f308d423-5b01-4866-9ea0-b48746b243b5-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-q7fhp\" (UID: \"f308d423-5b01-4866-9ea0-b48746b243b5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-q7fhp" Feb 26 22:28:42 crc kubenswrapper[4910]: I0226 22:28:42.459899 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f308d423-5b01-4866-9ea0-b48746b243b5-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-q7fhp\" (UID: \"f308d423-5b01-4866-9ea0-b48746b243b5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-q7fhp" Feb 26 22:28:42 crc kubenswrapper[4910]: I0226 22:28:42.472302 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rvxf\" (UniqueName: \"kubernetes.io/projected/f308d423-5b01-4866-9ea0-b48746b243b5-kube-api-access-6rvxf\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-q7fhp\" (UID: \"f308d423-5b01-4866-9ea0-b48746b243b5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-q7fhp" Feb 26 22:28:42 crc kubenswrapper[4910]: I0226 22:28:42.627208 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-q7fhp" Feb 26 22:28:43 crc kubenswrapper[4910]: I0226 22:28:43.250728 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-q7fhp"] Feb 26 22:28:43 crc kubenswrapper[4910]: W0226 22:28:43.267715 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf308d423_5b01_4866_9ea0_b48746b243b5.slice/crio-c578fd7a1f7e869f70d67afd4b1b7bfd1abc88727c82d7dc936fcb201d59a855 WatchSource:0}: Error finding container c578fd7a1f7e869f70d67afd4b1b7bfd1abc88727c82d7dc936fcb201d59a855: Status 404 returned error can't find the container with id c578fd7a1f7e869f70d67afd4b1b7bfd1abc88727c82d7dc936fcb201d59a855 Feb 26 22:28:44 crc kubenswrapper[4910]: I0226 22:28:44.242955 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-q7fhp" event={"ID":"f308d423-5b01-4866-9ea0-b48746b243b5","Type":"ContainerStarted","Data":"98932e6fc5d5cff3e285eae9937dc4f4926619adad3f9477153f828145e42b69"} Feb 26 22:28:44 crc kubenswrapper[4910]: I0226 22:28:44.243329 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-q7fhp" event={"ID":"f308d423-5b01-4866-9ea0-b48746b243b5","Type":"ContainerStarted","Data":"c578fd7a1f7e869f70d67afd4b1b7bfd1abc88727c82d7dc936fcb201d59a855"} Feb 26 22:28:44 crc kubenswrapper[4910]: I0226 22:28:44.258653 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-q7fhp" podStartSLOduration=1.830335787 podStartE2EDuration="2.258631256s" podCreationTimestamp="2026-02-26 22:28:42 +0000 UTC" firstStartedPulling="2026-02-26 22:28:43.272105801 +0000 UTC m=+2008.351596342" lastFinishedPulling="2026-02-26 22:28:43.70040123 +0000 UTC m=+2008.779891811" 
observedRunningTime="2026-02-26 22:28:44.258483522 +0000 UTC m=+2009.337974063" watchObservedRunningTime="2026-02-26 22:28:44.258631256 +0000 UTC m=+2009.338121817" Feb 26 22:28:47 crc kubenswrapper[4910]: I0226 22:28:47.902505 4910 scope.go:117] "RemoveContainer" containerID="a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6" Feb 26 22:28:47 crc kubenswrapper[4910]: E0226 22:28:47.903418 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:28:59 crc kubenswrapper[4910]: I0226 22:28:59.902350 4910 scope.go:117] "RemoveContainer" containerID="a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6" Feb 26 22:28:59 crc kubenswrapper[4910]: E0226 22:28:59.903386 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:29:02 crc kubenswrapper[4910]: I0226 22:29:02.066141 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-x9cfd"] Feb 26 22:29:02 crc kubenswrapper[4910]: I0226 22:29:02.086208 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-x9cfd"] Feb 26 22:29:03 crc kubenswrapper[4910]: I0226 22:29:03.956432 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ffed18bb-3818-4692-927b-daa85a2ea2e9" path="/var/lib/kubelet/pods/ffed18bb-3818-4692-927b-daa85a2ea2e9/volumes" Feb 26 22:29:10 crc kubenswrapper[4910]: I0226 22:29:10.902248 4910 scope.go:117] "RemoveContainer" containerID="a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6" Feb 26 22:29:10 crc kubenswrapper[4910]: E0226 22:29:10.903307 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:29:23 crc kubenswrapper[4910]: I0226 22:29:23.698884 4910 generic.go:334] "Generic (PLEG): container finished" podID="f308d423-5b01-4866-9ea0-b48746b243b5" containerID="98932e6fc5d5cff3e285eae9937dc4f4926619adad3f9477153f828145e42b69" exitCode=0 Feb 26 22:29:23 crc kubenswrapper[4910]: I0226 22:29:23.698979 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-q7fhp" event={"ID":"f308d423-5b01-4866-9ea0-b48746b243b5","Type":"ContainerDied","Data":"98932e6fc5d5cff3e285eae9937dc4f4926619adad3f9477153f828145e42b69"} Feb 26 22:29:25 crc kubenswrapper[4910]: I0226 22:29:25.269422 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-q7fhp" Feb 26 22:29:25 crc kubenswrapper[4910]: I0226 22:29:25.465589 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rvxf\" (UniqueName: \"kubernetes.io/projected/f308d423-5b01-4866-9ea0-b48746b243b5-kube-api-access-6rvxf\") pod \"f308d423-5b01-4866-9ea0-b48746b243b5\" (UID: \"f308d423-5b01-4866-9ea0-b48746b243b5\") " Feb 26 22:29:25 crc kubenswrapper[4910]: I0226 22:29:25.465914 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f308d423-5b01-4866-9ea0-b48746b243b5-inventory\") pod \"f308d423-5b01-4866-9ea0-b48746b243b5\" (UID: \"f308d423-5b01-4866-9ea0-b48746b243b5\") " Feb 26 22:29:25 crc kubenswrapper[4910]: I0226 22:29:25.466168 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f308d423-5b01-4866-9ea0-b48746b243b5-ssh-key-openstack-edpm-ipam\") pod \"f308d423-5b01-4866-9ea0-b48746b243b5\" (UID: \"f308d423-5b01-4866-9ea0-b48746b243b5\") " Feb 26 22:29:25 crc kubenswrapper[4910]: I0226 22:29:25.483949 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f308d423-5b01-4866-9ea0-b48746b243b5-kube-api-access-6rvxf" (OuterVolumeSpecName: "kube-api-access-6rvxf") pod "f308d423-5b01-4866-9ea0-b48746b243b5" (UID: "f308d423-5b01-4866-9ea0-b48746b243b5"). InnerVolumeSpecName "kube-api-access-6rvxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:29:25 crc kubenswrapper[4910]: I0226 22:29:25.499272 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f308d423-5b01-4866-9ea0-b48746b243b5-inventory" (OuterVolumeSpecName: "inventory") pod "f308d423-5b01-4866-9ea0-b48746b243b5" (UID: "f308d423-5b01-4866-9ea0-b48746b243b5"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:29:25 crc kubenswrapper[4910]: I0226 22:29:25.515287 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f308d423-5b01-4866-9ea0-b48746b243b5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f308d423-5b01-4866-9ea0-b48746b243b5" (UID: "f308d423-5b01-4866-9ea0-b48746b243b5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:29:25 crc kubenswrapper[4910]: I0226 22:29:25.569098 4910 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f308d423-5b01-4866-9ea0-b48746b243b5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 22:29:25 crc kubenswrapper[4910]: I0226 22:29:25.569142 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rvxf\" (UniqueName: \"kubernetes.io/projected/f308d423-5b01-4866-9ea0-b48746b243b5-kube-api-access-6rvxf\") on node \"crc\" DevicePath \"\"" Feb 26 22:29:25 crc kubenswrapper[4910]: I0226 22:29:25.569155 4910 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f308d423-5b01-4866-9ea0-b48746b243b5-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 22:29:25 crc kubenswrapper[4910]: I0226 22:29:25.720002 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-q7fhp" event={"ID":"f308d423-5b01-4866-9ea0-b48746b243b5","Type":"ContainerDied","Data":"c578fd7a1f7e869f70d67afd4b1b7bfd1abc88727c82d7dc936fcb201d59a855"} Feb 26 22:29:25 crc kubenswrapper[4910]: I0226 22:29:25.720042 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c578fd7a1f7e869f70d67afd4b1b7bfd1abc88727c82d7dc936fcb201d59a855" Feb 26 22:29:25 crc kubenswrapper[4910]: I0226 
22:29:25.720103 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-q7fhp" Feb 26 22:29:25 crc kubenswrapper[4910]: I0226 22:29:25.887997 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q"] Feb 26 22:29:25 crc kubenswrapper[4910]: E0226 22:29:25.888638 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f308d423-5b01-4866-9ea0-b48746b243b5" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 26 22:29:25 crc kubenswrapper[4910]: I0226 22:29:25.888662 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="f308d423-5b01-4866-9ea0-b48746b243b5" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 26 22:29:25 crc kubenswrapper[4910]: I0226 22:29:25.888945 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="f308d423-5b01-4866-9ea0-b48746b243b5" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 26 22:29:25 crc kubenswrapper[4910]: I0226 22:29:25.889998 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q" Feb 26 22:29:25 crc kubenswrapper[4910]: I0226 22:29:25.892101 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 22:29:25 crc kubenswrapper[4910]: I0226 22:29:25.892398 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ktmgl" Feb 26 22:29:25 crc kubenswrapper[4910]: I0226 22:29:25.892833 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 22:29:25 crc kubenswrapper[4910]: I0226 22:29:25.901812 4910 scope.go:117] "RemoveContainer" containerID="a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6" Feb 26 22:29:25 crc kubenswrapper[4910]: E0226 22:29:25.902076 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:29:25 crc kubenswrapper[4910]: I0226 22:29:25.912176 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 22:29:25 crc kubenswrapper[4910]: I0226 22:29:25.925972 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q"] Feb 26 22:29:26 crc kubenswrapper[4910]: I0226 22:29:26.078334 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6563cac-bed3-4ce6-a7d3-d6ed8e53a195-inventory\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q\" (UID: \"b6563cac-bed3-4ce6-a7d3-d6ed8e53a195\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q" Feb 26 22:29:26 crc kubenswrapper[4910]: I0226 22:29:26.078643 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6563cac-bed3-4ce6-a7d3-d6ed8e53a195-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q\" (UID: \"b6563cac-bed3-4ce6-a7d3-d6ed8e53a195\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q" Feb 26 22:29:26 crc kubenswrapper[4910]: I0226 22:29:26.079898 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zhtq\" (UniqueName: \"kubernetes.io/projected/b6563cac-bed3-4ce6-a7d3-d6ed8e53a195-kube-api-access-7zhtq\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q\" (UID: \"b6563cac-bed3-4ce6-a7d3-d6ed8e53a195\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q" Feb 26 22:29:26 crc kubenswrapper[4910]: I0226 22:29:26.182773 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zhtq\" (UniqueName: \"kubernetes.io/projected/b6563cac-bed3-4ce6-a7d3-d6ed8e53a195-kube-api-access-7zhtq\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q\" (UID: \"b6563cac-bed3-4ce6-a7d3-d6ed8e53a195\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q" Feb 26 22:29:26 crc kubenswrapper[4910]: I0226 22:29:26.182880 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6563cac-bed3-4ce6-a7d3-d6ed8e53a195-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q\" (UID: \"b6563cac-bed3-4ce6-a7d3-d6ed8e53a195\") " 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q" Feb 26 22:29:26 crc kubenswrapper[4910]: I0226 22:29:26.182935 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6563cac-bed3-4ce6-a7d3-d6ed8e53a195-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q\" (UID: \"b6563cac-bed3-4ce6-a7d3-d6ed8e53a195\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q" Feb 26 22:29:26 crc kubenswrapper[4910]: I0226 22:29:26.187109 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6563cac-bed3-4ce6-a7d3-d6ed8e53a195-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q\" (UID: \"b6563cac-bed3-4ce6-a7d3-d6ed8e53a195\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q" Feb 26 22:29:26 crc kubenswrapper[4910]: I0226 22:29:26.195399 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6563cac-bed3-4ce6-a7d3-d6ed8e53a195-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q\" (UID: \"b6563cac-bed3-4ce6-a7d3-d6ed8e53a195\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q" Feb 26 22:29:26 crc kubenswrapper[4910]: I0226 22:29:26.201124 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zhtq\" (UniqueName: \"kubernetes.io/projected/b6563cac-bed3-4ce6-a7d3-d6ed8e53a195-kube-api-access-7zhtq\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q\" (UID: \"b6563cac-bed3-4ce6-a7d3-d6ed8e53a195\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q" Feb 26 22:29:26 crc kubenswrapper[4910]: I0226 22:29:26.205208 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q" Feb 26 22:29:26 crc kubenswrapper[4910]: I0226 22:29:26.783038 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q"] Feb 26 22:29:27 crc kubenswrapper[4910]: I0226 22:29:27.740369 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q" event={"ID":"b6563cac-bed3-4ce6-a7d3-d6ed8e53a195","Type":"ContainerStarted","Data":"8fbe156842c08488a45d15ac360f7d20ccd3b2dd3678dbd0e695e8acf3dc9a2b"} Feb 26 22:29:27 crc kubenswrapper[4910]: I0226 22:29:27.740766 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q" event={"ID":"b6563cac-bed3-4ce6-a7d3-d6ed8e53a195","Type":"ContainerStarted","Data":"422bb18ff9caa18d251f46384e1f1506baa6f4c44ad43c2415082efb2eb310ac"} Feb 26 22:29:27 crc kubenswrapper[4910]: I0226 22:29:27.757729 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q" podStartSLOduration=2.241007272 podStartE2EDuration="2.757713118s" podCreationTimestamp="2026-02-26 22:29:25 +0000 UTC" firstStartedPulling="2026-02-26 22:29:26.788580985 +0000 UTC m=+2051.868071526" lastFinishedPulling="2026-02-26 22:29:27.305286831 +0000 UTC m=+2052.384777372" observedRunningTime="2026-02-26 22:29:27.755382704 +0000 UTC m=+2052.834873255" watchObservedRunningTime="2026-02-26 22:29:27.757713118 +0000 UTC m=+2052.837203659" Feb 26 22:29:34 crc kubenswrapper[4910]: I0226 22:29:34.268206 4910 scope.go:117] "RemoveContainer" containerID="17c62054dea1ce31fdacd4c749d7c20f9a45358321e2cf4cdaaa3d5b34ee0476" Feb 26 22:29:36 crc kubenswrapper[4910]: I0226 22:29:36.047202 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-z8wm4"] Feb 26 22:29:36 crc 
kubenswrapper[4910]: I0226 22:29:36.055937 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-z8wm4"] Feb 26 22:29:37 crc kubenswrapper[4910]: I0226 22:29:37.040823 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xmz7f"] Feb 26 22:29:37 crc kubenswrapper[4910]: I0226 22:29:37.054888 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xmz7f"] Feb 26 22:29:37 crc kubenswrapper[4910]: I0226 22:29:37.915457 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="231fa51c-3886-4460-b26b-029dfe9d2166" path="/var/lib/kubelet/pods/231fa51c-3886-4460-b26b-029dfe9d2166/volumes" Feb 26 22:29:37 crc kubenswrapper[4910]: I0226 22:29:37.916282 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1a9974e-7def-47e8-b055-fc5412319aca" path="/var/lib/kubelet/pods/c1a9974e-7def-47e8-b055-fc5412319aca/volumes" Feb 26 22:29:38 crc kubenswrapper[4910]: I0226 22:29:38.901969 4910 scope.go:117] "RemoveContainer" containerID="a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6" Feb 26 22:29:38 crc kubenswrapper[4910]: E0226 22:29:38.902307 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:29:53 crc kubenswrapper[4910]: I0226 22:29:53.906949 4910 scope.go:117] "RemoveContainer" containerID="a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6" Feb 26 22:29:53 crc kubenswrapper[4910]: E0226 22:29:53.907757 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:30:00 crc kubenswrapper[4910]: I0226 22:30:00.144645 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535750-8n2rt"] Feb 26 22:30:00 crc kubenswrapper[4910]: I0226 22:30:00.146446 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535750-8n2rt" Feb 26 22:30:00 crc kubenswrapper[4910]: I0226 22:30:00.148960 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 22:30:00 crc kubenswrapper[4910]: I0226 22:30:00.149090 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 22:30:00 crc kubenswrapper[4910]: I0226 22:30:00.149375 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 22:30:00 crc kubenswrapper[4910]: I0226 22:30:00.156213 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535750-vn97t"] Feb 26 22:30:00 crc kubenswrapper[4910]: I0226 22:30:00.166786 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535750-vn97t" Feb 26 22:30:00 crc kubenswrapper[4910]: I0226 22:30:00.169013 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 22:30:00 crc kubenswrapper[4910]: I0226 22:30:00.170752 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 22:30:00 crc kubenswrapper[4910]: I0226 22:30:00.172435 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535750-8n2rt"] Feb 26 22:30:00 crc kubenswrapper[4910]: I0226 22:30:00.212781 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535750-vn97t"] Feb 26 22:30:00 crc kubenswrapper[4910]: I0226 22:30:00.236036 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/701e4529-3817-4c3d-a6f6-f9ffc4fccf7a-config-volume\") pod \"collect-profiles-29535750-vn97t\" (UID: \"701e4529-3817-4c3d-a6f6-f9ffc4fccf7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535750-vn97t" Feb 26 22:30:00 crc kubenswrapper[4910]: I0226 22:30:00.236478 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdwgw\" (UniqueName: \"kubernetes.io/projected/701e4529-3817-4c3d-a6f6-f9ffc4fccf7a-kube-api-access-hdwgw\") pod \"collect-profiles-29535750-vn97t\" (UID: \"701e4529-3817-4c3d-a6f6-f9ffc4fccf7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535750-vn97t" Feb 26 22:30:00 crc kubenswrapper[4910]: I0226 22:30:00.236594 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvm2h\" (UniqueName: 
\"kubernetes.io/projected/cf549ee0-b6a6-4060-8dfe-2477749974d8-kube-api-access-mvm2h\") pod \"auto-csr-approver-29535750-8n2rt\" (UID: \"cf549ee0-b6a6-4060-8dfe-2477749974d8\") " pod="openshift-infra/auto-csr-approver-29535750-8n2rt" Feb 26 22:30:00 crc kubenswrapper[4910]: I0226 22:30:00.236672 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/701e4529-3817-4c3d-a6f6-f9ffc4fccf7a-secret-volume\") pod \"collect-profiles-29535750-vn97t\" (UID: \"701e4529-3817-4c3d-a6f6-f9ffc4fccf7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535750-vn97t" Feb 26 22:30:00 crc kubenswrapper[4910]: I0226 22:30:00.338549 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvm2h\" (UniqueName: \"kubernetes.io/projected/cf549ee0-b6a6-4060-8dfe-2477749974d8-kube-api-access-mvm2h\") pod \"auto-csr-approver-29535750-8n2rt\" (UID: \"cf549ee0-b6a6-4060-8dfe-2477749974d8\") " pod="openshift-infra/auto-csr-approver-29535750-8n2rt" Feb 26 22:30:00 crc kubenswrapper[4910]: I0226 22:30:00.338598 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/701e4529-3817-4c3d-a6f6-f9ffc4fccf7a-secret-volume\") pod \"collect-profiles-29535750-vn97t\" (UID: \"701e4529-3817-4c3d-a6f6-f9ffc4fccf7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535750-vn97t" Feb 26 22:30:00 crc kubenswrapper[4910]: I0226 22:30:00.338725 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/701e4529-3817-4c3d-a6f6-f9ffc4fccf7a-config-volume\") pod \"collect-profiles-29535750-vn97t\" (UID: \"701e4529-3817-4c3d-a6f6-f9ffc4fccf7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535750-vn97t" Feb 26 22:30:00 crc kubenswrapper[4910]: I0226 22:30:00.338795 4910 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdwgw\" (UniqueName: \"kubernetes.io/projected/701e4529-3817-4c3d-a6f6-f9ffc4fccf7a-kube-api-access-hdwgw\") pod \"collect-profiles-29535750-vn97t\" (UID: \"701e4529-3817-4c3d-a6f6-f9ffc4fccf7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535750-vn97t" Feb 26 22:30:00 crc kubenswrapper[4910]: I0226 22:30:00.340047 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/701e4529-3817-4c3d-a6f6-f9ffc4fccf7a-config-volume\") pod \"collect-profiles-29535750-vn97t\" (UID: \"701e4529-3817-4c3d-a6f6-f9ffc4fccf7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535750-vn97t" Feb 26 22:30:00 crc kubenswrapper[4910]: I0226 22:30:00.347051 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/701e4529-3817-4c3d-a6f6-f9ffc4fccf7a-secret-volume\") pod \"collect-profiles-29535750-vn97t\" (UID: \"701e4529-3817-4c3d-a6f6-f9ffc4fccf7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535750-vn97t" Feb 26 22:30:00 crc kubenswrapper[4910]: I0226 22:30:00.360265 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdwgw\" (UniqueName: \"kubernetes.io/projected/701e4529-3817-4c3d-a6f6-f9ffc4fccf7a-kube-api-access-hdwgw\") pod \"collect-profiles-29535750-vn97t\" (UID: \"701e4529-3817-4c3d-a6f6-f9ffc4fccf7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535750-vn97t" Feb 26 22:30:00 crc kubenswrapper[4910]: I0226 22:30:00.364961 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvm2h\" (UniqueName: \"kubernetes.io/projected/cf549ee0-b6a6-4060-8dfe-2477749974d8-kube-api-access-mvm2h\") pod \"auto-csr-approver-29535750-8n2rt\" (UID: \"cf549ee0-b6a6-4060-8dfe-2477749974d8\") " 
pod="openshift-infra/auto-csr-approver-29535750-8n2rt" Feb 26 22:30:00 crc kubenswrapper[4910]: I0226 22:30:00.482385 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535750-8n2rt" Feb 26 22:30:00 crc kubenswrapper[4910]: I0226 22:30:00.503970 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535750-vn97t" Feb 26 22:30:01 crc kubenswrapper[4910]: I0226 22:30:01.003240 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535750-8n2rt"] Feb 26 22:30:01 crc kubenswrapper[4910]: I0226 22:30:01.103096 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535750-8n2rt" event={"ID":"cf549ee0-b6a6-4060-8dfe-2477749974d8","Type":"ContainerStarted","Data":"9fac7f52f8b80536297275f19ddd9c8b35c0ad72df6128f6ab5302b1f82a966a"} Feb 26 22:30:01 crc kubenswrapper[4910]: I0226 22:30:01.159891 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535750-vn97t"] Feb 26 22:30:01 crc kubenswrapper[4910]: W0226 22:30:01.164979 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod701e4529_3817_4c3d_a6f6_f9ffc4fccf7a.slice/crio-a6398f466ced3df84f50b50e15f3cf676c264db33d3eb9f921dc9b5aa8f513ca WatchSource:0}: Error finding container a6398f466ced3df84f50b50e15f3cf676c264db33d3eb9f921dc9b5aa8f513ca: Status 404 returned error can't find the container with id a6398f466ced3df84f50b50e15f3cf676c264db33d3eb9f921dc9b5aa8f513ca Feb 26 22:30:02 crc kubenswrapper[4910]: I0226 22:30:02.115192 4910 generic.go:334] "Generic (PLEG): container finished" podID="701e4529-3817-4c3d-a6f6-f9ffc4fccf7a" containerID="ba01c7db4092b71256c66a343fc5a0d07c56f19988eb55e81a947442e1cdc4c2" exitCode=0 Feb 26 22:30:02 crc kubenswrapper[4910]: I0226 
22:30:02.115409 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535750-vn97t" event={"ID":"701e4529-3817-4c3d-a6f6-f9ffc4fccf7a","Type":"ContainerDied","Data":"ba01c7db4092b71256c66a343fc5a0d07c56f19988eb55e81a947442e1cdc4c2"} Feb 26 22:30:02 crc kubenswrapper[4910]: I0226 22:30:02.115494 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535750-vn97t" event={"ID":"701e4529-3817-4c3d-a6f6-f9ffc4fccf7a","Type":"ContainerStarted","Data":"a6398f466ced3df84f50b50e15f3cf676c264db33d3eb9f921dc9b5aa8f513ca"} Feb 26 22:30:03 crc kubenswrapper[4910]: I0226 22:30:03.651283 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535750-vn97t" Feb 26 22:30:03 crc kubenswrapper[4910]: I0226 22:30:03.822093 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdwgw\" (UniqueName: \"kubernetes.io/projected/701e4529-3817-4c3d-a6f6-f9ffc4fccf7a-kube-api-access-hdwgw\") pod \"701e4529-3817-4c3d-a6f6-f9ffc4fccf7a\" (UID: \"701e4529-3817-4c3d-a6f6-f9ffc4fccf7a\") " Feb 26 22:30:03 crc kubenswrapper[4910]: I0226 22:30:03.822253 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/701e4529-3817-4c3d-a6f6-f9ffc4fccf7a-secret-volume\") pod \"701e4529-3817-4c3d-a6f6-f9ffc4fccf7a\" (UID: \"701e4529-3817-4c3d-a6f6-f9ffc4fccf7a\") " Feb 26 22:30:03 crc kubenswrapper[4910]: I0226 22:30:03.822330 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/701e4529-3817-4c3d-a6f6-f9ffc4fccf7a-config-volume\") pod \"701e4529-3817-4c3d-a6f6-f9ffc4fccf7a\" (UID: \"701e4529-3817-4c3d-a6f6-f9ffc4fccf7a\") " Feb 26 22:30:03 crc kubenswrapper[4910]: I0226 22:30:03.823448 4910 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/701e4529-3817-4c3d-a6f6-f9ffc4fccf7a-config-volume" (OuterVolumeSpecName: "config-volume") pod "701e4529-3817-4c3d-a6f6-f9ffc4fccf7a" (UID: "701e4529-3817-4c3d-a6f6-f9ffc4fccf7a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:30:03 crc kubenswrapper[4910]: I0226 22:30:03.828883 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/701e4529-3817-4c3d-a6f6-f9ffc4fccf7a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "701e4529-3817-4c3d-a6f6-f9ffc4fccf7a" (UID: "701e4529-3817-4c3d-a6f6-f9ffc4fccf7a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:30:03 crc kubenswrapper[4910]: I0226 22:30:03.829081 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/701e4529-3817-4c3d-a6f6-f9ffc4fccf7a-kube-api-access-hdwgw" (OuterVolumeSpecName: "kube-api-access-hdwgw") pod "701e4529-3817-4c3d-a6f6-f9ffc4fccf7a" (UID: "701e4529-3817-4c3d-a6f6-f9ffc4fccf7a"). InnerVolumeSpecName "kube-api-access-hdwgw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:30:03 crc kubenswrapper[4910]: I0226 22:30:03.925684 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdwgw\" (UniqueName: \"kubernetes.io/projected/701e4529-3817-4c3d-a6f6-f9ffc4fccf7a-kube-api-access-hdwgw\") on node \"crc\" DevicePath \"\"" Feb 26 22:30:03 crc kubenswrapper[4910]: I0226 22:30:03.925931 4910 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/701e4529-3817-4c3d-a6f6-f9ffc4fccf7a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 22:30:03 crc kubenswrapper[4910]: I0226 22:30:03.925944 4910 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/701e4529-3817-4c3d-a6f6-f9ffc4fccf7a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 22:30:04 crc kubenswrapper[4910]: I0226 22:30:04.144922 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535750-vn97t" event={"ID":"701e4529-3817-4c3d-a6f6-f9ffc4fccf7a","Type":"ContainerDied","Data":"a6398f466ced3df84f50b50e15f3cf676c264db33d3eb9f921dc9b5aa8f513ca"} Feb 26 22:30:04 crc kubenswrapper[4910]: I0226 22:30:04.144964 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6398f466ced3df84f50b50e15f3cf676c264db33d3eb9f921dc9b5aa8f513ca" Feb 26 22:30:04 crc kubenswrapper[4910]: I0226 22:30:04.145013 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535750-vn97t" Feb 26 22:30:04 crc kubenswrapper[4910]: I0226 22:30:04.148675 4910 generic.go:334] "Generic (PLEG): container finished" podID="cf549ee0-b6a6-4060-8dfe-2477749974d8" containerID="e14d070263cd06cef003940fc2bb8c6b7788d4003fd6776f78e3de1617d5d7c3" exitCode=0 Feb 26 22:30:04 crc kubenswrapper[4910]: I0226 22:30:04.148740 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535750-8n2rt" event={"ID":"cf549ee0-b6a6-4060-8dfe-2477749974d8","Type":"ContainerDied","Data":"e14d070263cd06cef003940fc2bb8c6b7788d4003fd6776f78e3de1617d5d7c3"} Feb 26 22:30:04 crc kubenswrapper[4910]: I0226 22:30:04.742036 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535705-jc29q"] Feb 26 22:30:04 crc kubenswrapper[4910]: I0226 22:30:04.752021 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535705-jc29q"] Feb 26 22:30:05 crc kubenswrapper[4910]: I0226 22:30:05.646218 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535750-8n2rt" Feb 26 22:30:05 crc kubenswrapper[4910]: I0226 22:30:05.765930 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvm2h\" (UniqueName: \"kubernetes.io/projected/cf549ee0-b6a6-4060-8dfe-2477749974d8-kube-api-access-mvm2h\") pod \"cf549ee0-b6a6-4060-8dfe-2477749974d8\" (UID: \"cf549ee0-b6a6-4060-8dfe-2477749974d8\") " Feb 26 22:30:05 crc kubenswrapper[4910]: I0226 22:30:05.771626 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf549ee0-b6a6-4060-8dfe-2477749974d8-kube-api-access-mvm2h" (OuterVolumeSpecName: "kube-api-access-mvm2h") pod "cf549ee0-b6a6-4060-8dfe-2477749974d8" (UID: "cf549ee0-b6a6-4060-8dfe-2477749974d8"). 
InnerVolumeSpecName "kube-api-access-mvm2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:30:05 crc kubenswrapper[4910]: I0226 22:30:05.868587 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvm2h\" (UniqueName: \"kubernetes.io/projected/cf549ee0-b6a6-4060-8dfe-2477749974d8-kube-api-access-mvm2h\") on node \"crc\" DevicePath \"\"" Feb 26 22:30:05 crc kubenswrapper[4910]: I0226 22:30:05.916993 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e17437b5-ba61-4630-83bd-8436fcbd659f" path="/var/lib/kubelet/pods/e17437b5-ba61-4630-83bd-8436fcbd659f/volumes" Feb 26 22:30:06 crc kubenswrapper[4910]: I0226 22:30:06.171869 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535750-8n2rt" event={"ID":"cf549ee0-b6a6-4060-8dfe-2477749974d8","Type":"ContainerDied","Data":"9fac7f52f8b80536297275f19ddd9c8b35c0ad72df6128f6ab5302b1f82a966a"} Feb 26 22:30:06 crc kubenswrapper[4910]: I0226 22:30:06.172198 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fac7f52f8b80536297275f19ddd9c8b35c0ad72df6128f6ab5302b1f82a966a" Feb 26 22:30:06 crc kubenswrapper[4910]: I0226 22:30:06.171963 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535750-8n2rt" Feb 26 22:30:06 crc kubenswrapper[4910]: I0226 22:30:06.736539 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535744-47czd"] Feb 26 22:30:06 crc kubenswrapper[4910]: I0226 22:30:06.746504 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535744-47czd"] Feb 26 22:30:07 crc kubenswrapper[4910]: I0226 22:30:07.916796 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2cf5166-5c36-4301-80c2-774c31060096" path="/var/lib/kubelet/pods/a2cf5166-5c36-4301-80c2-774c31060096/volumes" Feb 26 22:30:08 crc kubenswrapper[4910]: I0226 22:30:08.902424 4910 scope.go:117] "RemoveContainer" containerID="a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6" Feb 26 22:30:09 crc kubenswrapper[4910]: I0226 22:30:09.208518 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerStarted","Data":"82909def5d89987cafbb718608b443b77e385492453e4fd861b04d0417660d57"} Feb 26 22:30:18 crc kubenswrapper[4910]: I0226 22:30:18.336229 4910 generic.go:334] "Generic (PLEG): container finished" podID="b6563cac-bed3-4ce6-a7d3-d6ed8e53a195" containerID="8fbe156842c08488a45d15ac360f7d20ccd3b2dd3678dbd0e695e8acf3dc9a2b" exitCode=0 Feb 26 22:30:18 crc kubenswrapper[4910]: I0226 22:30:18.336340 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q" event={"ID":"b6563cac-bed3-4ce6-a7d3-d6ed8e53a195","Type":"ContainerDied","Data":"8fbe156842c08488a45d15ac360f7d20ccd3b2dd3678dbd0e695e8acf3dc9a2b"} Feb 26 22:30:19 crc kubenswrapper[4910]: I0226 22:30:19.879112 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q" Feb 26 22:30:19 crc kubenswrapper[4910]: I0226 22:30:19.993632 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zhtq\" (UniqueName: \"kubernetes.io/projected/b6563cac-bed3-4ce6-a7d3-d6ed8e53a195-kube-api-access-7zhtq\") pod \"b6563cac-bed3-4ce6-a7d3-d6ed8e53a195\" (UID: \"b6563cac-bed3-4ce6-a7d3-d6ed8e53a195\") " Feb 26 22:30:19 crc kubenswrapper[4910]: I0226 22:30:19.994178 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6563cac-bed3-4ce6-a7d3-d6ed8e53a195-ssh-key-openstack-edpm-ipam\") pod \"b6563cac-bed3-4ce6-a7d3-d6ed8e53a195\" (UID: \"b6563cac-bed3-4ce6-a7d3-d6ed8e53a195\") " Feb 26 22:30:19 crc kubenswrapper[4910]: I0226 22:30:19.994344 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6563cac-bed3-4ce6-a7d3-d6ed8e53a195-inventory\") pod \"b6563cac-bed3-4ce6-a7d3-d6ed8e53a195\" (UID: \"b6563cac-bed3-4ce6-a7d3-d6ed8e53a195\") " Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.000683 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6563cac-bed3-4ce6-a7d3-d6ed8e53a195-kube-api-access-7zhtq" (OuterVolumeSpecName: "kube-api-access-7zhtq") pod "b6563cac-bed3-4ce6-a7d3-d6ed8e53a195" (UID: "b6563cac-bed3-4ce6-a7d3-d6ed8e53a195"). InnerVolumeSpecName "kube-api-access-7zhtq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.053726 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6563cac-bed3-4ce6-a7d3-d6ed8e53a195-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b6563cac-bed3-4ce6-a7d3-d6ed8e53a195" (UID: "b6563cac-bed3-4ce6-a7d3-d6ed8e53a195"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.053793 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-6sdhx"] Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.063480 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-6sdhx"] Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.072839 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6563cac-bed3-4ce6-a7d3-d6ed8e53a195-inventory" (OuterVolumeSpecName: "inventory") pod "b6563cac-bed3-4ce6-a7d3-d6ed8e53a195" (UID: "b6563cac-bed3-4ce6-a7d3-d6ed8e53a195"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.096924 4910 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6563cac-bed3-4ce6-a7d3-d6ed8e53a195-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.096954 4910 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6563cac-bed3-4ce6-a7d3-d6ed8e53a195-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.096963 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zhtq\" (UniqueName: \"kubernetes.io/projected/b6563cac-bed3-4ce6-a7d3-d6ed8e53a195-kube-api-access-7zhtq\") on node \"crc\" DevicePath \"\"" Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.360973 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q" event={"ID":"b6563cac-bed3-4ce6-a7d3-d6ed8e53a195","Type":"ContainerDied","Data":"422bb18ff9caa18d251f46384e1f1506baa6f4c44ad43c2415082efb2eb310ac"} Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.361444 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="422bb18ff9caa18d251f46384e1f1506baa6f4c44ad43c2415082efb2eb310ac" Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.361041 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q" Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.468710 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-r4srj"] Feb 26 22:30:20 crc kubenswrapper[4910]: E0226 22:30:20.469174 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf549ee0-b6a6-4060-8dfe-2477749974d8" containerName="oc" Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.469192 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf549ee0-b6a6-4060-8dfe-2477749974d8" containerName="oc" Feb 26 22:30:20 crc kubenswrapper[4910]: E0226 22:30:20.469211 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="701e4529-3817-4c3d-a6f6-f9ffc4fccf7a" containerName="collect-profiles" Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.469218 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="701e4529-3817-4c3d-a6f6-f9ffc4fccf7a" containerName="collect-profiles" Feb 26 22:30:20 crc kubenswrapper[4910]: E0226 22:30:20.469241 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6563cac-bed3-4ce6-a7d3-d6ed8e53a195" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.469250 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6563cac-bed3-4ce6-a7d3-d6ed8e53a195" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.469426 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="701e4529-3817-4c3d-a6f6-f9ffc4fccf7a" containerName="collect-profiles" Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.469449 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6563cac-bed3-4ce6-a7d3-d6ed8e53a195" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 
22:30:20.469458 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf549ee0-b6a6-4060-8dfe-2477749974d8" containerName="oc" Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.470233 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-r4srj" Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.473738 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ktmgl" Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.476359 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.476686 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.477475 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.479335 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-r4srj"] Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.607091 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvjpr\" (UniqueName: \"kubernetes.io/projected/209a88c5-ba73-4b03-a77e-404c240f33ac-kube-api-access-kvjpr\") pod \"ssh-known-hosts-edpm-deployment-r4srj\" (UID: \"209a88c5-ba73-4b03-a77e-404c240f33ac\") " pod="openstack/ssh-known-hosts-edpm-deployment-r4srj" Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.607149 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/209a88c5-ba73-4b03-a77e-404c240f33ac-ssh-key-openstack-edpm-ipam\") pod 
\"ssh-known-hosts-edpm-deployment-r4srj\" (UID: \"209a88c5-ba73-4b03-a77e-404c240f33ac\") " pod="openstack/ssh-known-hosts-edpm-deployment-r4srj" Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.607207 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/209a88c5-ba73-4b03-a77e-404c240f33ac-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-r4srj\" (UID: \"209a88c5-ba73-4b03-a77e-404c240f33ac\") " pod="openstack/ssh-known-hosts-edpm-deployment-r4srj" Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.709391 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvjpr\" (UniqueName: \"kubernetes.io/projected/209a88c5-ba73-4b03-a77e-404c240f33ac-kube-api-access-kvjpr\") pod \"ssh-known-hosts-edpm-deployment-r4srj\" (UID: \"209a88c5-ba73-4b03-a77e-404c240f33ac\") " pod="openstack/ssh-known-hosts-edpm-deployment-r4srj" Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.709511 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/209a88c5-ba73-4b03-a77e-404c240f33ac-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-r4srj\" (UID: \"209a88c5-ba73-4b03-a77e-404c240f33ac\") " pod="openstack/ssh-known-hosts-edpm-deployment-r4srj" Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.709588 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/209a88c5-ba73-4b03-a77e-404c240f33ac-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-r4srj\" (UID: \"209a88c5-ba73-4b03-a77e-404c240f33ac\") " pod="openstack/ssh-known-hosts-edpm-deployment-r4srj" Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.714206 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: 
\"kubernetes.io/secret/209a88c5-ba73-4b03-a77e-404c240f33ac-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-r4srj\" (UID: \"209a88c5-ba73-4b03-a77e-404c240f33ac\") " pod="openstack/ssh-known-hosts-edpm-deployment-r4srj" Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.714604 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/209a88c5-ba73-4b03-a77e-404c240f33ac-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-r4srj\" (UID: \"209a88c5-ba73-4b03-a77e-404c240f33ac\") " pod="openstack/ssh-known-hosts-edpm-deployment-r4srj" Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.731645 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvjpr\" (UniqueName: \"kubernetes.io/projected/209a88c5-ba73-4b03-a77e-404c240f33ac-kube-api-access-kvjpr\") pod \"ssh-known-hosts-edpm-deployment-r4srj\" (UID: \"209a88c5-ba73-4b03-a77e-404c240f33ac\") " pod="openstack/ssh-known-hosts-edpm-deployment-r4srj" Feb 26 22:30:20 crc kubenswrapper[4910]: I0226 22:30:20.803082 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-r4srj" Feb 26 22:30:21 crc kubenswrapper[4910]: I0226 22:30:21.345910 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-r4srj"] Feb 26 22:30:21 crc kubenswrapper[4910]: I0226 22:30:21.370920 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-r4srj" event={"ID":"209a88c5-ba73-4b03-a77e-404c240f33ac","Type":"ContainerStarted","Data":"689529a6a9f8c48128fb326d5b57a1a3c83f912a2ac1b4920426c5c8d9ea4858"} Feb 26 22:30:21 crc kubenswrapper[4910]: I0226 22:30:21.918144 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98380662-1912-42a4-bf57-f5249871c687" path="/var/lib/kubelet/pods/98380662-1912-42a4-bf57-f5249871c687/volumes" Feb 26 22:30:22 crc kubenswrapper[4910]: I0226 22:30:22.432208 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-r4srj" event={"ID":"209a88c5-ba73-4b03-a77e-404c240f33ac","Type":"ContainerStarted","Data":"18d9c86b349179157b4e2676ed3c3f5799decaae360d1a3343e6d560de52a16a"} Feb 26 22:30:22 crc kubenswrapper[4910]: I0226 22:30:22.453660 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-r4srj" podStartSLOduration=2.026259525 podStartE2EDuration="2.453618881s" podCreationTimestamp="2026-02-26 22:30:20 +0000 UTC" firstStartedPulling="2026-02-26 22:30:21.349125985 +0000 UTC m=+2106.428616526" lastFinishedPulling="2026-02-26 22:30:21.776485311 +0000 UTC m=+2106.855975882" observedRunningTime="2026-02-26 22:30:22.447244708 +0000 UTC m=+2107.526735289" watchObservedRunningTime="2026-02-26 22:30:22.453618881 +0000 UTC m=+2107.533109422" Feb 26 22:30:29 crc kubenswrapper[4910]: I0226 22:30:29.510263 4910 generic.go:334] "Generic (PLEG): container finished" podID="209a88c5-ba73-4b03-a77e-404c240f33ac" 
containerID="18d9c86b349179157b4e2676ed3c3f5799decaae360d1a3343e6d560de52a16a" exitCode=0 Feb 26 22:30:29 crc kubenswrapper[4910]: I0226 22:30:29.510387 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-r4srj" event={"ID":"209a88c5-ba73-4b03-a77e-404c240f33ac","Type":"ContainerDied","Data":"18d9c86b349179157b4e2676ed3c3f5799decaae360d1a3343e6d560de52a16a"} Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.079264 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-r4srj" Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.205827 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/209a88c5-ba73-4b03-a77e-404c240f33ac-ssh-key-openstack-edpm-ipam\") pod \"209a88c5-ba73-4b03-a77e-404c240f33ac\" (UID: \"209a88c5-ba73-4b03-a77e-404c240f33ac\") " Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.205939 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/209a88c5-ba73-4b03-a77e-404c240f33ac-inventory-0\") pod \"209a88c5-ba73-4b03-a77e-404c240f33ac\" (UID: \"209a88c5-ba73-4b03-a77e-404c240f33ac\") " Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.206130 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvjpr\" (UniqueName: \"kubernetes.io/projected/209a88c5-ba73-4b03-a77e-404c240f33ac-kube-api-access-kvjpr\") pod \"209a88c5-ba73-4b03-a77e-404c240f33ac\" (UID: \"209a88c5-ba73-4b03-a77e-404c240f33ac\") " Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.212458 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/209a88c5-ba73-4b03-a77e-404c240f33ac-kube-api-access-kvjpr" (OuterVolumeSpecName: "kube-api-access-kvjpr") pod 
"209a88c5-ba73-4b03-a77e-404c240f33ac" (UID: "209a88c5-ba73-4b03-a77e-404c240f33ac"). InnerVolumeSpecName "kube-api-access-kvjpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.259292 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/209a88c5-ba73-4b03-a77e-404c240f33ac-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "209a88c5-ba73-4b03-a77e-404c240f33ac" (UID: "209a88c5-ba73-4b03-a77e-404c240f33ac"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.293326 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/209a88c5-ba73-4b03-a77e-404c240f33ac-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "209a88c5-ba73-4b03-a77e-404c240f33ac" (UID: "209a88c5-ba73-4b03-a77e-404c240f33ac"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.311762 4910 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/209a88c5-ba73-4b03-a77e-404c240f33ac-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.311802 4910 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/209a88c5-ba73-4b03-a77e-404c240f33ac-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.311811 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvjpr\" (UniqueName: \"kubernetes.io/projected/209a88c5-ba73-4b03-a77e-404c240f33ac-kube-api-access-kvjpr\") on node \"crc\" DevicePath \"\"" Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.532667 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-r4srj" event={"ID":"209a88c5-ba73-4b03-a77e-404c240f33ac","Type":"ContainerDied","Data":"689529a6a9f8c48128fb326d5b57a1a3c83f912a2ac1b4920426c5c8d9ea4858"} Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.532705 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="689529a6a9f8c48128fb326d5b57a1a3c83f912a2ac1b4920426c5c8d9ea4858" Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.532752 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-r4srj" Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.622703 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5klzw"] Feb 26 22:30:31 crc kubenswrapper[4910]: E0226 22:30:31.627221 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="209a88c5-ba73-4b03-a77e-404c240f33ac" containerName="ssh-known-hosts-edpm-deployment" Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.627257 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="209a88c5-ba73-4b03-a77e-404c240f33ac" containerName="ssh-known-hosts-edpm-deployment" Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.627601 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="209a88c5-ba73-4b03-a77e-404c240f33ac" containerName="ssh-known-hosts-edpm-deployment" Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.628531 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5klzw" Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.637804 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5klzw"] Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.667839 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.668077 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.668280 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.668431 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ktmgl" Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.824466 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw5nz\" (UniqueName: \"kubernetes.io/projected/29e090ee-08ec-4056-9ae1-6be8b692a15f-kube-api-access-fw5nz\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5klzw\" (UID: \"29e090ee-08ec-4056-9ae1-6be8b692a15f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5klzw" Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.825005 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29e090ee-08ec-4056-9ae1-6be8b692a15f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5klzw\" (UID: \"29e090ee-08ec-4056-9ae1-6be8b692a15f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5klzw" Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.825221 4910 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29e090ee-08ec-4056-9ae1-6be8b692a15f-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5klzw\" (UID: \"29e090ee-08ec-4056-9ae1-6be8b692a15f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5klzw" Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.927694 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw5nz\" (UniqueName: \"kubernetes.io/projected/29e090ee-08ec-4056-9ae1-6be8b692a15f-kube-api-access-fw5nz\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5klzw\" (UID: \"29e090ee-08ec-4056-9ae1-6be8b692a15f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5klzw" Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.927817 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29e090ee-08ec-4056-9ae1-6be8b692a15f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5klzw\" (UID: \"29e090ee-08ec-4056-9ae1-6be8b692a15f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5klzw" Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.927900 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29e090ee-08ec-4056-9ae1-6be8b692a15f-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5klzw\" (UID: \"29e090ee-08ec-4056-9ae1-6be8b692a15f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5klzw" Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.934690 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29e090ee-08ec-4056-9ae1-6be8b692a15f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5klzw\" (UID: 
\"29e090ee-08ec-4056-9ae1-6be8b692a15f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5klzw" Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.935737 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29e090ee-08ec-4056-9ae1-6be8b692a15f-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5klzw\" (UID: \"29e090ee-08ec-4056-9ae1-6be8b692a15f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5klzw" Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.948618 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw5nz\" (UniqueName: \"kubernetes.io/projected/29e090ee-08ec-4056-9ae1-6be8b692a15f-kube-api-access-fw5nz\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5klzw\" (UID: \"29e090ee-08ec-4056-9ae1-6be8b692a15f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5klzw" Feb 26 22:30:31 crc kubenswrapper[4910]: I0226 22:30:31.999112 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5klzw" Feb 26 22:30:32 crc kubenswrapper[4910]: I0226 22:30:32.587148 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5klzw"] Feb 26 22:30:33 crc kubenswrapper[4910]: I0226 22:30:33.564661 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5klzw" event={"ID":"29e090ee-08ec-4056-9ae1-6be8b692a15f","Type":"ContainerStarted","Data":"8439d873ade074719f6b01dc9dc4a3e1937a6ea6d98bf86d8164e052e25d609c"} Feb 26 22:30:33 crc kubenswrapper[4910]: I0226 22:30:33.564977 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5klzw" event={"ID":"29e090ee-08ec-4056-9ae1-6be8b692a15f","Type":"ContainerStarted","Data":"641c7ba5a63754f69d13f1a5db066f8b3cffff52283d21e9a450c0d8ebf0abf6"} Feb 26 22:30:33 crc kubenswrapper[4910]: I0226 22:30:33.599779 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5klzw" podStartSLOduration=2.157649156 podStartE2EDuration="2.599759533s" podCreationTimestamp="2026-02-26 22:30:31 +0000 UTC" firstStartedPulling="2026-02-26 22:30:32.589191973 +0000 UTC m=+2117.668682514" lastFinishedPulling="2026-02-26 22:30:33.03130234 +0000 UTC m=+2118.110792891" observedRunningTime="2026-02-26 22:30:33.58753309 +0000 UTC m=+2118.667023651" watchObservedRunningTime="2026-02-26 22:30:33.599759533 +0000 UTC m=+2118.679250084" Feb 26 22:30:34 crc kubenswrapper[4910]: I0226 22:30:34.340139 4910 scope.go:117] "RemoveContainer" containerID="9294495cf28254d4ff512a0e6d5b15d7b0824e374f5984dd0a2819016cf443ab" Feb 26 22:30:34 crc kubenswrapper[4910]: I0226 22:30:34.407656 4910 scope.go:117] "RemoveContainer" containerID="0f7481aa2295319bdc4e7f4799721d0afb5711dc6afdd102e63ed4f6e9c0471b" Feb 26 22:30:34 crc kubenswrapper[4910]: I0226 
22:30:34.516522 4910 scope.go:117] "RemoveContainer" containerID="aa2749381068e81de07ab37b39cc64029eea43a92baf9050bae50aa981b7eb1c" Feb 26 22:30:34 crc kubenswrapper[4910]: I0226 22:30:34.550109 4910 scope.go:117] "RemoveContainer" containerID="3d97fe51018179b31bccf7312461218857324cbb63ea1d435237cd969b07b806" Feb 26 22:30:34 crc kubenswrapper[4910]: I0226 22:30:34.627870 4910 scope.go:117] "RemoveContainer" containerID="6b76c09195c50e7f6e6e1e22e58ea9c9ffe800a09cf1651e6e1deb06c4b9843a" Feb 26 22:30:41 crc kubenswrapper[4910]: I0226 22:30:41.660202 4910 generic.go:334] "Generic (PLEG): container finished" podID="29e090ee-08ec-4056-9ae1-6be8b692a15f" containerID="8439d873ade074719f6b01dc9dc4a3e1937a6ea6d98bf86d8164e052e25d609c" exitCode=0 Feb 26 22:30:41 crc kubenswrapper[4910]: I0226 22:30:41.660700 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5klzw" event={"ID":"29e090ee-08ec-4056-9ae1-6be8b692a15f","Type":"ContainerDied","Data":"8439d873ade074719f6b01dc9dc4a3e1937a6ea6d98bf86d8164e052e25d609c"} Feb 26 22:30:43 crc kubenswrapper[4910]: I0226 22:30:43.268601 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5klzw" Feb 26 22:30:43 crc kubenswrapper[4910]: I0226 22:30:43.328170 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29e090ee-08ec-4056-9ae1-6be8b692a15f-inventory\") pod \"29e090ee-08ec-4056-9ae1-6be8b692a15f\" (UID: \"29e090ee-08ec-4056-9ae1-6be8b692a15f\") " Feb 26 22:30:43 crc kubenswrapper[4910]: I0226 22:30:43.328237 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29e090ee-08ec-4056-9ae1-6be8b692a15f-ssh-key-openstack-edpm-ipam\") pod \"29e090ee-08ec-4056-9ae1-6be8b692a15f\" (UID: \"29e090ee-08ec-4056-9ae1-6be8b692a15f\") " Feb 26 22:30:43 crc kubenswrapper[4910]: I0226 22:30:43.328353 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw5nz\" (UniqueName: \"kubernetes.io/projected/29e090ee-08ec-4056-9ae1-6be8b692a15f-kube-api-access-fw5nz\") pod \"29e090ee-08ec-4056-9ae1-6be8b692a15f\" (UID: \"29e090ee-08ec-4056-9ae1-6be8b692a15f\") " Feb 26 22:30:43 crc kubenswrapper[4910]: I0226 22:30:43.340513 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29e090ee-08ec-4056-9ae1-6be8b692a15f-kube-api-access-fw5nz" (OuterVolumeSpecName: "kube-api-access-fw5nz") pod "29e090ee-08ec-4056-9ae1-6be8b692a15f" (UID: "29e090ee-08ec-4056-9ae1-6be8b692a15f"). InnerVolumeSpecName "kube-api-access-fw5nz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:30:43 crc kubenswrapper[4910]: I0226 22:30:43.366383 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29e090ee-08ec-4056-9ae1-6be8b692a15f-inventory" (OuterVolumeSpecName: "inventory") pod "29e090ee-08ec-4056-9ae1-6be8b692a15f" (UID: "29e090ee-08ec-4056-9ae1-6be8b692a15f"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:30:43 crc kubenswrapper[4910]: I0226 22:30:43.376326 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29e090ee-08ec-4056-9ae1-6be8b692a15f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "29e090ee-08ec-4056-9ae1-6be8b692a15f" (UID: "29e090ee-08ec-4056-9ae1-6be8b692a15f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:30:43 crc kubenswrapper[4910]: I0226 22:30:43.431040 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw5nz\" (UniqueName: \"kubernetes.io/projected/29e090ee-08ec-4056-9ae1-6be8b692a15f-kube-api-access-fw5nz\") on node \"crc\" DevicePath \"\"" Feb 26 22:30:43 crc kubenswrapper[4910]: I0226 22:30:43.431080 4910 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29e090ee-08ec-4056-9ae1-6be8b692a15f-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 22:30:43 crc kubenswrapper[4910]: I0226 22:30:43.431090 4910 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29e090ee-08ec-4056-9ae1-6be8b692a15f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 22:30:43 crc kubenswrapper[4910]: I0226 22:30:43.682623 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5klzw" event={"ID":"29e090ee-08ec-4056-9ae1-6be8b692a15f","Type":"ContainerDied","Data":"641c7ba5a63754f69d13f1a5db066f8b3cffff52283d21e9a450c0d8ebf0abf6"} Feb 26 22:30:43 crc kubenswrapper[4910]: I0226 22:30:43.682677 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="641c7ba5a63754f69d13f1a5db066f8b3cffff52283d21e9a450c0d8ebf0abf6" Feb 26 22:30:43 crc kubenswrapper[4910]: I0226 
22:30:43.682753 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5klzw" Feb 26 22:30:43 crc kubenswrapper[4910]: I0226 22:30:43.750212 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2"] Feb 26 22:30:43 crc kubenswrapper[4910]: E0226 22:30:43.751029 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29e090ee-08ec-4056-9ae1-6be8b692a15f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 26 22:30:43 crc kubenswrapper[4910]: I0226 22:30:43.751096 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="29e090ee-08ec-4056-9ae1-6be8b692a15f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 26 22:30:43 crc kubenswrapper[4910]: I0226 22:30:43.751648 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="29e090ee-08ec-4056-9ae1-6be8b692a15f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 26 22:30:43 crc kubenswrapper[4910]: I0226 22:30:43.753331 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2" Feb 26 22:30:43 crc kubenswrapper[4910]: I0226 22:30:43.758494 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 22:30:43 crc kubenswrapper[4910]: I0226 22:30:43.758687 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 22:30:43 crc kubenswrapper[4910]: I0226 22:30:43.758902 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 22:30:43 crc kubenswrapper[4910]: I0226 22:30:43.758927 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ktmgl" Feb 26 22:30:43 crc kubenswrapper[4910]: I0226 22:30:43.761931 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2"] Feb 26 22:30:43 crc kubenswrapper[4910]: I0226 22:30:43.941101 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7xrg\" (UniqueName: \"kubernetes.io/projected/70ef3159-37b2-41f3-bfa6-90a6dcbfd17c-kube-api-access-x7xrg\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2\" (UID: \"70ef3159-37b2-41f3-bfa6-90a6dcbfd17c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2" Feb 26 22:30:43 crc kubenswrapper[4910]: I0226 22:30:43.941551 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70ef3159-37b2-41f3-bfa6-90a6dcbfd17c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2\" (UID: \"70ef3159-37b2-41f3-bfa6-90a6dcbfd17c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2" Feb 26 22:30:43 crc kubenswrapper[4910]: I0226 22:30:43.941664 4910 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70ef3159-37b2-41f3-bfa6-90a6dcbfd17c-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2\" (UID: \"70ef3159-37b2-41f3-bfa6-90a6dcbfd17c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2" Feb 26 22:30:44 crc kubenswrapper[4910]: I0226 22:30:44.045218 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7xrg\" (UniqueName: \"kubernetes.io/projected/70ef3159-37b2-41f3-bfa6-90a6dcbfd17c-kube-api-access-x7xrg\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2\" (UID: \"70ef3159-37b2-41f3-bfa6-90a6dcbfd17c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2" Feb 26 22:30:44 crc kubenswrapper[4910]: I0226 22:30:44.045288 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70ef3159-37b2-41f3-bfa6-90a6dcbfd17c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2\" (UID: \"70ef3159-37b2-41f3-bfa6-90a6dcbfd17c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2" Feb 26 22:30:44 crc kubenswrapper[4910]: I0226 22:30:44.046079 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70ef3159-37b2-41f3-bfa6-90a6dcbfd17c-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2\" (UID: \"70ef3159-37b2-41f3-bfa6-90a6dcbfd17c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2" Feb 26 22:30:44 crc kubenswrapper[4910]: I0226 22:30:44.051301 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/70ef3159-37b2-41f3-bfa6-90a6dcbfd17c-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2\" (UID: \"70ef3159-37b2-41f3-bfa6-90a6dcbfd17c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2" Feb 26 22:30:44 crc kubenswrapper[4910]: I0226 22:30:44.052057 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70ef3159-37b2-41f3-bfa6-90a6dcbfd17c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2\" (UID: \"70ef3159-37b2-41f3-bfa6-90a6dcbfd17c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2" Feb 26 22:30:44 crc kubenswrapper[4910]: I0226 22:30:44.073627 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7xrg\" (UniqueName: \"kubernetes.io/projected/70ef3159-37b2-41f3-bfa6-90a6dcbfd17c-kube-api-access-x7xrg\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2\" (UID: \"70ef3159-37b2-41f3-bfa6-90a6dcbfd17c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2" Feb 26 22:30:44 crc kubenswrapper[4910]: I0226 22:30:44.091457 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2" Feb 26 22:30:45 crc kubenswrapper[4910]: I0226 22:30:44.720758 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2"] Feb 26 22:30:45 crc kubenswrapper[4910]: I0226 22:30:44.916496 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7kpt6"] Feb 26 22:30:45 crc kubenswrapper[4910]: I0226 22:30:44.921421 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7kpt6" Feb 26 22:30:45 crc kubenswrapper[4910]: I0226 22:30:44.935358 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7kpt6"] Feb 26 22:30:45 crc kubenswrapper[4910]: I0226 22:30:45.065939 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/559366ce-d49c-4eb4-b3ce-d0968ab4f2ee-catalog-content\") pod \"redhat-operators-7kpt6\" (UID: \"559366ce-d49c-4eb4-b3ce-d0968ab4f2ee\") " pod="openshift-marketplace/redhat-operators-7kpt6" Feb 26 22:30:45 crc kubenswrapper[4910]: I0226 22:30:45.066406 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/559366ce-d49c-4eb4-b3ce-d0968ab4f2ee-utilities\") pod \"redhat-operators-7kpt6\" (UID: \"559366ce-d49c-4eb4-b3ce-d0968ab4f2ee\") " pod="openshift-marketplace/redhat-operators-7kpt6" Feb 26 22:30:45 crc kubenswrapper[4910]: I0226 22:30:45.067272 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrnff\" (UniqueName: \"kubernetes.io/projected/559366ce-d49c-4eb4-b3ce-d0968ab4f2ee-kube-api-access-jrnff\") pod \"redhat-operators-7kpt6\" (UID: \"559366ce-d49c-4eb4-b3ce-d0968ab4f2ee\") " pod="openshift-marketplace/redhat-operators-7kpt6" Feb 26 22:30:45 crc kubenswrapper[4910]: I0226 22:30:45.170145 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrnff\" (UniqueName: \"kubernetes.io/projected/559366ce-d49c-4eb4-b3ce-d0968ab4f2ee-kube-api-access-jrnff\") pod \"redhat-operators-7kpt6\" (UID: \"559366ce-d49c-4eb4-b3ce-d0968ab4f2ee\") " pod="openshift-marketplace/redhat-operators-7kpt6" Feb 26 22:30:45 crc kubenswrapper[4910]: I0226 22:30:45.170246 4910 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/559366ce-d49c-4eb4-b3ce-d0968ab4f2ee-catalog-content\") pod \"redhat-operators-7kpt6\" (UID: \"559366ce-d49c-4eb4-b3ce-d0968ab4f2ee\") " pod="openshift-marketplace/redhat-operators-7kpt6" Feb 26 22:30:45 crc kubenswrapper[4910]: I0226 22:30:45.170386 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/559366ce-d49c-4eb4-b3ce-d0968ab4f2ee-utilities\") pod \"redhat-operators-7kpt6\" (UID: \"559366ce-d49c-4eb4-b3ce-d0968ab4f2ee\") " pod="openshift-marketplace/redhat-operators-7kpt6" Feb 26 22:30:45 crc kubenswrapper[4910]: I0226 22:30:45.171138 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/559366ce-d49c-4eb4-b3ce-d0968ab4f2ee-utilities\") pod \"redhat-operators-7kpt6\" (UID: \"559366ce-d49c-4eb4-b3ce-d0968ab4f2ee\") " pod="openshift-marketplace/redhat-operators-7kpt6" Feb 26 22:30:45 crc kubenswrapper[4910]: I0226 22:30:45.171738 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/559366ce-d49c-4eb4-b3ce-d0968ab4f2ee-catalog-content\") pod \"redhat-operators-7kpt6\" (UID: \"559366ce-d49c-4eb4-b3ce-d0968ab4f2ee\") " pod="openshift-marketplace/redhat-operators-7kpt6" Feb 26 22:30:45 crc kubenswrapper[4910]: I0226 22:30:45.193700 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrnff\" (UniqueName: \"kubernetes.io/projected/559366ce-d49c-4eb4-b3ce-d0968ab4f2ee-kube-api-access-jrnff\") pod \"redhat-operators-7kpt6\" (UID: \"559366ce-d49c-4eb4-b3ce-d0968ab4f2ee\") " pod="openshift-marketplace/redhat-operators-7kpt6" Feb 26 22:30:45 crc kubenswrapper[4910]: I0226 22:30:45.237801 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7kpt6" Feb 26 22:30:45 crc kubenswrapper[4910]: I0226 22:30:45.704808 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2" event={"ID":"70ef3159-37b2-41f3-bfa6-90a6dcbfd17c","Type":"ContainerStarted","Data":"c99566f54f078f5ba5c563400a154a77c807db2ed5ef3ee5ee2b21626a44cf84"} Feb 26 22:30:45 crc kubenswrapper[4910]: I0226 22:30:45.705095 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2" event={"ID":"70ef3159-37b2-41f3-bfa6-90a6dcbfd17c","Type":"ContainerStarted","Data":"bf40feb51c62ccf58f8913ebf29288b704e27eef5ee8d7672fd83736fdba060a"} Feb 26 22:30:45 crc kubenswrapper[4910]: I0226 22:30:45.769796 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2" podStartSLOduration=2.355480757 podStartE2EDuration="2.769778077s" podCreationTimestamp="2026-02-26 22:30:43 +0000 UTC" firstStartedPulling="2026-02-26 22:30:44.72181882 +0000 UTC m=+2129.801309371" lastFinishedPulling="2026-02-26 22:30:45.13611611 +0000 UTC m=+2130.215606691" observedRunningTime="2026-02-26 22:30:45.72689262 +0000 UTC m=+2130.806383161" watchObservedRunningTime="2026-02-26 22:30:45.769778077 +0000 UTC m=+2130.849268618" Feb 26 22:30:45 crc kubenswrapper[4910]: W0226 22:30:45.775783 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod559366ce_d49c_4eb4_b3ce_d0968ab4f2ee.slice/crio-124571ea0ea73c9ae54e64785c764fa47cfa1e83de0db60be7550f56ee9d0e40 WatchSource:0}: Error finding container 124571ea0ea73c9ae54e64785c764fa47cfa1e83de0db60be7550f56ee9d0e40: Status 404 returned error can't find the container with id 124571ea0ea73c9ae54e64785c764fa47cfa1e83de0db60be7550f56ee9d0e40 Feb 26 22:30:45 crc kubenswrapper[4910]: I0226 22:30:45.784275 
4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7kpt6"] Feb 26 22:30:46 crc kubenswrapper[4910]: I0226 22:30:46.718984 4910 generic.go:334] "Generic (PLEG): container finished" podID="559366ce-d49c-4eb4-b3ce-d0968ab4f2ee" containerID="2583d96d0f0a8bb5591b2e5be6e251ea9c580301203470609e5cf7259ca952fe" exitCode=0 Feb 26 22:30:46 crc kubenswrapper[4910]: I0226 22:30:46.719369 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kpt6" event={"ID":"559366ce-d49c-4eb4-b3ce-d0968ab4f2ee","Type":"ContainerDied","Data":"2583d96d0f0a8bb5591b2e5be6e251ea9c580301203470609e5cf7259ca952fe"} Feb 26 22:30:46 crc kubenswrapper[4910]: I0226 22:30:46.719675 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kpt6" event={"ID":"559366ce-d49c-4eb4-b3ce-d0968ab4f2ee","Type":"ContainerStarted","Data":"124571ea0ea73c9ae54e64785c764fa47cfa1e83de0db60be7550f56ee9d0e40"} Feb 26 22:30:47 crc kubenswrapper[4910]: I0226 22:30:47.730610 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kpt6" event={"ID":"559366ce-d49c-4eb4-b3ce-d0968ab4f2ee","Type":"ContainerStarted","Data":"8f2b9b12ad91670e95f5090f2f27c46cce524be1ec91873988cac024d4b70096"} Feb 26 22:30:48 crc kubenswrapper[4910]: I0226 22:30:48.305713 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2n9kt"] Feb 26 22:30:48 crc kubenswrapper[4910]: I0226 22:30:48.311285 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2n9kt" Feb 26 22:30:48 crc kubenswrapper[4910]: I0226 22:30:48.333675 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2n9kt"] Feb 26 22:30:48 crc kubenswrapper[4910]: I0226 22:30:48.499272 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/653e622c-3e49-4712-b9f1-b1b099bf328e-catalog-content\") pod \"community-operators-2n9kt\" (UID: \"653e622c-3e49-4712-b9f1-b1b099bf328e\") " pod="openshift-marketplace/community-operators-2n9kt" Feb 26 22:30:48 crc kubenswrapper[4910]: I0226 22:30:48.499402 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzb84\" (UniqueName: \"kubernetes.io/projected/653e622c-3e49-4712-b9f1-b1b099bf328e-kube-api-access-qzb84\") pod \"community-operators-2n9kt\" (UID: \"653e622c-3e49-4712-b9f1-b1b099bf328e\") " pod="openshift-marketplace/community-operators-2n9kt" Feb 26 22:30:48 crc kubenswrapper[4910]: I0226 22:30:48.499429 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/653e622c-3e49-4712-b9f1-b1b099bf328e-utilities\") pod \"community-operators-2n9kt\" (UID: \"653e622c-3e49-4712-b9f1-b1b099bf328e\") " pod="openshift-marketplace/community-operators-2n9kt" Feb 26 22:30:48 crc kubenswrapper[4910]: I0226 22:30:48.601933 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/653e622c-3e49-4712-b9f1-b1b099bf328e-catalog-content\") pod \"community-operators-2n9kt\" (UID: \"653e622c-3e49-4712-b9f1-b1b099bf328e\") " pod="openshift-marketplace/community-operators-2n9kt" Feb 26 22:30:48 crc kubenswrapper[4910]: I0226 22:30:48.602099 4910 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qzb84\" (UniqueName: \"kubernetes.io/projected/653e622c-3e49-4712-b9f1-b1b099bf328e-kube-api-access-qzb84\") pod \"community-operators-2n9kt\" (UID: \"653e622c-3e49-4712-b9f1-b1b099bf328e\") " pod="openshift-marketplace/community-operators-2n9kt" Feb 26 22:30:48 crc kubenswrapper[4910]: I0226 22:30:48.602141 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/653e622c-3e49-4712-b9f1-b1b099bf328e-utilities\") pod \"community-operators-2n9kt\" (UID: \"653e622c-3e49-4712-b9f1-b1b099bf328e\") " pod="openshift-marketplace/community-operators-2n9kt" Feb 26 22:30:48 crc kubenswrapper[4910]: I0226 22:30:48.602495 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/653e622c-3e49-4712-b9f1-b1b099bf328e-catalog-content\") pod \"community-operators-2n9kt\" (UID: \"653e622c-3e49-4712-b9f1-b1b099bf328e\") " pod="openshift-marketplace/community-operators-2n9kt" Feb 26 22:30:48 crc kubenswrapper[4910]: I0226 22:30:48.602558 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/653e622c-3e49-4712-b9f1-b1b099bf328e-utilities\") pod \"community-operators-2n9kt\" (UID: \"653e622c-3e49-4712-b9f1-b1b099bf328e\") " pod="openshift-marketplace/community-operators-2n9kt" Feb 26 22:30:48 crc kubenswrapper[4910]: I0226 22:30:48.624295 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzb84\" (UniqueName: \"kubernetes.io/projected/653e622c-3e49-4712-b9f1-b1b099bf328e-kube-api-access-qzb84\") pod \"community-operators-2n9kt\" (UID: \"653e622c-3e49-4712-b9f1-b1b099bf328e\") " pod="openshift-marketplace/community-operators-2n9kt" Feb 26 22:30:48 crc kubenswrapper[4910]: I0226 22:30:48.636072 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2n9kt" Feb 26 22:30:49 crc kubenswrapper[4910]: W0226 22:30:49.146204 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod653e622c_3e49_4712_b9f1_b1b099bf328e.slice/crio-d98bcbd0e72df8b89e4c5cb25c813bc76db72c6f33471af2f7f6d82a0a7a5970 WatchSource:0}: Error finding container d98bcbd0e72df8b89e4c5cb25c813bc76db72c6f33471af2f7f6d82a0a7a5970: Status 404 returned error can't find the container with id d98bcbd0e72df8b89e4c5cb25c813bc76db72c6f33471af2f7f6d82a0a7a5970 Feb 26 22:30:49 crc kubenswrapper[4910]: I0226 22:30:49.150281 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2n9kt"] Feb 26 22:30:49 crc kubenswrapper[4910]: I0226 22:30:49.760437 4910 generic.go:334] "Generic (PLEG): container finished" podID="653e622c-3e49-4712-b9f1-b1b099bf328e" containerID="9c69c22d6f45c931a68fbc7554a70c48bffabbe3cb146b9bbb6e74f3a2d5cbf5" exitCode=0 Feb 26 22:30:49 crc kubenswrapper[4910]: I0226 22:30:49.760541 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2n9kt" event={"ID":"653e622c-3e49-4712-b9f1-b1b099bf328e","Type":"ContainerDied","Data":"9c69c22d6f45c931a68fbc7554a70c48bffabbe3cb146b9bbb6e74f3a2d5cbf5"} Feb 26 22:30:49 crc kubenswrapper[4910]: I0226 22:30:49.760932 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2n9kt" event={"ID":"653e622c-3e49-4712-b9f1-b1b099bf328e","Type":"ContainerStarted","Data":"d98bcbd0e72df8b89e4c5cb25c813bc76db72c6f33471af2f7f6d82a0a7a5970"} Feb 26 22:30:51 crc kubenswrapper[4910]: I0226 22:30:51.785148 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2n9kt" 
event={"ID":"653e622c-3e49-4712-b9f1-b1b099bf328e","Type":"ContainerStarted","Data":"452091ae37a4d4cd08d0851512f60a87f027a747d257152861b33db36bf74616"} Feb 26 22:30:54 crc kubenswrapper[4910]: I0226 22:30:54.821574 4910 generic.go:334] "Generic (PLEG): container finished" podID="653e622c-3e49-4712-b9f1-b1b099bf328e" containerID="452091ae37a4d4cd08d0851512f60a87f027a747d257152861b33db36bf74616" exitCode=0 Feb 26 22:30:54 crc kubenswrapper[4910]: I0226 22:30:54.821670 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2n9kt" event={"ID":"653e622c-3e49-4712-b9f1-b1b099bf328e","Type":"ContainerDied","Data":"452091ae37a4d4cd08d0851512f60a87f027a747d257152861b33db36bf74616"} Feb 26 22:30:55 crc kubenswrapper[4910]: I0226 22:30:55.832110 4910 generic.go:334] "Generic (PLEG): container finished" podID="70ef3159-37b2-41f3-bfa6-90a6dcbfd17c" containerID="c99566f54f078f5ba5c563400a154a77c807db2ed5ef3ee5ee2b21626a44cf84" exitCode=0 Feb 26 22:30:55 crc kubenswrapper[4910]: I0226 22:30:55.832213 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2" event={"ID":"70ef3159-37b2-41f3-bfa6-90a6dcbfd17c","Type":"ContainerDied","Data":"c99566f54f078f5ba5c563400a154a77c807db2ed5ef3ee5ee2b21626a44cf84"} Feb 26 22:30:55 crc kubenswrapper[4910]: I0226 22:30:55.834857 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2n9kt" event={"ID":"653e622c-3e49-4712-b9f1-b1b099bf328e","Type":"ContainerStarted","Data":"93a9956675c62c7424112b5e5306cc171d886bb186388ea8c95bcb6f129ab13b"} Feb 26 22:30:55 crc kubenswrapper[4910]: I0226 22:30:55.836635 4910 generic.go:334] "Generic (PLEG): container finished" podID="559366ce-d49c-4eb4-b3ce-d0968ab4f2ee" containerID="8f2b9b12ad91670e95f5090f2f27c46cce524be1ec91873988cac024d4b70096" exitCode=0 Feb 26 22:30:55 crc kubenswrapper[4910]: I0226 22:30:55.836670 4910 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kpt6" event={"ID":"559366ce-d49c-4eb4-b3ce-d0968ab4f2ee","Type":"ContainerDied","Data":"8f2b9b12ad91670e95f5090f2f27c46cce524be1ec91873988cac024d4b70096"} Feb 26 22:30:55 crc kubenswrapper[4910]: I0226 22:30:55.907378 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2n9kt" podStartSLOduration=2.464201975 podStartE2EDuration="7.907351013s" podCreationTimestamp="2026-02-26 22:30:48 +0000 UTC" firstStartedPulling="2026-02-26 22:30:49.763824064 +0000 UTC m=+2134.843314615" lastFinishedPulling="2026-02-26 22:30:55.206973112 +0000 UTC m=+2140.286463653" observedRunningTime="2026-02-26 22:30:55.893824375 +0000 UTC m=+2140.973314936" watchObservedRunningTime="2026-02-26 22:30:55.907351013 +0000 UTC m=+2140.986841564" Feb 26 22:30:56 crc kubenswrapper[4910]: I0226 22:30:56.850512 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kpt6" event={"ID":"559366ce-d49c-4eb4-b3ce-d0968ab4f2ee","Type":"ContainerStarted","Data":"cd0737e368ccf8f6a5d186b4bf44e7226b2ad4d8414e9927fe702568aadac08d"} Feb 26 22:30:56 crc kubenswrapper[4910]: I0226 22:30:56.870442 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7kpt6" podStartSLOduration=3.26644394 podStartE2EDuration="12.870419004s" podCreationTimestamp="2026-02-26 22:30:44 +0000 UTC" firstStartedPulling="2026-02-26 22:30:46.721898287 +0000 UTC m=+2131.801388848" lastFinishedPulling="2026-02-26 22:30:56.325873371 +0000 UTC m=+2141.405363912" observedRunningTime="2026-02-26 22:30:56.867182196 +0000 UTC m=+2141.946672747" watchObservedRunningTime="2026-02-26 22:30:56.870419004 +0000 UTC m=+2141.949909535" Feb 26 22:30:57 crc kubenswrapper[4910]: I0226 22:30:57.395562 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2" Feb 26 22:30:57 crc kubenswrapper[4910]: I0226 22:30:57.509199 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70ef3159-37b2-41f3-bfa6-90a6dcbfd17c-ssh-key-openstack-edpm-ipam\") pod \"70ef3159-37b2-41f3-bfa6-90a6dcbfd17c\" (UID: \"70ef3159-37b2-41f3-bfa6-90a6dcbfd17c\") " Feb 26 22:30:57 crc kubenswrapper[4910]: I0226 22:30:57.509765 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70ef3159-37b2-41f3-bfa6-90a6dcbfd17c-inventory\") pod \"70ef3159-37b2-41f3-bfa6-90a6dcbfd17c\" (UID: \"70ef3159-37b2-41f3-bfa6-90a6dcbfd17c\") " Feb 26 22:30:57 crc kubenswrapper[4910]: I0226 22:30:57.510193 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7xrg\" (UniqueName: \"kubernetes.io/projected/70ef3159-37b2-41f3-bfa6-90a6dcbfd17c-kube-api-access-x7xrg\") pod \"70ef3159-37b2-41f3-bfa6-90a6dcbfd17c\" (UID: \"70ef3159-37b2-41f3-bfa6-90a6dcbfd17c\") " Feb 26 22:30:57 crc kubenswrapper[4910]: I0226 22:30:57.523422 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70ef3159-37b2-41f3-bfa6-90a6dcbfd17c-kube-api-access-x7xrg" (OuterVolumeSpecName: "kube-api-access-x7xrg") pod "70ef3159-37b2-41f3-bfa6-90a6dcbfd17c" (UID: "70ef3159-37b2-41f3-bfa6-90a6dcbfd17c"). InnerVolumeSpecName "kube-api-access-x7xrg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:30:57 crc kubenswrapper[4910]: I0226 22:30:57.543940 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70ef3159-37b2-41f3-bfa6-90a6dcbfd17c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "70ef3159-37b2-41f3-bfa6-90a6dcbfd17c" (UID: "70ef3159-37b2-41f3-bfa6-90a6dcbfd17c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:30:57 crc kubenswrapper[4910]: I0226 22:30:57.611789 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70ef3159-37b2-41f3-bfa6-90a6dcbfd17c-inventory" (OuterVolumeSpecName: "inventory") pod "70ef3159-37b2-41f3-bfa6-90a6dcbfd17c" (UID: "70ef3159-37b2-41f3-bfa6-90a6dcbfd17c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:30:57 crc kubenswrapper[4910]: I0226 22:30:57.612001 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70ef3159-37b2-41f3-bfa6-90a6dcbfd17c-inventory\") pod \"70ef3159-37b2-41f3-bfa6-90a6dcbfd17c\" (UID: \"70ef3159-37b2-41f3-bfa6-90a6dcbfd17c\") " Feb 26 22:30:57 crc kubenswrapper[4910]: W0226 22:30:57.612211 4910 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/70ef3159-37b2-41f3-bfa6-90a6dcbfd17c/volumes/kubernetes.io~secret/inventory Feb 26 22:30:57 crc kubenswrapper[4910]: I0226 22:30:57.612234 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70ef3159-37b2-41f3-bfa6-90a6dcbfd17c-inventory" (OuterVolumeSpecName: "inventory") pod "70ef3159-37b2-41f3-bfa6-90a6dcbfd17c" (UID: "70ef3159-37b2-41f3-bfa6-90a6dcbfd17c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:30:57 crc kubenswrapper[4910]: I0226 22:30:57.612513 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7xrg\" (UniqueName: \"kubernetes.io/projected/70ef3159-37b2-41f3-bfa6-90a6dcbfd17c-kube-api-access-x7xrg\") on node \"crc\" DevicePath \"\"" Feb 26 22:30:57 crc kubenswrapper[4910]: I0226 22:30:57.612533 4910 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70ef3159-37b2-41f3-bfa6-90a6dcbfd17c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 22:30:57 crc kubenswrapper[4910]: I0226 22:30:57.612544 4910 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70ef3159-37b2-41f3-bfa6-90a6dcbfd17c-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 22:30:57 crc kubenswrapper[4910]: I0226 22:30:57.864598 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2" event={"ID":"70ef3159-37b2-41f3-bfa6-90a6dcbfd17c","Type":"ContainerDied","Data":"bf40feb51c62ccf58f8913ebf29288b704e27eef5ee8d7672fd83736fdba060a"} Feb 26 22:30:57 crc kubenswrapper[4910]: I0226 22:30:57.864997 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf40feb51c62ccf58f8913ebf29288b704e27eef5ee8d7672fd83736fdba060a" Feb 26 22:30:57 crc kubenswrapper[4910]: I0226 22:30:57.864665 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2" Feb 26 22:30:57 crc kubenswrapper[4910]: I0226 22:30:57.976195 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq"] Feb 26 22:30:57 crc kubenswrapper[4910]: E0226 22:30:57.976782 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ef3159-37b2-41f3-bfa6-90a6dcbfd17c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 26 22:30:57 crc kubenswrapper[4910]: I0226 22:30:57.976805 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ef3159-37b2-41f3-bfa6-90a6dcbfd17c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 26 22:30:57 crc kubenswrapper[4910]: I0226 22:30:57.977063 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="70ef3159-37b2-41f3-bfa6-90a6dcbfd17c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 26 22:30:57 crc kubenswrapper[4910]: I0226 22:30:57.978013 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:57 crc kubenswrapper[4910]: I0226 22:30:57.980759 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 26 22:30:57 crc kubenswrapper[4910]: I0226 22:30:57.980871 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 22:30:57 crc kubenswrapper[4910]: I0226 22:30:57.980971 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 26 22:30:57 crc kubenswrapper[4910]: I0226 22:30:57.981300 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 26 22:30:57 crc kubenswrapper[4910]: I0226 22:30:57.981485 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 22:30:57 crc kubenswrapper[4910]: I0226 22:30:57.981578 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 22:30:57 crc kubenswrapper[4910]: I0226 22:30:57.981665 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 26 22:30:57 crc kubenswrapper[4910]: I0226 22:30:57.982692 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ktmgl" Feb 26 22:30:57 crc kubenswrapper[4910]: I0226 22:30:57.994124 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq"] Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.122250 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.122505 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.122647 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.122879 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.122991 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwfjj\" (UniqueName: 
\"kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-kube-api-access-xwfjj\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.123064 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.123122 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.123218 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.123272 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.123462 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.123551 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.123655 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.123893 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.123946 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.225900 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.225943 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.225980 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.226042 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.226060 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.226091 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.226131 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.226208 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.226235 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.226258 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwfjj\" (UniqueName: \"kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-kube-api-access-xwfjj\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.226278 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: 
\"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.226299 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.226314 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.226331 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.231868 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.231894 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.232042 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.233492 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.234329 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 
22:30:58.235025 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.243862 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.248867 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.249207 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.249699 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.250421 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.254327 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.254899 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwfjj\" (UniqueName: \"kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-kube-api-access-xwfjj\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.257518 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-ssh-key-openstack-edpm-ipam\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.293206 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.636403 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2n9kt" Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.636650 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2n9kt" Feb 26 22:30:58 crc kubenswrapper[4910]: W0226 22:30:58.658733 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76c54bfb_1dab_4654_b0d2_dca09154f2b0.slice/crio-dadd2b01db09c152bbf16ede6307452ce09e7e3d082b0d7b9bdf0842028cc2d5 WatchSource:0}: Error finding container dadd2b01db09c152bbf16ede6307452ce09e7e3d082b0d7b9bdf0842028cc2d5: Status 404 returned error can't find the container with id dadd2b01db09c152bbf16ede6307452ce09e7e3d082b0d7b9bdf0842028cc2d5 Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.671998 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq"] Feb 26 22:30:58 crc kubenswrapper[4910]: I0226 22:30:58.886238 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" event={"ID":"76c54bfb-1dab-4654-b0d2-dca09154f2b0","Type":"ContainerStarted","Data":"dadd2b01db09c152bbf16ede6307452ce09e7e3d082b0d7b9bdf0842028cc2d5"} Feb 26 22:30:59 crc kubenswrapper[4910]: I0226 22:30:59.700031 4910 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/community-operators-2n9kt" podUID="653e622c-3e49-4712-b9f1-b1b099bf328e" containerName="registry-server" probeResult="failure" output=< Feb 26 22:30:59 crc kubenswrapper[4910]: timeout: failed to connect service ":50051" within 1s Feb 26 22:30:59 crc kubenswrapper[4910]: > Feb 26 22:30:59 crc kubenswrapper[4910]: I0226 22:30:59.898893 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" event={"ID":"76c54bfb-1dab-4654-b0d2-dca09154f2b0","Type":"ContainerStarted","Data":"9cb2f3acac4e4fe97f39f4fe5531910a109e5169b89471a0c2b33bafdc049892"} Feb 26 22:30:59 crc kubenswrapper[4910]: I0226 22:30:59.933385 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" podStartSLOduration=2.49845675 podStartE2EDuration="2.933362804s" podCreationTimestamp="2026-02-26 22:30:57 +0000 UTC" firstStartedPulling="2026-02-26 22:30:58.663303715 +0000 UTC m=+2143.742794256" lastFinishedPulling="2026-02-26 22:30:59.098209759 +0000 UTC m=+2144.177700310" observedRunningTime="2026-02-26 22:30:59.920650027 +0000 UTC m=+2145.000140588" watchObservedRunningTime="2026-02-26 22:30:59.933362804 +0000 UTC m=+2145.012853355" Feb 26 22:31:05 crc kubenswrapper[4910]: I0226 22:31:05.238246 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7kpt6" Feb 26 22:31:05 crc kubenswrapper[4910]: I0226 22:31:05.238734 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7kpt6" Feb 26 22:31:06 crc kubenswrapper[4910]: I0226 22:31:06.044287 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-sync-htpl5"] Feb 26 22:31:06 crc kubenswrapper[4910]: I0226 22:31:06.058666 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-sync-htpl5"] Feb 26 
22:31:06 crc kubenswrapper[4910]: I0226 22:31:06.290970 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7kpt6" podUID="559366ce-d49c-4eb4-b3ce-d0968ab4f2ee" containerName="registry-server" probeResult="failure" output=< Feb 26 22:31:06 crc kubenswrapper[4910]: timeout: failed to connect service ":50051" within 1s Feb 26 22:31:06 crc kubenswrapper[4910]: > Feb 26 22:31:07 crc kubenswrapper[4910]: I0226 22:31:07.933636 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97" path="/var/lib/kubelet/pods/5b756cd7-feb7-4491-b5a8-2b4a8ffe2d97/volumes" Feb 26 22:31:08 crc kubenswrapper[4910]: I0226 22:31:08.680578 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2n9kt" Feb 26 22:31:08 crc kubenswrapper[4910]: I0226 22:31:08.727549 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2n9kt" Feb 26 22:31:08 crc kubenswrapper[4910]: I0226 22:31:08.929008 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2n9kt"] Feb 26 22:31:10 crc kubenswrapper[4910]: I0226 22:31:10.030701 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2n9kt" podUID="653e622c-3e49-4712-b9f1-b1b099bf328e" containerName="registry-server" containerID="cri-o://93a9956675c62c7424112b5e5306cc171d886bb186388ea8c95bcb6f129ab13b" gracePeriod=2 Feb 26 22:31:10 crc kubenswrapper[4910]: I0226 22:31:10.667335 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2n9kt" Feb 26 22:31:10 crc kubenswrapper[4910]: I0226 22:31:10.783922 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/653e622c-3e49-4712-b9f1-b1b099bf328e-catalog-content\") pod \"653e622c-3e49-4712-b9f1-b1b099bf328e\" (UID: \"653e622c-3e49-4712-b9f1-b1b099bf328e\") " Feb 26 22:31:10 crc kubenswrapper[4910]: I0226 22:31:10.783969 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/653e622c-3e49-4712-b9f1-b1b099bf328e-utilities\") pod \"653e622c-3e49-4712-b9f1-b1b099bf328e\" (UID: \"653e622c-3e49-4712-b9f1-b1b099bf328e\") " Feb 26 22:31:10 crc kubenswrapper[4910]: I0226 22:31:10.784042 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzb84\" (UniqueName: \"kubernetes.io/projected/653e622c-3e49-4712-b9f1-b1b099bf328e-kube-api-access-qzb84\") pod \"653e622c-3e49-4712-b9f1-b1b099bf328e\" (UID: \"653e622c-3e49-4712-b9f1-b1b099bf328e\") " Feb 26 22:31:10 crc kubenswrapper[4910]: I0226 22:31:10.784995 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/653e622c-3e49-4712-b9f1-b1b099bf328e-utilities" (OuterVolumeSpecName: "utilities") pod "653e622c-3e49-4712-b9f1-b1b099bf328e" (UID: "653e622c-3e49-4712-b9f1-b1b099bf328e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:31:10 crc kubenswrapper[4910]: I0226 22:31:10.794364 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/653e622c-3e49-4712-b9f1-b1b099bf328e-kube-api-access-qzb84" (OuterVolumeSpecName: "kube-api-access-qzb84") pod "653e622c-3e49-4712-b9f1-b1b099bf328e" (UID: "653e622c-3e49-4712-b9f1-b1b099bf328e"). InnerVolumeSpecName "kube-api-access-qzb84". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:31:10 crc kubenswrapper[4910]: I0226 22:31:10.868333 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/653e622c-3e49-4712-b9f1-b1b099bf328e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "653e622c-3e49-4712-b9f1-b1b099bf328e" (UID: "653e622c-3e49-4712-b9f1-b1b099bf328e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:31:10 crc kubenswrapper[4910]: I0226 22:31:10.887268 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/653e622c-3e49-4712-b9f1-b1b099bf328e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 22:31:10 crc kubenswrapper[4910]: I0226 22:31:10.887299 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/653e622c-3e49-4712-b9f1-b1b099bf328e-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 22:31:10 crc kubenswrapper[4910]: I0226 22:31:10.887329 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzb84\" (UniqueName: \"kubernetes.io/projected/653e622c-3e49-4712-b9f1-b1b099bf328e-kube-api-access-qzb84\") on node \"crc\" DevicePath \"\"" Feb 26 22:31:11 crc kubenswrapper[4910]: I0226 22:31:11.043650 4910 generic.go:334] "Generic (PLEG): container finished" podID="653e622c-3e49-4712-b9f1-b1b099bf328e" containerID="93a9956675c62c7424112b5e5306cc171d886bb186388ea8c95bcb6f129ab13b" exitCode=0 Feb 26 22:31:11 crc kubenswrapper[4910]: I0226 22:31:11.043701 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2n9kt" event={"ID":"653e622c-3e49-4712-b9f1-b1b099bf328e","Type":"ContainerDied","Data":"93a9956675c62c7424112b5e5306cc171d886bb186388ea8c95bcb6f129ab13b"} Feb 26 22:31:11 crc kubenswrapper[4910]: I0226 22:31:11.043733 4910 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-2n9kt" event={"ID":"653e622c-3e49-4712-b9f1-b1b099bf328e","Type":"ContainerDied","Data":"d98bcbd0e72df8b89e4c5cb25c813bc76db72c6f33471af2f7f6d82a0a7a5970"} Feb 26 22:31:11 crc kubenswrapper[4910]: I0226 22:31:11.043751 4910 scope.go:117] "RemoveContainer" containerID="93a9956675c62c7424112b5e5306cc171d886bb186388ea8c95bcb6f129ab13b" Feb 26 22:31:11 crc kubenswrapper[4910]: I0226 22:31:11.044335 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2n9kt" Feb 26 22:31:11 crc kubenswrapper[4910]: I0226 22:31:11.084262 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2n9kt"] Feb 26 22:31:11 crc kubenswrapper[4910]: I0226 22:31:11.088953 4910 scope.go:117] "RemoveContainer" containerID="452091ae37a4d4cd08d0851512f60a87f027a747d257152861b33db36bf74616" Feb 26 22:31:11 crc kubenswrapper[4910]: I0226 22:31:11.099411 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2n9kt"] Feb 26 22:31:11 crc kubenswrapper[4910]: I0226 22:31:11.125690 4910 scope.go:117] "RemoveContainer" containerID="9c69c22d6f45c931a68fbc7554a70c48bffabbe3cb146b9bbb6e74f3a2d5cbf5" Feb 26 22:31:11 crc kubenswrapper[4910]: I0226 22:31:11.164021 4910 scope.go:117] "RemoveContainer" containerID="93a9956675c62c7424112b5e5306cc171d886bb186388ea8c95bcb6f129ab13b" Feb 26 22:31:11 crc kubenswrapper[4910]: E0226 22:31:11.164510 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93a9956675c62c7424112b5e5306cc171d886bb186388ea8c95bcb6f129ab13b\": container with ID starting with 93a9956675c62c7424112b5e5306cc171d886bb186388ea8c95bcb6f129ab13b not found: ID does not exist" containerID="93a9956675c62c7424112b5e5306cc171d886bb186388ea8c95bcb6f129ab13b" Feb 26 22:31:11 crc kubenswrapper[4910]: I0226 
22:31:11.164554 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93a9956675c62c7424112b5e5306cc171d886bb186388ea8c95bcb6f129ab13b"} err="failed to get container status \"93a9956675c62c7424112b5e5306cc171d886bb186388ea8c95bcb6f129ab13b\": rpc error: code = NotFound desc = could not find container \"93a9956675c62c7424112b5e5306cc171d886bb186388ea8c95bcb6f129ab13b\": container with ID starting with 93a9956675c62c7424112b5e5306cc171d886bb186388ea8c95bcb6f129ab13b not found: ID does not exist" Feb 26 22:31:11 crc kubenswrapper[4910]: I0226 22:31:11.164583 4910 scope.go:117] "RemoveContainer" containerID="452091ae37a4d4cd08d0851512f60a87f027a747d257152861b33db36bf74616" Feb 26 22:31:11 crc kubenswrapper[4910]: E0226 22:31:11.166355 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"452091ae37a4d4cd08d0851512f60a87f027a747d257152861b33db36bf74616\": container with ID starting with 452091ae37a4d4cd08d0851512f60a87f027a747d257152861b33db36bf74616 not found: ID does not exist" containerID="452091ae37a4d4cd08d0851512f60a87f027a747d257152861b33db36bf74616" Feb 26 22:31:11 crc kubenswrapper[4910]: I0226 22:31:11.166382 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"452091ae37a4d4cd08d0851512f60a87f027a747d257152861b33db36bf74616"} err="failed to get container status \"452091ae37a4d4cd08d0851512f60a87f027a747d257152861b33db36bf74616\": rpc error: code = NotFound desc = could not find container \"452091ae37a4d4cd08d0851512f60a87f027a747d257152861b33db36bf74616\": container with ID starting with 452091ae37a4d4cd08d0851512f60a87f027a747d257152861b33db36bf74616 not found: ID does not exist" Feb 26 22:31:11 crc kubenswrapper[4910]: I0226 22:31:11.166398 4910 scope.go:117] "RemoveContainer" containerID="9c69c22d6f45c931a68fbc7554a70c48bffabbe3cb146b9bbb6e74f3a2d5cbf5" Feb 26 22:31:11 crc 
kubenswrapper[4910]: E0226 22:31:11.166901 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c69c22d6f45c931a68fbc7554a70c48bffabbe3cb146b9bbb6e74f3a2d5cbf5\": container with ID starting with 9c69c22d6f45c931a68fbc7554a70c48bffabbe3cb146b9bbb6e74f3a2d5cbf5 not found: ID does not exist" containerID="9c69c22d6f45c931a68fbc7554a70c48bffabbe3cb146b9bbb6e74f3a2d5cbf5" Feb 26 22:31:11 crc kubenswrapper[4910]: I0226 22:31:11.166939 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c69c22d6f45c931a68fbc7554a70c48bffabbe3cb146b9bbb6e74f3a2d5cbf5"} err="failed to get container status \"9c69c22d6f45c931a68fbc7554a70c48bffabbe3cb146b9bbb6e74f3a2d5cbf5\": rpc error: code = NotFound desc = could not find container \"9c69c22d6f45c931a68fbc7554a70c48bffabbe3cb146b9bbb6e74f3a2d5cbf5\": container with ID starting with 9c69c22d6f45c931a68fbc7554a70c48bffabbe3cb146b9bbb6e74f3a2d5cbf5 not found: ID does not exist" Feb 26 22:31:11 crc kubenswrapper[4910]: I0226 22:31:11.918567 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="653e622c-3e49-4712-b9f1-b1b099bf328e" path="/var/lib/kubelet/pods/653e622c-3e49-4712-b9f1-b1b099bf328e/volumes" Feb 26 22:31:13 crc kubenswrapper[4910]: I0226 22:31:13.029599 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-storageinit-8sxfz"] Feb 26 22:31:13 crc kubenswrapper[4910]: I0226 22:31:13.047760 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-storageinit-8sxfz"] Feb 26 22:31:13 crc kubenswrapper[4910]: I0226 22:31:13.916337 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72" path="/var/lib/kubelet/pods/3b9d10e2-14a8-43c3-9a21-fbd1ddf05a72/volumes" Feb 26 22:31:16 crc kubenswrapper[4910]: I0226 22:31:16.313008 4910 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-7kpt6" podUID="559366ce-d49c-4eb4-b3ce-d0968ab4f2ee" containerName="registry-server" probeResult="failure" output=< Feb 26 22:31:16 crc kubenswrapper[4910]: timeout: failed to connect service ":50051" within 1s Feb 26 22:31:16 crc kubenswrapper[4910]: > Feb 26 22:31:25 crc kubenswrapper[4910]: I0226 22:31:25.317559 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7kpt6" Feb 26 22:31:25 crc kubenswrapper[4910]: I0226 22:31:25.392020 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7kpt6" Feb 26 22:31:28 crc kubenswrapper[4910]: I0226 22:31:28.929002 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7kpt6"] Feb 26 22:31:28 crc kubenswrapper[4910]: I0226 22:31:28.929980 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7kpt6" podUID="559366ce-d49c-4eb4-b3ce-d0968ab4f2ee" containerName="registry-server" containerID="cri-o://cd0737e368ccf8f6a5d186b4bf44e7226b2ad4d8414e9927fe702568aadac08d" gracePeriod=2 Feb 26 22:31:29 crc kubenswrapper[4910]: I0226 22:31:29.256005 4910 generic.go:334] "Generic (PLEG): container finished" podID="559366ce-d49c-4eb4-b3ce-d0968ab4f2ee" containerID="cd0737e368ccf8f6a5d186b4bf44e7226b2ad4d8414e9927fe702568aadac08d" exitCode=0 Feb 26 22:31:29 crc kubenswrapper[4910]: I0226 22:31:29.256051 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kpt6" event={"ID":"559366ce-d49c-4eb4-b3ce-d0968ab4f2ee","Type":"ContainerDied","Data":"cd0737e368ccf8f6a5d186b4bf44e7226b2ad4d8414e9927fe702568aadac08d"} Feb 26 22:31:29 crc kubenswrapper[4910]: I0226 22:31:29.546020 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7kpt6" Feb 26 22:31:29 crc kubenswrapper[4910]: I0226 22:31:29.709115 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/559366ce-d49c-4eb4-b3ce-d0968ab4f2ee-utilities\") pod \"559366ce-d49c-4eb4-b3ce-d0968ab4f2ee\" (UID: \"559366ce-d49c-4eb4-b3ce-d0968ab4f2ee\") " Feb 26 22:31:29 crc kubenswrapper[4910]: I0226 22:31:29.709373 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrnff\" (UniqueName: \"kubernetes.io/projected/559366ce-d49c-4eb4-b3ce-d0968ab4f2ee-kube-api-access-jrnff\") pod \"559366ce-d49c-4eb4-b3ce-d0968ab4f2ee\" (UID: \"559366ce-d49c-4eb4-b3ce-d0968ab4f2ee\") " Feb 26 22:31:29 crc kubenswrapper[4910]: I0226 22:31:29.709434 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/559366ce-d49c-4eb4-b3ce-d0968ab4f2ee-catalog-content\") pod \"559366ce-d49c-4eb4-b3ce-d0968ab4f2ee\" (UID: \"559366ce-d49c-4eb4-b3ce-d0968ab4f2ee\") " Feb 26 22:31:29 crc kubenswrapper[4910]: I0226 22:31:29.710133 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/559366ce-d49c-4eb4-b3ce-d0968ab4f2ee-utilities" (OuterVolumeSpecName: "utilities") pod "559366ce-d49c-4eb4-b3ce-d0968ab4f2ee" (UID: "559366ce-d49c-4eb4-b3ce-d0968ab4f2ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:31:29 crc kubenswrapper[4910]: I0226 22:31:29.714511 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/559366ce-d49c-4eb4-b3ce-d0968ab4f2ee-kube-api-access-jrnff" (OuterVolumeSpecName: "kube-api-access-jrnff") pod "559366ce-d49c-4eb4-b3ce-d0968ab4f2ee" (UID: "559366ce-d49c-4eb4-b3ce-d0968ab4f2ee"). InnerVolumeSpecName "kube-api-access-jrnff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:31:29 crc kubenswrapper[4910]: I0226 22:31:29.816288 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrnff\" (UniqueName: \"kubernetes.io/projected/559366ce-d49c-4eb4-b3ce-d0968ab4f2ee-kube-api-access-jrnff\") on node \"crc\" DevicePath \"\"" Feb 26 22:31:29 crc kubenswrapper[4910]: I0226 22:31:29.816317 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/559366ce-d49c-4eb4-b3ce-d0968ab4f2ee-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 22:31:29 crc kubenswrapper[4910]: I0226 22:31:29.845625 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/559366ce-d49c-4eb4-b3ce-d0968ab4f2ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "559366ce-d49c-4eb4-b3ce-d0968ab4f2ee" (UID: "559366ce-d49c-4eb4-b3ce-d0968ab4f2ee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:31:29 crc kubenswrapper[4910]: I0226 22:31:29.918488 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/559366ce-d49c-4eb4-b3ce-d0968ab4f2ee-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 22:31:30 crc kubenswrapper[4910]: I0226 22:31:30.267089 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kpt6" event={"ID":"559366ce-d49c-4eb4-b3ce-d0968ab4f2ee","Type":"ContainerDied","Data":"124571ea0ea73c9ae54e64785c764fa47cfa1e83de0db60be7550f56ee9d0e40"} Feb 26 22:31:30 crc kubenswrapper[4910]: I0226 22:31:30.267224 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7kpt6" Feb 26 22:31:30 crc kubenswrapper[4910]: I0226 22:31:30.267501 4910 scope.go:117] "RemoveContainer" containerID="cd0737e368ccf8f6a5d186b4bf44e7226b2ad4d8414e9927fe702568aadac08d" Feb 26 22:31:30 crc kubenswrapper[4910]: I0226 22:31:30.286395 4910 scope.go:117] "RemoveContainer" containerID="8f2b9b12ad91670e95f5090f2f27c46cce524be1ec91873988cac024d4b70096" Feb 26 22:31:30 crc kubenswrapper[4910]: I0226 22:31:30.296944 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7kpt6"] Feb 26 22:31:30 crc kubenswrapper[4910]: I0226 22:31:30.311736 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7kpt6"] Feb 26 22:31:30 crc kubenswrapper[4910]: I0226 22:31:30.314730 4910 scope.go:117] "RemoveContainer" containerID="2583d96d0f0a8bb5591b2e5be6e251ea9c580301203470609e5cf7259ca952fe" Feb 26 22:31:31 crc kubenswrapper[4910]: I0226 22:31:31.913977 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="559366ce-d49c-4eb4-b3ce-d0968ab4f2ee" path="/var/lib/kubelet/pods/559366ce-d49c-4eb4-b3ce-d0968ab4f2ee/volumes" Feb 26 22:31:34 crc kubenswrapper[4910]: I0226 22:31:34.771863 4910 scope.go:117] "RemoveContainer" containerID="f397678c0b585151649acdd07d75770fdd5b84ce34b621a7827b679a9e2cb54f" Feb 26 22:31:34 crc kubenswrapper[4910]: I0226 22:31:34.810488 4910 scope.go:117] "RemoveContainer" containerID="cccf7e8913d66118eb539b24e2fb0e764ceb163bc1ca4dd517e793dd125b0b54" Feb 26 22:31:40 crc kubenswrapper[4910]: I0226 22:31:40.403002 4910 generic.go:334] "Generic (PLEG): container finished" podID="76c54bfb-1dab-4654-b0d2-dca09154f2b0" containerID="9cb2f3acac4e4fe97f39f4fe5531910a109e5169b89471a0c2b33bafdc049892" exitCode=0 Feb 26 22:31:40 crc kubenswrapper[4910]: I0226 22:31:40.403055 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" event={"ID":"76c54bfb-1dab-4654-b0d2-dca09154f2b0","Type":"ContainerDied","Data":"9cb2f3acac4e4fe97f39f4fe5531910a109e5169b89471a0c2b33bafdc049892"} Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.009525 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.108292 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-bootstrap-combined-ca-bundle\") pod \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.108383 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-inventory\") pod \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.108416 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.108467 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-neutron-metadata-combined-ca-bundle\") pod \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " Feb 26 22:31:42 crc 
kubenswrapper[4910]: I0226 22:31:42.108504 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-telemetry-combined-ca-bundle\") pod \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.108521 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-libvirt-combined-ca-bundle\") pod \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.108553 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwfjj\" (UniqueName: \"kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-kube-api-access-xwfjj\") pod \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.108578 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-nova-combined-ca-bundle\") pod \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.108600 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-ssh-key-openstack-edpm-ipam\") pod \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.108628 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-ovn-combined-ca-bundle\") pod \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.108678 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.108709 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-openstack-edpm-ipam-ovn-default-certs-0\") pod \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.108768 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-repo-setup-combined-ca-bundle\") pod \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.108787 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\" (UID: \"76c54bfb-1dab-4654-b0d2-dca09154f2b0\") " Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.115549 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "76c54bfb-1dab-4654-b0d2-dca09154f2b0" (UID: "76c54bfb-1dab-4654-b0d2-dca09154f2b0"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.115915 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "76c54bfb-1dab-4654-b0d2-dca09154f2b0" (UID: "76c54bfb-1dab-4654-b0d2-dca09154f2b0"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.116153 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "76c54bfb-1dab-4654-b0d2-dca09154f2b0" (UID: "76c54bfb-1dab-4654-b0d2-dca09154f2b0"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.116270 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "76c54bfb-1dab-4654-b0d2-dca09154f2b0" (UID: "76c54bfb-1dab-4654-b0d2-dca09154f2b0"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.116874 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "76c54bfb-1dab-4654-b0d2-dca09154f2b0" (UID: "76c54bfb-1dab-4654-b0d2-dca09154f2b0"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.121232 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "76c54bfb-1dab-4654-b0d2-dca09154f2b0" (UID: "76c54bfb-1dab-4654-b0d2-dca09154f2b0"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.121758 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-kube-api-access-xwfjj" (OuterVolumeSpecName: "kube-api-access-xwfjj") pod "76c54bfb-1dab-4654-b0d2-dca09154f2b0" (UID: "76c54bfb-1dab-4654-b0d2-dca09154f2b0"). InnerVolumeSpecName "kube-api-access-xwfjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.121880 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "76c54bfb-1dab-4654-b0d2-dca09154f2b0" (UID: "76c54bfb-1dab-4654-b0d2-dca09154f2b0"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.123054 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "76c54bfb-1dab-4654-b0d2-dca09154f2b0" (UID: "76c54bfb-1dab-4654-b0d2-dca09154f2b0"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.125146 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "76c54bfb-1dab-4654-b0d2-dca09154f2b0" (UID: "76c54bfb-1dab-4654-b0d2-dca09154f2b0"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.125727 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "76c54bfb-1dab-4654-b0d2-dca09154f2b0" (UID: "76c54bfb-1dab-4654-b0d2-dca09154f2b0"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.127400 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "76c54bfb-1dab-4654-b0d2-dca09154f2b0" (UID: "76c54bfb-1dab-4654-b0d2-dca09154f2b0"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.164053 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-inventory" (OuterVolumeSpecName: "inventory") pod "76c54bfb-1dab-4654-b0d2-dca09154f2b0" (UID: "76c54bfb-1dab-4654-b0d2-dca09154f2b0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.167114 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "76c54bfb-1dab-4654-b0d2-dca09154f2b0" (UID: "76c54bfb-1dab-4654-b0d2-dca09154f2b0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.211477 4910 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.211530 4910 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.211553 4910 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.211574 4910 reconciler_common.go:293] 
"Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.211593 4910 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.211610 4910 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.211628 4910 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.211648 4910 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.211666 4910 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.211684 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwfjj\" (UniqueName: \"kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-kube-api-access-xwfjj\") on node \"crc\" DevicePath \"\"" Feb 26 22:31:42 crc kubenswrapper[4910]: 
I0226 22:31:42.211701 4910 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.211718 4910 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.211737 4910 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c54bfb-1dab-4654-b0d2-dca09154f2b0-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.211755 4910 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76c54bfb-1dab-4654-b0d2-dca09154f2b0-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.424532 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" event={"ID":"76c54bfb-1dab-4654-b0d2-dca09154f2b0","Type":"ContainerDied","Data":"dadd2b01db09c152bbf16ede6307452ce09e7e3d082b0d7b9bdf0842028cc2d5"} Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.424567 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dadd2b01db09c152bbf16ede6307452ce09e7e3d082b0d7b9bdf0842028cc2d5" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.424595 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.543956 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-x8xs6"] Feb 26 22:31:42 crc kubenswrapper[4910]: E0226 22:31:42.544348 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="653e622c-3e49-4712-b9f1-b1b099bf328e" containerName="extract-utilities" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.544365 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="653e622c-3e49-4712-b9f1-b1b099bf328e" containerName="extract-utilities" Feb 26 22:31:42 crc kubenswrapper[4910]: E0226 22:31:42.544393 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="653e622c-3e49-4712-b9f1-b1b099bf328e" containerName="extract-content" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.544399 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="653e622c-3e49-4712-b9f1-b1b099bf328e" containerName="extract-content" Feb 26 22:31:42 crc kubenswrapper[4910]: E0226 22:31:42.544413 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76c54bfb-1dab-4654-b0d2-dca09154f2b0" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.544420 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="76c54bfb-1dab-4654-b0d2-dca09154f2b0" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 26 22:31:42 crc kubenswrapper[4910]: E0226 22:31:42.544432 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="653e622c-3e49-4712-b9f1-b1b099bf328e" containerName="registry-server" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.544440 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="653e622c-3e49-4712-b9f1-b1b099bf328e" containerName="registry-server" Feb 26 22:31:42 crc kubenswrapper[4910]: E0226 22:31:42.544448 
4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="559366ce-d49c-4eb4-b3ce-d0968ab4f2ee" containerName="extract-content" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.544453 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="559366ce-d49c-4eb4-b3ce-d0968ab4f2ee" containerName="extract-content" Feb 26 22:31:42 crc kubenswrapper[4910]: E0226 22:31:42.544470 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="559366ce-d49c-4eb4-b3ce-d0968ab4f2ee" containerName="registry-server" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.544475 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="559366ce-d49c-4eb4-b3ce-d0968ab4f2ee" containerName="registry-server" Feb 26 22:31:42 crc kubenswrapper[4910]: E0226 22:31:42.544487 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="559366ce-d49c-4eb4-b3ce-d0968ab4f2ee" containerName="extract-utilities" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.544493 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="559366ce-d49c-4eb4-b3ce-d0968ab4f2ee" containerName="extract-utilities" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.544664 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="559366ce-d49c-4eb4-b3ce-d0968ab4f2ee" containerName="registry-server" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.544674 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="653e622c-3e49-4712-b9f1-b1b099bf328e" containerName="registry-server" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.544685 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="76c54bfb-1dab-4654-b0d2-dca09154f2b0" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.545354 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x8xs6" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.548311 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.548405 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.548531 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ktmgl" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.549302 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.551329 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.571253 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-x8xs6"] Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.619516 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x8xs6\" (UID: \"4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x8xs6" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.619625 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hth9z\" (UniqueName: \"kubernetes.io/projected/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-kube-api-access-hth9z\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x8xs6\" (UID: \"4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x8xs6" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.619944 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x8xs6\" (UID: \"4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x8xs6" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.620236 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x8xs6\" (UID: \"4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x8xs6" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.620290 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x8xs6\" (UID: \"4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x8xs6" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.722630 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x8xs6\" (UID: \"4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x8xs6" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.722710 4910 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x8xs6\" (UID: \"4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x8xs6" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.722762 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x8xs6\" (UID: \"4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x8xs6" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.722815 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hth9z\" (UniqueName: \"kubernetes.io/projected/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-kube-api-access-hth9z\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x8xs6\" (UID: \"4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x8xs6" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.722894 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x8xs6\" (UID: \"4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x8xs6" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.724311 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x8xs6\" (UID: 
\"4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x8xs6" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.727222 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x8xs6\" (UID: \"4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x8xs6" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.729234 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x8xs6\" (UID: \"4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x8xs6" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.730204 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x8xs6\" (UID: \"4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x8xs6" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.743271 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hth9z\" (UniqueName: \"kubernetes.io/projected/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-kube-api-access-hth9z\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x8xs6\" (UID: \"4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x8xs6" Feb 26 22:31:42 crc kubenswrapper[4910]: I0226 22:31:42.864492 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x8xs6" Feb 26 22:31:43 crc kubenswrapper[4910]: I0226 22:31:43.454237 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-x8xs6"] Feb 26 22:31:44 crc kubenswrapper[4910]: I0226 22:31:44.450634 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x8xs6" event={"ID":"4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3","Type":"ContainerStarted","Data":"ee5eb42bd6a87983aa55ed38943ad18189e2a62729920aa51df63e7d22e84bac"} Feb 26 22:31:44 crc kubenswrapper[4910]: I0226 22:31:44.451027 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x8xs6" event={"ID":"4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3","Type":"ContainerStarted","Data":"b634939138efd247b9ce1f932bb67528eb370342799da173817143ec2e5da23a"} Feb 26 22:31:44 crc kubenswrapper[4910]: I0226 22:31:44.471470 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x8xs6" podStartSLOduration=1.832374712 podStartE2EDuration="2.471448792s" podCreationTimestamp="2026-02-26 22:31:42 +0000 UTC" firstStartedPulling="2026-02-26 22:31:43.466462879 +0000 UTC m=+2188.545953420" lastFinishedPulling="2026-02-26 22:31:44.105536959 +0000 UTC m=+2189.185027500" observedRunningTime="2026-02-26 22:31:44.4691487 +0000 UTC m=+2189.548639251" watchObservedRunningTime="2026-02-26 22:31:44.471448792 +0000 UTC m=+2189.550939343" Feb 26 22:32:00 crc kubenswrapper[4910]: I0226 22:32:00.139501 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535752-85w8v"] Feb 26 22:32:00 crc kubenswrapper[4910]: I0226 22:32:00.142288 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535752-85w8v" Feb 26 22:32:00 crc kubenswrapper[4910]: I0226 22:32:00.144534 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 22:32:00 crc kubenswrapper[4910]: I0226 22:32:00.147034 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 22:32:00 crc kubenswrapper[4910]: I0226 22:32:00.147536 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 22:32:00 crc kubenswrapper[4910]: I0226 22:32:00.163311 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535752-85w8v"] Feb 26 22:32:00 crc kubenswrapper[4910]: I0226 22:32:00.254482 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gq6x\" (UniqueName: \"kubernetes.io/projected/4f5cf655-36d4-4e13-a14b-ac89a3ae680d-kube-api-access-4gq6x\") pod \"auto-csr-approver-29535752-85w8v\" (UID: \"4f5cf655-36d4-4e13-a14b-ac89a3ae680d\") " pod="openshift-infra/auto-csr-approver-29535752-85w8v" Feb 26 22:32:00 crc kubenswrapper[4910]: I0226 22:32:00.356890 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gq6x\" (UniqueName: \"kubernetes.io/projected/4f5cf655-36d4-4e13-a14b-ac89a3ae680d-kube-api-access-4gq6x\") pod \"auto-csr-approver-29535752-85w8v\" (UID: \"4f5cf655-36d4-4e13-a14b-ac89a3ae680d\") " pod="openshift-infra/auto-csr-approver-29535752-85w8v" Feb 26 22:32:00 crc kubenswrapper[4910]: I0226 22:32:00.376609 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gq6x\" (UniqueName: \"kubernetes.io/projected/4f5cf655-36d4-4e13-a14b-ac89a3ae680d-kube-api-access-4gq6x\") pod \"auto-csr-approver-29535752-85w8v\" (UID: \"4f5cf655-36d4-4e13-a14b-ac89a3ae680d\") " 
pod="openshift-infra/auto-csr-approver-29535752-85w8v" Feb 26 22:32:00 crc kubenswrapper[4910]: I0226 22:32:00.465057 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535752-85w8v" Feb 26 22:32:00 crc kubenswrapper[4910]: I0226 22:32:00.993125 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535752-85w8v"] Feb 26 22:32:01 crc kubenswrapper[4910]: I0226 22:32:01.810500 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535752-85w8v" event={"ID":"4f5cf655-36d4-4e13-a14b-ac89a3ae680d","Type":"ContainerStarted","Data":"e27e0a1a9b7deea2b27f8401d7b764428fd82a30d52d9f2a09e9ae3e70c2226c"} Feb 26 22:32:02 crc kubenswrapper[4910]: I0226 22:32:02.825745 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535752-85w8v" event={"ID":"4f5cf655-36d4-4e13-a14b-ac89a3ae680d","Type":"ContainerStarted","Data":"7f68ccb2e0f00545afcf8e656638440bf442b5ff854ae3e13197746ffb00e5b1"} Feb 26 22:32:02 crc kubenswrapper[4910]: I0226 22:32:02.845931 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535752-85w8v" podStartSLOduration=1.5744422230000001 podStartE2EDuration="2.845913811s" podCreationTimestamp="2026-02-26 22:32:00 +0000 UTC" firstStartedPulling="2026-02-26 22:32:01.002486882 +0000 UTC m=+2206.081977433" lastFinishedPulling="2026-02-26 22:32:02.27395844 +0000 UTC m=+2207.353449021" observedRunningTime="2026-02-26 22:32:02.841348905 +0000 UTC m=+2207.920839456" watchObservedRunningTime="2026-02-26 22:32:02.845913811 +0000 UTC m=+2207.925404362" Feb 26 22:32:03 crc kubenswrapper[4910]: I0226 22:32:03.836901 4910 generic.go:334] "Generic (PLEG): container finished" podID="4f5cf655-36d4-4e13-a14b-ac89a3ae680d" containerID="7f68ccb2e0f00545afcf8e656638440bf442b5ff854ae3e13197746ffb00e5b1" exitCode=0 Feb 26 22:32:03 crc 
kubenswrapper[4910]: I0226 22:32:03.836964 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535752-85w8v" event={"ID":"4f5cf655-36d4-4e13-a14b-ac89a3ae680d","Type":"ContainerDied","Data":"7f68ccb2e0f00545afcf8e656638440bf442b5ff854ae3e13197746ffb00e5b1"} Feb 26 22:32:05 crc kubenswrapper[4910]: I0226 22:32:05.339396 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535752-85w8v" Feb 26 22:32:05 crc kubenswrapper[4910]: I0226 22:32:05.388382 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gq6x\" (UniqueName: \"kubernetes.io/projected/4f5cf655-36d4-4e13-a14b-ac89a3ae680d-kube-api-access-4gq6x\") pod \"4f5cf655-36d4-4e13-a14b-ac89a3ae680d\" (UID: \"4f5cf655-36d4-4e13-a14b-ac89a3ae680d\") " Feb 26 22:32:05 crc kubenswrapper[4910]: I0226 22:32:05.404480 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f5cf655-36d4-4e13-a14b-ac89a3ae680d-kube-api-access-4gq6x" (OuterVolumeSpecName: "kube-api-access-4gq6x") pod "4f5cf655-36d4-4e13-a14b-ac89a3ae680d" (UID: "4f5cf655-36d4-4e13-a14b-ac89a3ae680d"). InnerVolumeSpecName "kube-api-access-4gq6x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:32:05 crc kubenswrapper[4910]: I0226 22:32:05.490698 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gq6x\" (UniqueName: \"kubernetes.io/projected/4f5cf655-36d4-4e13-a14b-ac89a3ae680d-kube-api-access-4gq6x\") on node \"crc\" DevicePath \"\"" Feb 26 22:32:05 crc kubenswrapper[4910]: I0226 22:32:05.868708 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535752-85w8v" event={"ID":"4f5cf655-36d4-4e13-a14b-ac89a3ae680d","Type":"ContainerDied","Data":"e27e0a1a9b7deea2b27f8401d7b764428fd82a30d52d9f2a09e9ae3e70c2226c"} Feb 26 22:32:05 crc kubenswrapper[4910]: I0226 22:32:05.869078 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e27e0a1a9b7deea2b27f8401d7b764428fd82a30d52d9f2a09e9ae3e70c2226c" Feb 26 22:32:05 crc kubenswrapper[4910]: I0226 22:32:05.869204 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535752-85w8v" Feb 26 22:32:05 crc kubenswrapper[4910]: I0226 22:32:05.921665 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535746-j488b"] Feb 26 22:32:05 crc kubenswrapper[4910]: I0226 22:32:05.927440 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535746-j488b"] Feb 26 22:32:07 crc kubenswrapper[4910]: I0226 22:32:07.918330 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01704631-8b12-4b09-932f-fd922af31259" path="/var/lib/kubelet/pods/01704631-8b12-4b09-932f-fd922af31259/volumes" Feb 26 22:32:25 crc kubenswrapper[4910]: I0226 22:32:25.727695 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 26 22:32:25 crc kubenswrapper[4910]: I0226 22:32:25.728261 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:32:26 crc kubenswrapper[4910]: I0226 22:32:26.968236 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6pxft"] Feb 26 22:32:26 crc kubenswrapper[4910]: E0226 22:32:26.968824 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f5cf655-36d4-4e13-a14b-ac89a3ae680d" containerName="oc" Feb 26 22:32:26 crc kubenswrapper[4910]: I0226 22:32:26.968834 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f5cf655-36d4-4e13-a14b-ac89a3ae680d" containerName="oc" Feb 26 22:32:26 crc kubenswrapper[4910]: I0226 22:32:26.969032 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f5cf655-36d4-4e13-a14b-ac89a3ae680d" containerName="oc" Feb 26 22:32:26 crc kubenswrapper[4910]: I0226 22:32:26.970435 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6pxft" Feb 26 22:32:26 crc kubenswrapper[4910]: I0226 22:32:26.990005 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pxft"] Feb 26 22:32:27 crc kubenswrapper[4910]: I0226 22:32:27.100188 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64445162-95b1-412e-9657-8c9b358cab71-utilities\") pod \"redhat-marketplace-6pxft\" (UID: \"64445162-95b1-412e-9657-8c9b358cab71\") " pod="openshift-marketplace/redhat-marketplace-6pxft" Feb 26 22:32:27 crc kubenswrapper[4910]: I0226 22:32:27.100330 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64445162-95b1-412e-9657-8c9b358cab71-catalog-content\") pod \"redhat-marketplace-6pxft\" (UID: \"64445162-95b1-412e-9657-8c9b358cab71\") " pod="openshift-marketplace/redhat-marketplace-6pxft" Feb 26 22:32:27 crc kubenswrapper[4910]: I0226 22:32:27.100355 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd6d8\" (UniqueName: \"kubernetes.io/projected/64445162-95b1-412e-9657-8c9b358cab71-kube-api-access-rd6d8\") pod \"redhat-marketplace-6pxft\" (UID: \"64445162-95b1-412e-9657-8c9b358cab71\") " pod="openshift-marketplace/redhat-marketplace-6pxft" Feb 26 22:32:27 crc kubenswrapper[4910]: I0226 22:32:27.202898 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64445162-95b1-412e-9657-8c9b358cab71-utilities\") pod \"redhat-marketplace-6pxft\" (UID: \"64445162-95b1-412e-9657-8c9b358cab71\") " pod="openshift-marketplace/redhat-marketplace-6pxft" Feb 26 22:32:27 crc kubenswrapper[4910]: I0226 22:32:27.203149 4910 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64445162-95b1-412e-9657-8c9b358cab71-catalog-content\") pod \"redhat-marketplace-6pxft\" (UID: \"64445162-95b1-412e-9657-8c9b358cab71\") " pod="openshift-marketplace/redhat-marketplace-6pxft" Feb 26 22:32:27 crc kubenswrapper[4910]: I0226 22:32:27.203225 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd6d8\" (UniqueName: \"kubernetes.io/projected/64445162-95b1-412e-9657-8c9b358cab71-kube-api-access-rd6d8\") pod \"redhat-marketplace-6pxft\" (UID: \"64445162-95b1-412e-9657-8c9b358cab71\") " pod="openshift-marketplace/redhat-marketplace-6pxft" Feb 26 22:32:27 crc kubenswrapper[4910]: I0226 22:32:27.204556 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64445162-95b1-412e-9657-8c9b358cab71-catalog-content\") pod \"redhat-marketplace-6pxft\" (UID: \"64445162-95b1-412e-9657-8c9b358cab71\") " pod="openshift-marketplace/redhat-marketplace-6pxft" Feb 26 22:32:27 crc kubenswrapper[4910]: I0226 22:32:27.204600 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64445162-95b1-412e-9657-8c9b358cab71-utilities\") pod \"redhat-marketplace-6pxft\" (UID: \"64445162-95b1-412e-9657-8c9b358cab71\") " pod="openshift-marketplace/redhat-marketplace-6pxft" Feb 26 22:32:27 crc kubenswrapper[4910]: I0226 22:32:27.235262 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd6d8\" (UniqueName: \"kubernetes.io/projected/64445162-95b1-412e-9657-8c9b358cab71-kube-api-access-rd6d8\") pod \"redhat-marketplace-6pxft\" (UID: \"64445162-95b1-412e-9657-8c9b358cab71\") " pod="openshift-marketplace/redhat-marketplace-6pxft" Feb 26 22:32:27 crc kubenswrapper[4910]: I0226 22:32:27.293219 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6pxft" Feb 26 22:32:27 crc kubenswrapper[4910]: W0226 22:32:27.760692 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64445162_95b1_412e_9657_8c9b358cab71.slice/crio-1aecd73870b6c7318f889147b562cfb16933c866017fb031fe7057e1cc0d215d WatchSource:0}: Error finding container 1aecd73870b6c7318f889147b562cfb16933c866017fb031fe7057e1cc0d215d: Status 404 returned error can't find the container with id 1aecd73870b6c7318f889147b562cfb16933c866017fb031fe7057e1cc0d215d Feb 26 22:32:27 crc kubenswrapper[4910]: I0226 22:32:27.761614 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pxft"] Feb 26 22:32:28 crc kubenswrapper[4910]: I0226 22:32:28.136544 4910 generic.go:334] "Generic (PLEG): container finished" podID="64445162-95b1-412e-9657-8c9b358cab71" containerID="19318366429ebf3b6222a0a53a3007e2d621d356a401c7b8e4e2b5ce09346b26" exitCode=0 Feb 26 22:32:28 crc kubenswrapper[4910]: I0226 22:32:28.136609 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pxft" event={"ID":"64445162-95b1-412e-9657-8c9b358cab71","Type":"ContainerDied","Data":"19318366429ebf3b6222a0a53a3007e2d621d356a401c7b8e4e2b5ce09346b26"} Feb 26 22:32:28 crc kubenswrapper[4910]: I0226 22:32:28.136878 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pxft" event={"ID":"64445162-95b1-412e-9657-8c9b358cab71","Type":"ContainerStarted","Data":"1aecd73870b6c7318f889147b562cfb16933c866017fb031fe7057e1cc0d215d"} Feb 26 22:32:28 crc kubenswrapper[4910]: I0226 22:32:28.139209 4910 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 22:32:29 crc kubenswrapper[4910]: I0226 22:32:29.149867 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-6pxft" event={"ID":"64445162-95b1-412e-9657-8c9b358cab71","Type":"ContainerStarted","Data":"87e5ab0e0927dbbcd7a290bebdb4fc88469797eb0f548bc717f510d1040229e2"} Feb 26 22:32:30 crc kubenswrapper[4910]: I0226 22:32:30.168522 4910 generic.go:334] "Generic (PLEG): container finished" podID="64445162-95b1-412e-9657-8c9b358cab71" containerID="87e5ab0e0927dbbcd7a290bebdb4fc88469797eb0f548bc717f510d1040229e2" exitCode=0 Feb 26 22:32:30 crc kubenswrapper[4910]: I0226 22:32:30.168894 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pxft" event={"ID":"64445162-95b1-412e-9657-8c9b358cab71","Type":"ContainerDied","Data":"87e5ab0e0927dbbcd7a290bebdb4fc88469797eb0f548bc717f510d1040229e2"} Feb 26 22:32:30 crc kubenswrapper[4910]: I0226 22:32:30.171944 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7qrt8"] Feb 26 22:32:30 crc kubenswrapper[4910]: I0226 22:32:30.178056 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7qrt8" Feb 26 22:32:30 crc kubenswrapper[4910]: I0226 22:32:30.239368 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7qrt8"] Feb 26 22:32:30 crc kubenswrapper[4910]: I0226 22:32:30.284481 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbxkz\" (UniqueName: \"kubernetes.io/projected/26a835fe-b180-43c8-a85c-a4e15f2573e7-kube-api-access-nbxkz\") pod \"certified-operators-7qrt8\" (UID: \"26a835fe-b180-43c8-a85c-a4e15f2573e7\") " pod="openshift-marketplace/certified-operators-7qrt8" Feb 26 22:32:30 crc kubenswrapper[4910]: I0226 22:32:30.284594 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26a835fe-b180-43c8-a85c-a4e15f2573e7-utilities\") pod \"certified-operators-7qrt8\" (UID: \"26a835fe-b180-43c8-a85c-a4e15f2573e7\") " pod="openshift-marketplace/certified-operators-7qrt8" Feb 26 22:32:30 crc kubenswrapper[4910]: I0226 22:32:30.284620 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26a835fe-b180-43c8-a85c-a4e15f2573e7-catalog-content\") pod \"certified-operators-7qrt8\" (UID: \"26a835fe-b180-43c8-a85c-a4e15f2573e7\") " pod="openshift-marketplace/certified-operators-7qrt8" Feb 26 22:32:30 crc kubenswrapper[4910]: I0226 22:32:30.386668 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26a835fe-b180-43c8-a85c-a4e15f2573e7-utilities\") pod \"certified-operators-7qrt8\" (UID: \"26a835fe-b180-43c8-a85c-a4e15f2573e7\") " pod="openshift-marketplace/certified-operators-7qrt8" Feb 26 22:32:30 crc kubenswrapper[4910]: I0226 22:32:30.386718 4910 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26a835fe-b180-43c8-a85c-a4e15f2573e7-catalog-content\") pod \"certified-operators-7qrt8\" (UID: \"26a835fe-b180-43c8-a85c-a4e15f2573e7\") " pod="openshift-marketplace/certified-operators-7qrt8" Feb 26 22:32:30 crc kubenswrapper[4910]: I0226 22:32:30.386925 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbxkz\" (UniqueName: \"kubernetes.io/projected/26a835fe-b180-43c8-a85c-a4e15f2573e7-kube-api-access-nbxkz\") pod \"certified-operators-7qrt8\" (UID: \"26a835fe-b180-43c8-a85c-a4e15f2573e7\") " pod="openshift-marketplace/certified-operators-7qrt8" Feb 26 22:32:30 crc kubenswrapper[4910]: I0226 22:32:30.387219 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26a835fe-b180-43c8-a85c-a4e15f2573e7-utilities\") pod \"certified-operators-7qrt8\" (UID: \"26a835fe-b180-43c8-a85c-a4e15f2573e7\") " pod="openshift-marketplace/certified-operators-7qrt8" Feb 26 22:32:30 crc kubenswrapper[4910]: I0226 22:32:30.387526 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26a835fe-b180-43c8-a85c-a4e15f2573e7-catalog-content\") pod \"certified-operators-7qrt8\" (UID: \"26a835fe-b180-43c8-a85c-a4e15f2573e7\") " pod="openshift-marketplace/certified-operators-7qrt8" Feb 26 22:32:30 crc kubenswrapper[4910]: I0226 22:32:30.430301 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbxkz\" (UniqueName: \"kubernetes.io/projected/26a835fe-b180-43c8-a85c-a4e15f2573e7-kube-api-access-nbxkz\") pod \"certified-operators-7qrt8\" (UID: \"26a835fe-b180-43c8-a85c-a4e15f2573e7\") " pod="openshift-marketplace/certified-operators-7qrt8" Feb 26 22:32:30 crc kubenswrapper[4910]: I0226 22:32:30.502817 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7qrt8" Feb 26 22:32:31 crc kubenswrapper[4910]: I0226 22:32:31.092288 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7qrt8"] Feb 26 22:32:31 crc kubenswrapper[4910]: I0226 22:32:31.188760 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pxft" event={"ID":"64445162-95b1-412e-9657-8c9b358cab71","Type":"ContainerStarted","Data":"354dcdd429df6781fbe1b9d9466bf8adf0b31356ed86f58deedcc32160162af7"} Feb 26 22:32:31 crc kubenswrapper[4910]: I0226 22:32:31.191056 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qrt8" event={"ID":"26a835fe-b180-43c8-a85c-a4e15f2573e7","Type":"ContainerStarted","Data":"f499ae26c92dcac752175a6808b4cdd7014a7a3f127f926499b801443a034f5c"} Feb 26 22:32:31 crc kubenswrapper[4910]: I0226 22:32:31.228585 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6pxft" podStartSLOduration=2.716607097 podStartE2EDuration="5.228568818s" podCreationTimestamp="2026-02-26 22:32:26 +0000 UTC" firstStartedPulling="2026-02-26 22:32:28.13889927 +0000 UTC m=+2233.218389811" lastFinishedPulling="2026-02-26 22:32:30.650860991 +0000 UTC m=+2235.730351532" observedRunningTime="2026-02-26 22:32:31.221967508 +0000 UTC m=+2236.301458059" watchObservedRunningTime="2026-02-26 22:32:31.228568818 +0000 UTC m=+2236.308059359" Feb 26 22:32:32 crc kubenswrapper[4910]: I0226 22:32:32.201898 4910 generic.go:334] "Generic (PLEG): container finished" podID="26a835fe-b180-43c8-a85c-a4e15f2573e7" containerID="b77e05a9c2737e18e7e50a74cdfcdfbc3dd2fbc9d31a6d57023412c8074f5953" exitCode=0 Feb 26 22:32:32 crc kubenswrapper[4910]: I0226 22:32:32.201975 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qrt8" 
event={"ID":"26a835fe-b180-43c8-a85c-a4e15f2573e7","Type":"ContainerDied","Data":"b77e05a9c2737e18e7e50a74cdfcdfbc3dd2fbc9d31a6d57023412c8074f5953"} Feb 26 22:32:34 crc kubenswrapper[4910]: I0226 22:32:34.962959 4910 scope.go:117] "RemoveContainer" containerID="6fe2febede6855a498f91797447efdd4452ed4eaa7b76963b937feccfe0189cc" Feb 26 22:32:37 crc kubenswrapper[4910]: I0226 22:32:37.293923 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6pxft" Feb 26 22:32:37 crc kubenswrapper[4910]: I0226 22:32:37.297606 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6pxft" Feb 26 22:32:37 crc kubenswrapper[4910]: I0226 22:32:37.349119 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6pxft" Feb 26 22:32:38 crc kubenswrapper[4910]: I0226 22:32:38.349252 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6pxft" Feb 26 22:32:38 crc kubenswrapper[4910]: I0226 22:32:38.451069 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pxft"] Feb 26 22:32:39 crc kubenswrapper[4910]: I0226 22:32:39.295431 4910 generic.go:334] "Generic (PLEG): container finished" podID="26a835fe-b180-43c8-a85c-a4e15f2573e7" containerID="a691221b6e4e28b8bbe853c8b20b2512e14f28cdddbe7503ec1a644d5fb26eae" exitCode=0 Feb 26 22:32:39 crc kubenswrapper[4910]: I0226 22:32:39.295537 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qrt8" event={"ID":"26a835fe-b180-43c8-a85c-a4e15f2573e7","Type":"ContainerDied","Data":"a691221b6e4e28b8bbe853c8b20b2512e14f28cdddbe7503ec1a644d5fb26eae"} Feb 26 22:32:40 crc kubenswrapper[4910]: I0226 22:32:40.306882 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-7qrt8" event={"ID":"26a835fe-b180-43c8-a85c-a4e15f2573e7","Type":"ContainerStarted","Data":"fce77ae87af4d4cf733a1ffd523f0e0dd6697569fd98462c6aaf8965ed4cc443"} Feb 26 22:32:40 crc kubenswrapper[4910]: I0226 22:32:40.307112 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6pxft" podUID="64445162-95b1-412e-9657-8c9b358cab71" containerName="registry-server" containerID="cri-o://354dcdd429df6781fbe1b9d9466bf8adf0b31356ed86f58deedcc32160162af7" gracePeriod=2 Feb 26 22:32:40 crc kubenswrapper[4910]: I0226 22:32:40.339685 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7qrt8" podStartSLOduration=2.793717461 podStartE2EDuration="10.339666377s" podCreationTimestamp="2026-02-26 22:32:30 +0000 UTC" firstStartedPulling="2026-02-26 22:32:32.204717316 +0000 UTC m=+2237.284207867" lastFinishedPulling="2026-02-26 22:32:39.750666222 +0000 UTC m=+2244.830156783" observedRunningTime="2026-02-26 22:32:40.329467559 +0000 UTC m=+2245.408958110" watchObservedRunningTime="2026-02-26 22:32:40.339666377 +0000 UTC m=+2245.419156928" Feb 26 22:32:40 crc kubenswrapper[4910]: I0226 22:32:40.503063 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7qrt8" Feb 26 22:32:40 crc kubenswrapper[4910]: I0226 22:32:40.503133 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7qrt8" Feb 26 22:32:40 crc kubenswrapper[4910]: I0226 22:32:40.864081 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6pxft" Feb 26 22:32:40 crc kubenswrapper[4910]: I0226 22:32:40.955755 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64445162-95b1-412e-9657-8c9b358cab71-catalog-content\") pod \"64445162-95b1-412e-9657-8c9b358cab71\" (UID: \"64445162-95b1-412e-9657-8c9b358cab71\") " Feb 26 22:32:40 crc kubenswrapper[4910]: I0226 22:32:40.956265 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd6d8\" (UniqueName: \"kubernetes.io/projected/64445162-95b1-412e-9657-8c9b358cab71-kube-api-access-rd6d8\") pod \"64445162-95b1-412e-9657-8c9b358cab71\" (UID: \"64445162-95b1-412e-9657-8c9b358cab71\") " Feb 26 22:32:40 crc kubenswrapper[4910]: I0226 22:32:40.956374 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64445162-95b1-412e-9657-8c9b358cab71-utilities\") pod \"64445162-95b1-412e-9657-8c9b358cab71\" (UID: \"64445162-95b1-412e-9657-8c9b358cab71\") " Feb 26 22:32:40 crc kubenswrapper[4910]: I0226 22:32:40.957367 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64445162-95b1-412e-9657-8c9b358cab71-utilities" (OuterVolumeSpecName: "utilities") pod "64445162-95b1-412e-9657-8c9b358cab71" (UID: "64445162-95b1-412e-9657-8c9b358cab71"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:32:40 crc kubenswrapper[4910]: I0226 22:32:40.968046 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64445162-95b1-412e-9657-8c9b358cab71-kube-api-access-rd6d8" (OuterVolumeSpecName: "kube-api-access-rd6d8") pod "64445162-95b1-412e-9657-8c9b358cab71" (UID: "64445162-95b1-412e-9657-8c9b358cab71"). InnerVolumeSpecName "kube-api-access-rd6d8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:32:40 crc kubenswrapper[4910]: I0226 22:32:40.980588 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64445162-95b1-412e-9657-8c9b358cab71-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64445162-95b1-412e-9657-8c9b358cab71" (UID: "64445162-95b1-412e-9657-8c9b358cab71"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:32:41 crc kubenswrapper[4910]: I0226 22:32:41.059009 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64445162-95b1-412e-9657-8c9b358cab71-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 22:32:41 crc kubenswrapper[4910]: I0226 22:32:41.059056 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd6d8\" (UniqueName: \"kubernetes.io/projected/64445162-95b1-412e-9657-8c9b358cab71-kube-api-access-rd6d8\") on node \"crc\" DevicePath \"\"" Feb 26 22:32:41 crc kubenswrapper[4910]: I0226 22:32:41.059068 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64445162-95b1-412e-9657-8c9b358cab71-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 22:32:41 crc kubenswrapper[4910]: I0226 22:32:41.323794 4910 generic.go:334] "Generic (PLEG): container finished" podID="64445162-95b1-412e-9657-8c9b358cab71" containerID="354dcdd429df6781fbe1b9d9466bf8adf0b31356ed86f58deedcc32160162af7" exitCode=0 Feb 26 22:32:41 crc kubenswrapper[4910]: I0226 22:32:41.324980 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6pxft" Feb 26 22:32:41 crc kubenswrapper[4910]: I0226 22:32:41.325038 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pxft" event={"ID":"64445162-95b1-412e-9657-8c9b358cab71","Type":"ContainerDied","Data":"354dcdd429df6781fbe1b9d9466bf8adf0b31356ed86f58deedcc32160162af7"} Feb 26 22:32:41 crc kubenswrapper[4910]: I0226 22:32:41.325110 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pxft" event={"ID":"64445162-95b1-412e-9657-8c9b358cab71","Type":"ContainerDied","Data":"1aecd73870b6c7318f889147b562cfb16933c866017fb031fe7057e1cc0d215d"} Feb 26 22:32:41 crc kubenswrapper[4910]: I0226 22:32:41.325136 4910 scope.go:117] "RemoveContainer" containerID="354dcdd429df6781fbe1b9d9466bf8adf0b31356ed86f58deedcc32160162af7" Feb 26 22:32:41 crc kubenswrapper[4910]: I0226 22:32:41.367251 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pxft"] Feb 26 22:32:41 crc kubenswrapper[4910]: I0226 22:32:41.370061 4910 scope.go:117] "RemoveContainer" containerID="87e5ab0e0927dbbcd7a290bebdb4fc88469797eb0f548bc717f510d1040229e2" Feb 26 22:32:41 crc kubenswrapper[4910]: I0226 22:32:41.377353 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pxft"] Feb 26 22:32:41 crc kubenswrapper[4910]: I0226 22:32:41.397853 4910 scope.go:117] "RemoveContainer" containerID="19318366429ebf3b6222a0a53a3007e2d621d356a401c7b8e4e2b5ce09346b26" Feb 26 22:32:41 crc kubenswrapper[4910]: I0226 22:32:41.449014 4910 scope.go:117] "RemoveContainer" containerID="354dcdd429df6781fbe1b9d9466bf8adf0b31356ed86f58deedcc32160162af7" Feb 26 22:32:41 crc kubenswrapper[4910]: E0226 22:32:41.451414 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"354dcdd429df6781fbe1b9d9466bf8adf0b31356ed86f58deedcc32160162af7\": container with ID starting with 354dcdd429df6781fbe1b9d9466bf8adf0b31356ed86f58deedcc32160162af7 not found: ID does not exist" containerID="354dcdd429df6781fbe1b9d9466bf8adf0b31356ed86f58deedcc32160162af7" Feb 26 22:32:41 crc kubenswrapper[4910]: I0226 22:32:41.451470 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"354dcdd429df6781fbe1b9d9466bf8adf0b31356ed86f58deedcc32160162af7"} err="failed to get container status \"354dcdd429df6781fbe1b9d9466bf8adf0b31356ed86f58deedcc32160162af7\": rpc error: code = NotFound desc = could not find container \"354dcdd429df6781fbe1b9d9466bf8adf0b31356ed86f58deedcc32160162af7\": container with ID starting with 354dcdd429df6781fbe1b9d9466bf8adf0b31356ed86f58deedcc32160162af7 not found: ID does not exist" Feb 26 22:32:41 crc kubenswrapper[4910]: I0226 22:32:41.451502 4910 scope.go:117] "RemoveContainer" containerID="87e5ab0e0927dbbcd7a290bebdb4fc88469797eb0f548bc717f510d1040229e2" Feb 26 22:32:41 crc kubenswrapper[4910]: E0226 22:32:41.452293 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87e5ab0e0927dbbcd7a290bebdb4fc88469797eb0f548bc717f510d1040229e2\": container with ID starting with 87e5ab0e0927dbbcd7a290bebdb4fc88469797eb0f548bc717f510d1040229e2 not found: ID does not exist" containerID="87e5ab0e0927dbbcd7a290bebdb4fc88469797eb0f548bc717f510d1040229e2" Feb 26 22:32:41 crc kubenswrapper[4910]: I0226 22:32:41.452331 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87e5ab0e0927dbbcd7a290bebdb4fc88469797eb0f548bc717f510d1040229e2"} err="failed to get container status \"87e5ab0e0927dbbcd7a290bebdb4fc88469797eb0f548bc717f510d1040229e2\": rpc error: code = NotFound desc = could not find container \"87e5ab0e0927dbbcd7a290bebdb4fc88469797eb0f548bc717f510d1040229e2\": container with ID 
starting with 87e5ab0e0927dbbcd7a290bebdb4fc88469797eb0f548bc717f510d1040229e2 not found: ID does not exist" Feb 26 22:32:41 crc kubenswrapper[4910]: I0226 22:32:41.452355 4910 scope.go:117] "RemoveContainer" containerID="19318366429ebf3b6222a0a53a3007e2d621d356a401c7b8e4e2b5ce09346b26" Feb 26 22:32:41 crc kubenswrapper[4910]: E0226 22:32:41.452707 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19318366429ebf3b6222a0a53a3007e2d621d356a401c7b8e4e2b5ce09346b26\": container with ID starting with 19318366429ebf3b6222a0a53a3007e2d621d356a401c7b8e4e2b5ce09346b26 not found: ID does not exist" containerID="19318366429ebf3b6222a0a53a3007e2d621d356a401c7b8e4e2b5ce09346b26" Feb 26 22:32:41 crc kubenswrapper[4910]: I0226 22:32:41.452758 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19318366429ebf3b6222a0a53a3007e2d621d356a401c7b8e4e2b5ce09346b26"} err="failed to get container status \"19318366429ebf3b6222a0a53a3007e2d621d356a401c7b8e4e2b5ce09346b26\": rpc error: code = NotFound desc = could not find container \"19318366429ebf3b6222a0a53a3007e2d621d356a401c7b8e4e2b5ce09346b26\": container with ID starting with 19318366429ebf3b6222a0a53a3007e2d621d356a401c7b8e4e2b5ce09346b26 not found: ID does not exist" Feb 26 22:32:41 crc kubenswrapper[4910]: I0226 22:32:41.595736 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-7qrt8" podUID="26a835fe-b180-43c8-a85c-a4e15f2573e7" containerName="registry-server" probeResult="failure" output=< Feb 26 22:32:41 crc kubenswrapper[4910]: timeout: failed to connect service ":50051" within 1s Feb 26 22:32:41 crc kubenswrapper[4910]: > Feb 26 22:32:41 crc kubenswrapper[4910]: I0226 22:32:41.912357 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64445162-95b1-412e-9657-8c9b358cab71" 
path="/var/lib/kubelet/pods/64445162-95b1-412e-9657-8c9b358cab71/volumes" Feb 26 22:32:50 crc kubenswrapper[4910]: I0226 22:32:50.591220 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7qrt8" Feb 26 22:32:50 crc kubenswrapper[4910]: I0226 22:32:50.667685 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7qrt8" Feb 26 22:32:50 crc kubenswrapper[4910]: I0226 22:32:50.802987 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7qrt8"] Feb 26 22:32:50 crc kubenswrapper[4910]: I0226 22:32:50.843525 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-knsd9"] Feb 26 22:32:50 crc kubenswrapper[4910]: I0226 22:32:50.843750 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-knsd9" podUID="59affc45-31f5-446b-8843-950a714f4c7d" containerName="registry-server" containerID="cri-o://9c36092c2596f03aca4495a67673f5fc3571a7f56aa1cc62c35e0273b2da37ed" gracePeriod=2 Feb 26 22:32:51 crc kubenswrapper[4910]: E0226 22:32:51.051271 4910 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59affc45_31f5_446b_8843_950a714f4c7d.slice/crio-conmon-9c36092c2596f03aca4495a67673f5fc3571a7f56aa1cc62c35e0273b2da37ed.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c52b5b0_2d14_4120_bfc4_1b2d73bcb4b3.slice/crio-ee5eb42bd6a87983aa55ed38943ad18189e2a62729920aa51df63e7d22e84bac.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59affc45_31f5_446b_8843_950a714f4c7d.slice/crio-9c36092c2596f03aca4495a67673f5fc3571a7f56aa1cc62c35e0273b2da37ed.scope\": RecentStats: unable to find data in memory cache]" Feb 26 22:32:51 crc kubenswrapper[4910]: E0226 22:32:51.274784 4910 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c36092c2596f03aca4495a67673f5fc3571a7f56aa1cc62c35e0273b2da37ed is running failed: container process not found" containerID="9c36092c2596f03aca4495a67673f5fc3571a7f56aa1cc62c35e0273b2da37ed" cmd=["grpc_health_probe","-addr=:50051"] Feb 26 22:32:51 crc kubenswrapper[4910]: E0226 22:32:51.275628 4910 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c36092c2596f03aca4495a67673f5fc3571a7f56aa1cc62c35e0273b2da37ed is running failed: container process not found" containerID="9c36092c2596f03aca4495a67673f5fc3571a7f56aa1cc62c35e0273b2da37ed" cmd=["grpc_health_probe","-addr=:50051"] Feb 26 22:32:51 crc kubenswrapper[4910]: E0226 22:32:51.275942 4910 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c36092c2596f03aca4495a67673f5fc3571a7f56aa1cc62c35e0273b2da37ed is running failed: container process not found" containerID="9c36092c2596f03aca4495a67673f5fc3571a7f56aa1cc62c35e0273b2da37ed" cmd=["grpc_health_probe","-addr=:50051"] Feb 26 22:32:51 crc kubenswrapper[4910]: E0226 22:32:51.275982 4910 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c36092c2596f03aca4495a67673f5fc3571a7f56aa1cc62c35e0273b2da37ed is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-knsd9" 
podUID="59affc45-31f5-446b-8843-950a714f4c7d" containerName="registry-server" Feb 26 22:32:51 crc kubenswrapper[4910]: I0226 22:32:51.422928 4910 generic.go:334] "Generic (PLEG): container finished" podID="4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3" containerID="ee5eb42bd6a87983aa55ed38943ad18189e2a62729920aa51df63e7d22e84bac" exitCode=0 Feb 26 22:32:51 crc kubenswrapper[4910]: I0226 22:32:51.422983 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x8xs6" event={"ID":"4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3","Type":"ContainerDied","Data":"ee5eb42bd6a87983aa55ed38943ad18189e2a62729920aa51df63e7d22e84bac"} Feb 26 22:32:51 crc kubenswrapper[4910]: I0226 22:32:51.428730 4910 generic.go:334] "Generic (PLEG): container finished" podID="59affc45-31f5-446b-8843-950a714f4c7d" containerID="9c36092c2596f03aca4495a67673f5fc3571a7f56aa1cc62c35e0273b2da37ed" exitCode=0 Feb 26 22:32:51 crc kubenswrapper[4910]: I0226 22:32:51.428980 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knsd9" event={"ID":"59affc45-31f5-446b-8843-950a714f4c7d","Type":"ContainerDied","Data":"9c36092c2596f03aca4495a67673f5fc3571a7f56aa1cc62c35e0273b2da37ed"} Feb 26 22:32:51 crc kubenswrapper[4910]: I0226 22:32:51.429061 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knsd9" event={"ID":"59affc45-31f5-446b-8843-950a714f4c7d","Type":"ContainerDied","Data":"c46061e388212695bbd8a7fdf0659b9e66d623c96c06ad1f07c2d82d315f3345"} Feb 26 22:32:51 crc kubenswrapper[4910]: I0226 22:32:51.429116 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c46061e388212695bbd8a7fdf0659b9e66d623c96c06ad1f07c2d82d315f3345" Feb 26 22:32:51 crc kubenswrapper[4910]: I0226 22:32:51.482041 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-knsd9" Feb 26 22:32:51 crc kubenswrapper[4910]: I0226 22:32:51.617364 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5q5v\" (UniqueName: \"kubernetes.io/projected/59affc45-31f5-446b-8843-950a714f4c7d-kube-api-access-t5q5v\") pod \"59affc45-31f5-446b-8843-950a714f4c7d\" (UID: \"59affc45-31f5-446b-8843-950a714f4c7d\") " Feb 26 22:32:51 crc kubenswrapper[4910]: I0226 22:32:51.617567 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59affc45-31f5-446b-8843-950a714f4c7d-catalog-content\") pod \"59affc45-31f5-446b-8843-950a714f4c7d\" (UID: \"59affc45-31f5-446b-8843-950a714f4c7d\") " Feb 26 22:32:51 crc kubenswrapper[4910]: I0226 22:32:51.617653 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59affc45-31f5-446b-8843-950a714f4c7d-utilities\") pod \"59affc45-31f5-446b-8843-950a714f4c7d\" (UID: \"59affc45-31f5-446b-8843-950a714f4c7d\") " Feb 26 22:32:51 crc kubenswrapper[4910]: I0226 22:32:51.620324 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59affc45-31f5-446b-8843-950a714f4c7d-utilities" (OuterVolumeSpecName: "utilities") pod "59affc45-31f5-446b-8843-950a714f4c7d" (UID: "59affc45-31f5-446b-8843-950a714f4c7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:32:51 crc kubenswrapper[4910]: I0226 22:32:51.627470 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59affc45-31f5-446b-8843-950a714f4c7d-kube-api-access-t5q5v" (OuterVolumeSpecName: "kube-api-access-t5q5v") pod "59affc45-31f5-446b-8843-950a714f4c7d" (UID: "59affc45-31f5-446b-8843-950a714f4c7d"). InnerVolumeSpecName "kube-api-access-t5q5v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:32:51 crc kubenswrapper[4910]: I0226 22:32:51.682070 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59affc45-31f5-446b-8843-950a714f4c7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "59affc45-31f5-446b-8843-950a714f4c7d" (UID: "59affc45-31f5-446b-8843-950a714f4c7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:32:51 crc kubenswrapper[4910]: I0226 22:32:51.719720 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59affc45-31f5-446b-8843-950a714f4c7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 22:32:51 crc kubenswrapper[4910]: I0226 22:32:51.719749 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59affc45-31f5-446b-8843-950a714f4c7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 22:32:51 crc kubenswrapper[4910]: I0226 22:32:51.719758 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5q5v\" (UniqueName: \"kubernetes.io/projected/59affc45-31f5-446b-8843-950a714f4c7d-kube-api-access-t5q5v\") on node \"crc\" DevicePath \"\"" Feb 26 22:32:52 crc kubenswrapper[4910]: I0226 22:32:52.437622 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-knsd9" Feb 26 22:32:52 crc kubenswrapper[4910]: I0226 22:32:52.468803 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-knsd9"] Feb 26 22:32:52 crc kubenswrapper[4910]: I0226 22:32:52.495129 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-knsd9"] Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.009497 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x8xs6" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.044806 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-ovncontroller-config-0\") pod \"4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3\" (UID: \"4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3\") " Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.044878 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-inventory\") pod \"4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3\" (UID: \"4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3\") " Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.044904 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-ssh-key-openstack-edpm-ipam\") pod \"4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3\" (UID: \"4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3\") " Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.070297 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3" (UID: "4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.072890 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3" (UID: "4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.084603 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-inventory" (OuterVolumeSpecName: "inventory") pod "4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3" (UID: "4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.146603 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-ovn-combined-ca-bundle\") pod \"4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3\" (UID: \"4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3\") " Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.146720 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hth9z\" (UniqueName: \"kubernetes.io/projected/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-kube-api-access-hth9z\") pod \"4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3\" (UID: \"4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3\") " Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.148075 4910 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 26 22:32:53 crc 
kubenswrapper[4910]: I0226 22:32:53.148130 4910 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.148192 4910 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.155367 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-kube-api-access-hth9z" (OuterVolumeSpecName: "kube-api-access-hth9z") pod "4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3" (UID: "4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3"). InnerVolumeSpecName "kube-api-access-hth9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.157481 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3" (UID: "4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.250567 4910 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.250618 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hth9z\" (UniqueName: \"kubernetes.io/projected/4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3-kube-api-access-hth9z\") on node \"crc\" DevicePath \"\"" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.452707 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x8xs6" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.452612 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x8xs6" event={"ID":"4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3","Type":"ContainerDied","Data":"b634939138efd247b9ce1f932bb67528eb370342799da173817143ec2e5da23a"} Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.453319 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b634939138efd247b9ce1f932bb67528eb370342799da173817143ec2e5da23a" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.570545 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l"] Feb 26 22:32:53 crc kubenswrapper[4910]: E0226 22:32:53.571078 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59affc45-31f5-446b-8843-950a714f4c7d" containerName="extract-utilities" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.571137 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="59affc45-31f5-446b-8843-950a714f4c7d" containerName="extract-utilities" Feb 26 22:32:53 crc 
kubenswrapper[4910]: E0226 22:32:53.571240 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64445162-95b1-412e-9657-8c9b358cab71" containerName="extract-utilities" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.571295 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="64445162-95b1-412e-9657-8c9b358cab71" containerName="extract-utilities" Feb 26 22:32:53 crc kubenswrapper[4910]: E0226 22:32:53.571427 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59affc45-31f5-446b-8843-950a714f4c7d" containerName="extract-content" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.571476 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="59affc45-31f5-446b-8843-950a714f4c7d" containerName="extract-content" Feb 26 22:32:53 crc kubenswrapper[4910]: E0226 22:32:53.571529 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59affc45-31f5-446b-8843-950a714f4c7d" containerName="registry-server" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.571574 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="59affc45-31f5-446b-8843-950a714f4c7d" containerName="registry-server" Feb 26 22:32:53 crc kubenswrapper[4910]: E0226 22:32:53.571628 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64445162-95b1-412e-9657-8c9b358cab71" containerName="extract-content" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.571673 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="64445162-95b1-412e-9657-8c9b358cab71" containerName="extract-content" Feb 26 22:32:53 crc kubenswrapper[4910]: E0226 22:32:53.571745 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64445162-95b1-412e-9657-8c9b358cab71" containerName="registry-server" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.571797 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="64445162-95b1-412e-9657-8c9b358cab71" containerName="registry-server" Feb 26 22:32:53 crc 
kubenswrapper[4910]: E0226 22:32:53.571851 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.571897 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.572131 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="59affc45-31f5-446b-8843-950a714f4c7d" containerName="registry-server" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.572301 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="64445162-95b1-412e-9657-8c9b358cab71" containerName="registry-server" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.572368 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.573041 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.575724 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.576377 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.576834 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.577065 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.577297 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.577485 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ktmgl" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.591746 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l"] Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.665494 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l\" (UID: \"af2ace9c-60af-47a0-992d-a7961e07f840\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.665810 4910 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l\" (UID: \"af2ace9c-60af-47a0-992d-a7961e07f840\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.665884 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l\" (UID: \"af2ace9c-60af-47a0-992d-a7961e07f840\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.666110 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l\" (UID: \"af2ace9c-60af-47a0-992d-a7961e07f840\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.666150 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzxxt\" (UniqueName: \"kubernetes.io/projected/af2ace9c-60af-47a0-992d-a7961e07f840-kube-api-access-fzxxt\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l\" (UID: \"af2ace9c-60af-47a0-992d-a7961e07f840\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.666403 4910 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l\" (UID: \"af2ace9c-60af-47a0-992d-a7961e07f840\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.767398 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l\" (UID: \"af2ace9c-60af-47a0-992d-a7961e07f840\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.767727 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l\" (UID: \"af2ace9c-60af-47a0-992d-a7961e07f840\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.767829 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l\" (UID: \"af2ace9c-60af-47a0-992d-a7961e07f840\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.767937 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l\" (UID: \"af2ace9c-60af-47a0-992d-a7961e07f840\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.768060 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l\" (UID: \"af2ace9c-60af-47a0-992d-a7961e07f840\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.768133 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzxxt\" (UniqueName: \"kubernetes.io/projected/af2ace9c-60af-47a0-992d-a7961e07f840-kube-api-access-fzxxt\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l\" (UID: \"af2ace9c-60af-47a0-992d-a7961e07f840\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.774147 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l\" (UID: \"af2ace9c-60af-47a0-992d-a7961e07f840\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.774240 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-inventory\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l\" (UID: \"af2ace9c-60af-47a0-992d-a7961e07f840\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.774428 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l\" (UID: \"af2ace9c-60af-47a0-992d-a7961e07f840\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.775329 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l\" (UID: \"af2ace9c-60af-47a0-992d-a7961e07f840\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.777757 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l\" (UID: \"af2ace9c-60af-47a0-992d-a7961e07f840\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.784956 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzxxt\" (UniqueName: \"kubernetes.io/projected/af2ace9c-60af-47a0-992d-a7961e07f840-kube-api-access-fzxxt\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l\" (UID: \"af2ace9c-60af-47a0-992d-a7961e07f840\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.917014 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59affc45-31f5-446b-8843-950a714f4c7d" path="/var/lib/kubelet/pods/59affc45-31f5-446b-8843-950a714f4c7d/volumes" Feb 26 22:32:53 crc kubenswrapper[4910]: I0226 22:32:53.960277 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l" Feb 26 22:32:54 crc kubenswrapper[4910]: I0226 22:32:54.561885 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l"] Feb 26 22:32:55 crc kubenswrapper[4910]: I0226 22:32:55.475576 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l" event={"ID":"af2ace9c-60af-47a0-992d-a7961e07f840","Type":"ContainerStarted","Data":"d51708e5b0d1f5005a10b94f212c9f43790e2578d83d75750d9446b4041891ad"} Feb 26 22:32:55 crc kubenswrapper[4910]: I0226 22:32:55.475638 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l" event={"ID":"af2ace9c-60af-47a0-992d-a7961e07f840","Type":"ContainerStarted","Data":"d4ec987e99ad4e10d6c4d132e3b84bd01d041a207fa4ebe6a3f99562d6235174"} Feb 26 22:32:55 crc kubenswrapper[4910]: I0226 22:32:55.499256 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l" podStartSLOduration=2.078781642 podStartE2EDuration="2.499230883s" podCreationTimestamp="2026-02-26 22:32:53 +0000 UTC" firstStartedPulling="2026-02-26 22:32:54.565725938 +0000 UTC m=+2259.645216519" lastFinishedPulling="2026-02-26 22:32:54.986175219 +0000 UTC m=+2260.065665760" observedRunningTime="2026-02-26 22:32:55.492434408 +0000 UTC m=+2260.571924959" 
watchObservedRunningTime="2026-02-26 22:32:55.499230883 +0000 UTC m=+2260.578721464" Feb 26 22:32:55 crc kubenswrapper[4910]: I0226 22:32:55.727475 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:32:55 crc kubenswrapper[4910]: I0226 22:32:55.729266 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:33:25 crc kubenswrapper[4910]: I0226 22:33:25.727618 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:33:25 crc kubenswrapper[4910]: I0226 22:33:25.728381 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:33:25 crc kubenswrapper[4910]: I0226 22:33:25.728461 4910 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" Feb 26 22:33:25 crc kubenswrapper[4910]: I0226 22:33:25.729926 4910 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"82909def5d89987cafbb718608b443b77e385492453e4fd861b04d0417660d57"} pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 22:33:25 crc kubenswrapper[4910]: I0226 22:33:25.730067 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" containerID="cri-o://82909def5d89987cafbb718608b443b77e385492453e4fd861b04d0417660d57" gracePeriod=600 Feb 26 22:33:26 crc kubenswrapper[4910]: I0226 22:33:26.838570 4910 generic.go:334] "Generic (PLEG): container finished" podID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerID="82909def5d89987cafbb718608b443b77e385492453e4fd861b04d0417660d57" exitCode=0 Feb 26 22:33:26 crc kubenswrapper[4910]: I0226 22:33:26.838621 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerDied","Data":"82909def5d89987cafbb718608b443b77e385492453e4fd861b04d0417660d57"} Feb 26 22:33:26 crc kubenswrapper[4910]: I0226 22:33:26.839035 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerStarted","Data":"05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20"} Feb 26 22:33:26 crc kubenswrapper[4910]: I0226 22:33:26.839057 4910 scope.go:117] "RemoveContainer" containerID="a611ef9b29eb0331fc83f82d43ca004ae2000916705108882846bfaf22da81d6" Feb 26 22:33:38 crc kubenswrapper[4910]: I0226 22:33:38.528746 4910 scope.go:117] "RemoveContainer" containerID="ec788b2c869bf46d0223a62477f819574d4751e365d5debd515d2c187417fcf5" Feb 26 22:33:38 crc kubenswrapper[4910]: I0226 
22:33:38.562557 4910 scope.go:117] "RemoveContainer" containerID="9c36092c2596f03aca4495a67673f5fc3571a7f56aa1cc62c35e0273b2da37ed" Feb 26 22:33:38 crc kubenswrapper[4910]: I0226 22:33:38.631195 4910 scope.go:117] "RemoveContainer" containerID="33d98952479d992b68c27226a167113843e81e55218cc381e216353d0fa969cd" Feb 26 22:33:49 crc kubenswrapper[4910]: I0226 22:33:49.159599 4910 generic.go:334] "Generic (PLEG): container finished" podID="af2ace9c-60af-47a0-992d-a7961e07f840" containerID="d51708e5b0d1f5005a10b94f212c9f43790e2578d83d75750d9446b4041891ad" exitCode=0 Feb 26 22:33:49 crc kubenswrapper[4910]: I0226 22:33:49.159653 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l" event={"ID":"af2ace9c-60af-47a0-992d-a7961e07f840","Type":"ContainerDied","Data":"d51708e5b0d1f5005a10b94f212c9f43790e2578d83d75750d9446b4041891ad"} Feb 26 22:33:50 crc kubenswrapper[4910]: I0226 22:33:50.866758 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l" Feb 26 22:33:50 crc kubenswrapper[4910]: I0226 22:33:50.976969 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-ssh-key-openstack-edpm-ipam\") pod \"af2ace9c-60af-47a0-992d-a7961e07f840\" (UID: \"af2ace9c-60af-47a0-992d-a7961e07f840\") " Feb 26 22:33:50 crc kubenswrapper[4910]: I0226 22:33:50.977055 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzxxt\" (UniqueName: \"kubernetes.io/projected/af2ace9c-60af-47a0-992d-a7961e07f840-kube-api-access-fzxxt\") pod \"af2ace9c-60af-47a0-992d-a7961e07f840\" (UID: \"af2ace9c-60af-47a0-992d-a7961e07f840\") " Feb 26 22:33:50 crc kubenswrapper[4910]: I0226 22:33:50.977118 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-inventory\") pod \"af2ace9c-60af-47a0-992d-a7961e07f840\" (UID: \"af2ace9c-60af-47a0-992d-a7961e07f840\") " Feb 26 22:33:50 crc kubenswrapper[4910]: I0226 22:33:50.977224 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-neutron-ovn-metadata-agent-neutron-config-0\") pod \"af2ace9c-60af-47a0-992d-a7961e07f840\" (UID: \"af2ace9c-60af-47a0-992d-a7961e07f840\") " Feb 26 22:33:50 crc kubenswrapper[4910]: I0226 22:33:50.977279 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-neutron-metadata-combined-ca-bundle\") pod \"af2ace9c-60af-47a0-992d-a7961e07f840\" (UID: \"af2ace9c-60af-47a0-992d-a7961e07f840\") " Feb 26 
22:33:50 crc kubenswrapper[4910]: I0226 22:33:50.977361 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-nova-metadata-neutron-config-0\") pod \"af2ace9c-60af-47a0-992d-a7961e07f840\" (UID: \"af2ace9c-60af-47a0-992d-a7961e07f840\") " Feb 26 22:33:50 crc kubenswrapper[4910]: I0226 22:33:50.983640 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af2ace9c-60af-47a0-992d-a7961e07f840-kube-api-access-fzxxt" (OuterVolumeSpecName: "kube-api-access-fzxxt") pod "af2ace9c-60af-47a0-992d-a7961e07f840" (UID: "af2ace9c-60af-47a0-992d-a7961e07f840"). InnerVolumeSpecName "kube-api-access-fzxxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:33:50 crc kubenswrapper[4910]: I0226 22:33:50.985196 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "af2ace9c-60af-47a0-992d-a7961e07f840" (UID: "af2ace9c-60af-47a0-992d-a7961e07f840"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.016607 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "af2ace9c-60af-47a0-992d-a7961e07f840" (UID: "af2ace9c-60af-47a0-992d-a7961e07f840"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.030552 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "af2ace9c-60af-47a0-992d-a7961e07f840" (UID: "af2ace9c-60af-47a0-992d-a7961e07f840"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.032145 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-inventory" (OuterVolumeSpecName: "inventory") pod "af2ace9c-60af-47a0-992d-a7961e07f840" (UID: "af2ace9c-60af-47a0-992d-a7961e07f840"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.035970 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "af2ace9c-60af-47a0-992d-a7961e07f840" (UID: "af2ace9c-60af-47a0-992d-a7961e07f840"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.080585 4910 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.080622 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzxxt\" (UniqueName: \"kubernetes.io/projected/af2ace9c-60af-47a0-992d-a7961e07f840-kube-api-access-fzxxt\") on node \"crc\" DevicePath \"\"" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.080633 4910 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.080663 4910 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.080676 4910 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.080687 4910 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/af2ace9c-60af-47a0-992d-a7961e07f840-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.183503 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l" event={"ID":"af2ace9c-60af-47a0-992d-a7961e07f840","Type":"ContainerDied","Data":"d4ec987e99ad4e10d6c4d132e3b84bd01d041a207fa4ebe6a3f99562d6235174"} Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.183559 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4ec987e99ad4e10d6c4d132e3b84bd01d041a207fa4ebe6a3f99562d6235174" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.183561 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.287355 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n"] Feb 26 22:33:51 crc kubenswrapper[4910]: E0226 22:33:51.288002 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af2ace9c-60af-47a0-992d-a7961e07f840" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.288034 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="af2ace9c-60af-47a0-992d-a7961e07f840" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.288403 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="af2ace9c-60af-47a0-992d-a7961e07f840" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.289469 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.291955 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ktmgl" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.294033 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.294130 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.294248 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.295683 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.298393 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n"] Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.411122 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7408a6c2-235e-4149-9219-c5f71e983e62-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n\" (UID: \"7408a6c2-235e-4149-9219-c5f71e983e62\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.411201 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7408a6c2-235e-4149-9219-c5f71e983e62-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n\" (UID: 
\"7408a6c2-235e-4149-9219-c5f71e983e62\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.411289 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7408a6c2-235e-4149-9219-c5f71e983e62-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n\" (UID: \"7408a6c2-235e-4149-9219-c5f71e983e62\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.411334 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7408a6c2-235e-4149-9219-c5f71e983e62-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n\" (UID: \"7408a6c2-235e-4149-9219-c5f71e983e62\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.411440 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn4l4\" (UniqueName: \"kubernetes.io/projected/7408a6c2-235e-4149-9219-c5f71e983e62-kube-api-access-pn4l4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n\" (UID: \"7408a6c2-235e-4149-9219-c5f71e983e62\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.513140 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7408a6c2-235e-4149-9219-c5f71e983e62-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n\" (UID: \"7408a6c2-235e-4149-9219-c5f71e983e62\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.513227 
4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7408a6c2-235e-4149-9219-c5f71e983e62-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n\" (UID: \"7408a6c2-235e-4149-9219-c5f71e983e62\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.513283 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7408a6c2-235e-4149-9219-c5f71e983e62-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n\" (UID: \"7408a6c2-235e-4149-9219-c5f71e983e62\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.513318 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7408a6c2-235e-4149-9219-c5f71e983e62-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n\" (UID: \"7408a6c2-235e-4149-9219-c5f71e983e62\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.513416 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn4l4\" (UniqueName: \"kubernetes.io/projected/7408a6c2-235e-4149-9219-c5f71e983e62-kube-api-access-pn4l4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n\" (UID: \"7408a6c2-235e-4149-9219-c5f71e983e62\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.523381 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7408a6c2-235e-4149-9219-c5f71e983e62-ssh-key-openstack-edpm-ipam\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n\" (UID: \"7408a6c2-235e-4149-9219-c5f71e983e62\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.523626 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7408a6c2-235e-4149-9219-c5f71e983e62-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n\" (UID: \"7408a6c2-235e-4149-9219-c5f71e983e62\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.524056 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7408a6c2-235e-4149-9219-c5f71e983e62-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n\" (UID: \"7408a6c2-235e-4149-9219-c5f71e983e62\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.524282 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7408a6c2-235e-4149-9219-c5f71e983e62-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n\" (UID: \"7408a6c2-235e-4149-9219-c5f71e983e62\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.536023 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn4l4\" (UniqueName: \"kubernetes.io/projected/7408a6c2-235e-4149-9219-c5f71e983e62-kube-api-access-pn4l4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n\" (UID: \"7408a6c2-235e-4149-9219-c5f71e983e62\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n" Feb 26 22:33:51 crc kubenswrapper[4910]: I0226 22:33:51.643782 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n" Feb 26 22:33:52 crc kubenswrapper[4910]: I0226 22:33:52.267466 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n"] Feb 26 22:33:53 crc kubenswrapper[4910]: I0226 22:33:53.204943 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n" event={"ID":"7408a6c2-235e-4149-9219-c5f71e983e62","Type":"ContainerStarted","Data":"9b68ac2070757974d550c30936bc38cef07f181ee001a0021b1834fea99a8068"} Feb 26 22:33:53 crc kubenswrapper[4910]: I0226 22:33:53.205449 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n" event={"ID":"7408a6c2-235e-4149-9219-c5f71e983e62","Type":"ContainerStarted","Data":"51924132abedd2a70c071f887c204be0dc116e575e82a4e91a6ffc75b3c7120c"} Feb 26 22:33:53 crc kubenswrapper[4910]: I0226 22:33:53.231704 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n" podStartSLOduration=1.767118269 podStartE2EDuration="2.231684952s" podCreationTimestamp="2026-02-26 22:33:51 +0000 UTC" firstStartedPulling="2026-02-26 22:33:52.267309255 +0000 UTC m=+2317.346799816" lastFinishedPulling="2026-02-26 22:33:52.731875958 +0000 UTC m=+2317.811366499" observedRunningTime="2026-02-26 22:33:53.225413631 +0000 UTC m=+2318.304904172" watchObservedRunningTime="2026-02-26 22:33:53.231684952 +0000 UTC m=+2318.311175493" Feb 26 22:34:00 crc kubenswrapper[4910]: I0226 22:34:00.157011 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535754-gcdbg"] Feb 26 22:34:00 crc kubenswrapper[4910]: I0226 22:34:00.160400 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535754-gcdbg" Feb 26 22:34:00 crc kubenswrapper[4910]: I0226 22:34:00.163471 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 22:34:00 crc kubenswrapper[4910]: I0226 22:34:00.164543 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 22:34:00 crc kubenswrapper[4910]: I0226 22:34:00.165901 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 22:34:00 crc kubenswrapper[4910]: I0226 22:34:00.175504 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535754-gcdbg"] Feb 26 22:34:00 crc kubenswrapper[4910]: I0226 22:34:00.223347 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqhfh\" (UniqueName: \"kubernetes.io/projected/d77142ae-dd7f-443f-8083-a62047e6553e-kube-api-access-pqhfh\") pod \"auto-csr-approver-29535754-gcdbg\" (UID: \"d77142ae-dd7f-443f-8083-a62047e6553e\") " pod="openshift-infra/auto-csr-approver-29535754-gcdbg" Feb 26 22:34:00 crc kubenswrapper[4910]: I0226 22:34:00.326801 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqhfh\" (UniqueName: \"kubernetes.io/projected/d77142ae-dd7f-443f-8083-a62047e6553e-kube-api-access-pqhfh\") pod \"auto-csr-approver-29535754-gcdbg\" (UID: \"d77142ae-dd7f-443f-8083-a62047e6553e\") " pod="openshift-infra/auto-csr-approver-29535754-gcdbg" Feb 26 22:34:00 crc kubenswrapper[4910]: I0226 22:34:00.349987 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqhfh\" (UniqueName: \"kubernetes.io/projected/d77142ae-dd7f-443f-8083-a62047e6553e-kube-api-access-pqhfh\") pod \"auto-csr-approver-29535754-gcdbg\" (UID: \"d77142ae-dd7f-443f-8083-a62047e6553e\") " 
pod="openshift-infra/auto-csr-approver-29535754-gcdbg" Feb 26 22:34:00 crc kubenswrapper[4910]: I0226 22:34:00.506890 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535754-gcdbg" Feb 26 22:34:01 crc kubenswrapper[4910]: I0226 22:34:01.025698 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535754-gcdbg"] Feb 26 22:34:01 crc kubenswrapper[4910]: I0226 22:34:01.295224 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535754-gcdbg" event={"ID":"d77142ae-dd7f-443f-8083-a62047e6553e","Type":"ContainerStarted","Data":"63ab63765d9220368996f714445f3fa6f9612867c5b345cb26d035026e12d862"} Feb 26 22:34:02 crc kubenswrapper[4910]: I0226 22:34:02.308763 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535754-gcdbg" event={"ID":"d77142ae-dd7f-443f-8083-a62047e6553e","Type":"ContainerStarted","Data":"e1e8923ab2125b9655ead42d908c0436a59a083c7a6c40b739e0b0e6ee1cd27f"} Feb 26 22:34:02 crc kubenswrapper[4910]: I0226 22:34:02.323981 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535754-gcdbg" podStartSLOduration=1.36363224 podStartE2EDuration="2.323965557s" podCreationTimestamp="2026-02-26 22:34:00 +0000 UTC" firstStartedPulling="2026-02-26 22:34:01.028252499 +0000 UTC m=+2326.107743040" lastFinishedPulling="2026-02-26 22:34:01.988585816 +0000 UTC m=+2327.068076357" observedRunningTime="2026-02-26 22:34:02.323814573 +0000 UTC m=+2327.403305134" watchObservedRunningTime="2026-02-26 22:34:02.323965557 +0000 UTC m=+2327.403456098" Feb 26 22:34:03 crc kubenswrapper[4910]: I0226 22:34:03.319491 4910 generic.go:334] "Generic (PLEG): container finished" podID="d77142ae-dd7f-443f-8083-a62047e6553e" containerID="e1e8923ab2125b9655ead42d908c0436a59a083c7a6c40b739e0b0e6ee1cd27f" exitCode=0 Feb 26 22:34:03 crc 
kubenswrapper[4910]: I0226 22:34:03.319662 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535754-gcdbg" event={"ID":"d77142ae-dd7f-443f-8083-a62047e6553e","Type":"ContainerDied","Data":"e1e8923ab2125b9655ead42d908c0436a59a083c7a6c40b739e0b0e6ee1cd27f"} Feb 26 22:34:04 crc kubenswrapper[4910]: I0226 22:34:04.829705 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535754-gcdbg" Feb 26 22:34:04 crc kubenswrapper[4910]: I0226 22:34:04.977972 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqhfh\" (UniqueName: \"kubernetes.io/projected/d77142ae-dd7f-443f-8083-a62047e6553e-kube-api-access-pqhfh\") pod \"d77142ae-dd7f-443f-8083-a62047e6553e\" (UID: \"d77142ae-dd7f-443f-8083-a62047e6553e\") " Feb 26 22:34:04 crc kubenswrapper[4910]: I0226 22:34:04.986522 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d77142ae-dd7f-443f-8083-a62047e6553e-kube-api-access-pqhfh" (OuterVolumeSpecName: "kube-api-access-pqhfh") pod "d77142ae-dd7f-443f-8083-a62047e6553e" (UID: "d77142ae-dd7f-443f-8083-a62047e6553e"). InnerVolumeSpecName "kube-api-access-pqhfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:34:05 crc kubenswrapper[4910]: I0226 22:34:05.081057 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqhfh\" (UniqueName: \"kubernetes.io/projected/d77142ae-dd7f-443f-8083-a62047e6553e-kube-api-access-pqhfh\") on node \"crc\" DevicePath \"\"" Feb 26 22:34:05 crc kubenswrapper[4910]: I0226 22:34:05.342127 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535754-gcdbg" event={"ID":"d77142ae-dd7f-443f-8083-a62047e6553e","Type":"ContainerDied","Data":"63ab63765d9220368996f714445f3fa6f9612867c5b345cb26d035026e12d862"} Feb 26 22:34:05 crc kubenswrapper[4910]: I0226 22:34:05.342194 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63ab63765d9220368996f714445f3fa6f9612867c5b345cb26d035026e12d862" Feb 26 22:34:05 crc kubenswrapper[4910]: I0226 22:34:05.342254 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535754-gcdbg" Feb 26 22:34:05 crc kubenswrapper[4910]: I0226 22:34:05.414220 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535748-rnqqr"] Feb 26 22:34:05 crc kubenswrapper[4910]: I0226 22:34:05.425426 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535748-rnqqr"] Feb 26 22:34:05 crc kubenswrapper[4910]: I0226 22:34:05.986265 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="465f6383-fe83-4b80-9719-e17933b4054e" path="/var/lib/kubelet/pods/465f6383-fe83-4b80-9719-e17933b4054e/volumes" Feb 26 22:34:38 crc kubenswrapper[4910]: I0226 22:34:38.733918 4910 scope.go:117] "RemoveContainer" containerID="0d0aacb80788c60b42b841b26cb53e6fa314bd2ad4d419473fdc411cecc6a676" Feb 26 22:35:55 crc kubenswrapper[4910]: I0226 22:35:55.727224 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:35:55 crc kubenswrapper[4910]: I0226 22:35:55.727946 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:36:00 crc kubenswrapper[4910]: I0226 22:36:00.154368 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535756-h2hl8"] Feb 26 22:36:00 crc kubenswrapper[4910]: E0226 22:36:00.156858 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d77142ae-dd7f-443f-8083-a62047e6553e" containerName="oc" Feb 26 22:36:00 crc kubenswrapper[4910]: I0226 22:36:00.157005 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="d77142ae-dd7f-443f-8083-a62047e6553e" containerName="oc" Feb 26 22:36:00 crc kubenswrapper[4910]: I0226 22:36:00.157491 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="d77142ae-dd7f-443f-8083-a62047e6553e" containerName="oc" Feb 26 22:36:00 crc kubenswrapper[4910]: I0226 22:36:00.158933 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535756-h2hl8" Feb 26 22:36:00 crc kubenswrapper[4910]: I0226 22:36:00.161337 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 22:36:00 crc kubenswrapper[4910]: I0226 22:36:00.161575 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 22:36:00 crc kubenswrapper[4910]: I0226 22:36:00.161631 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 22:36:00 crc kubenswrapper[4910]: I0226 22:36:00.178180 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535756-h2hl8"] Feb 26 22:36:00 crc kubenswrapper[4910]: I0226 22:36:00.238937 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbjs8\" (UniqueName: \"kubernetes.io/projected/bc249d68-ea45-4301-b508-572a616bbb87-kube-api-access-wbjs8\") pod \"auto-csr-approver-29535756-h2hl8\" (UID: \"bc249d68-ea45-4301-b508-572a616bbb87\") " pod="openshift-infra/auto-csr-approver-29535756-h2hl8" Feb 26 22:36:00 crc kubenswrapper[4910]: I0226 22:36:00.341487 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbjs8\" (UniqueName: \"kubernetes.io/projected/bc249d68-ea45-4301-b508-572a616bbb87-kube-api-access-wbjs8\") pod \"auto-csr-approver-29535756-h2hl8\" (UID: \"bc249d68-ea45-4301-b508-572a616bbb87\") " pod="openshift-infra/auto-csr-approver-29535756-h2hl8" Feb 26 22:36:00 crc kubenswrapper[4910]: I0226 22:36:00.361015 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbjs8\" (UniqueName: \"kubernetes.io/projected/bc249d68-ea45-4301-b508-572a616bbb87-kube-api-access-wbjs8\") pod \"auto-csr-approver-29535756-h2hl8\" (UID: \"bc249d68-ea45-4301-b508-572a616bbb87\") " 
pod="openshift-infra/auto-csr-approver-29535756-h2hl8" Feb 26 22:36:00 crc kubenswrapper[4910]: I0226 22:36:00.484216 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535756-h2hl8" Feb 26 22:36:01 crc kubenswrapper[4910]: I0226 22:36:01.032258 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535756-h2hl8"] Feb 26 22:36:01 crc kubenswrapper[4910]: I0226 22:36:01.728839 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535756-h2hl8" event={"ID":"bc249d68-ea45-4301-b508-572a616bbb87","Type":"ContainerStarted","Data":"2891df40651e2d6f12f8ee8c2375d74ad3564bc224910f005c22072d61db4b34"} Feb 26 22:36:02 crc kubenswrapper[4910]: I0226 22:36:02.742542 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535756-h2hl8" event={"ID":"bc249d68-ea45-4301-b508-572a616bbb87","Type":"ContainerStarted","Data":"60b9772649900bd27d317dc22a8c85300531bcc056148c658674e31280d52edf"} Feb 26 22:36:02 crc kubenswrapper[4910]: I0226 22:36:02.759570 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535756-h2hl8" podStartSLOduration=1.625084904 podStartE2EDuration="2.759546816s" podCreationTimestamp="2026-02-26 22:36:00 +0000 UTC" firstStartedPulling="2026-02-26 22:36:01.033341139 +0000 UTC m=+2446.112831690" lastFinishedPulling="2026-02-26 22:36:02.167803051 +0000 UTC m=+2447.247293602" observedRunningTime="2026-02-26 22:36:02.757272063 +0000 UTC m=+2447.836762624" watchObservedRunningTime="2026-02-26 22:36:02.759546816 +0000 UTC m=+2447.839037377" Feb 26 22:36:03 crc kubenswrapper[4910]: I0226 22:36:03.758173 4910 generic.go:334] "Generic (PLEG): container finished" podID="bc249d68-ea45-4301-b508-572a616bbb87" containerID="60b9772649900bd27d317dc22a8c85300531bcc056148c658674e31280d52edf" exitCode=0 Feb 26 22:36:03 crc 
kubenswrapper[4910]: I0226 22:36:03.758227 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535756-h2hl8" event={"ID":"bc249d68-ea45-4301-b508-572a616bbb87","Type":"ContainerDied","Data":"60b9772649900bd27d317dc22a8c85300531bcc056148c658674e31280d52edf"} Feb 26 22:36:05 crc kubenswrapper[4910]: I0226 22:36:05.240530 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535756-h2hl8" Feb 26 22:36:05 crc kubenswrapper[4910]: I0226 22:36:05.360939 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbjs8\" (UniqueName: \"kubernetes.io/projected/bc249d68-ea45-4301-b508-572a616bbb87-kube-api-access-wbjs8\") pod \"bc249d68-ea45-4301-b508-572a616bbb87\" (UID: \"bc249d68-ea45-4301-b508-572a616bbb87\") " Feb 26 22:36:05 crc kubenswrapper[4910]: I0226 22:36:05.367683 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc249d68-ea45-4301-b508-572a616bbb87-kube-api-access-wbjs8" (OuterVolumeSpecName: "kube-api-access-wbjs8") pod "bc249d68-ea45-4301-b508-572a616bbb87" (UID: "bc249d68-ea45-4301-b508-572a616bbb87"). InnerVolumeSpecName "kube-api-access-wbjs8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:36:05 crc kubenswrapper[4910]: I0226 22:36:05.463472 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbjs8\" (UniqueName: \"kubernetes.io/projected/bc249d68-ea45-4301-b508-572a616bbb87-kube-api-access-wbjs8\") on node \"crc\" DevicePath \"\"" Feb 26 22:36:05 crc kubenswrapper[4910]: I0226 22:36:05.782573 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535756-h2hl8" event={"ID":"bc249d68-ea45-4301-b508-572a616bbb87","Type":"ContainerDied","Data":"2891df40651e2d6f12f8ee8c2375d74ad3564bc224910f005c22072d61db4b34"} Feb 26 22:36:05 crc kubenswrapper[4910]: I0226 22:36:05.782624 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2891df40651e2d6f12f8ee8c2375d74ad3564bc224910f005c22072d61db4b34" Feb 26 22:36:05 crc kubenswrapper[4910]: I0226 22:36:05.782665 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535756-h2hl8" Feb 26 22:36:05 crc kubenswrapper[4910]: I0226 22:36:05.862124 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535750-8n2rt"] Feb 26 22:36:05 crc kubenswrapper[4910]: I0226 22:36:05.873298 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535750-8n2rt"] Feb 26 22:36:05 crc kubenswrapper[4910]: I0226 22:36:05.926881 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf549ee0-b6a6-4060-8dfe-2477749974d8" path="/var/lib/kubelet/pods/cf549ee0-b6a6-4060-8dfe-2477749974d8/volumes" Feb 26 22:36:25 crc kubenswrapper[4910]: I0226 22:36:25.727351 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 26 22:36:25 crc kubenswrapper[4910]: I0226 22:36:25.728060 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:36:38 crc kubenswrapper[4910]: I0226 22:36:38.892962 4910 scope.go:117] "RemoveContainer" containerID="e14d070263cd06cef003940fc2bb8c6b7788d4003fd6776f78e3de1617d5d7c3" Feb 26 22:36:55 crc kubenswrapper[4910]: I0226 22:36:55.727834 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:36:55 crc kubenswrapper[4910]: I0226 22:36:55.728500 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:36:55 crc kubenswrapper[4910]: I0226 22:36:55.728571 4910 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" Feb 26 22:36:55 crc kubenswrapper[4910]: I0226 22:36:55.729724 4910 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20"} pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 
26 22:36:55 crc kubenswrapper[4910]: I0226 22:36:55.729814 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" containerID="cri-o://05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20" gracePeriod=600 Feb 26 22:36:55 crc kubenswrapper[4910]: E0226 22:36:55.887468 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:36:56 crc kubenswrapper[4910]: I0226 22:36:56.339928 4910 generic.go:334] "Generic (PLEG): container finished" podID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerID="05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20" exitCode=0 Feb 26 22:36:56 crc kubenswrapper[4910]: I0226 22:36:56.339977 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerDied","Data":"05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20"} Feb 26 22:36:56 crc kubenswrapper[4910]: I0226 22:36:56.340014 4910 scope.go:117] "RemoveContainer" containerID="82909def5d89987cafbb718608b443b77e385492453e4fd861b04d0417660d57" Feb 26 22:36:56 crc kubenswrapper[4910]: I0226 22:36:56.340886 4910 scope.go:117] "RemoveContainer" containerID="05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20" Feb 26 22:36:56 crc kubenswrapper[4910]: E0226 22:36:56.341289 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:37:08 crc kubenswrapper[4910]: I0226 22:37:08.901749 4910 scope.go:117] "RemoveContainer" containerID="05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20" Feb 26 22:37:08 crc kubenswrapper[4910]: E0226 22:37:08.903249 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:37:21 crc kubenswrapper[4910]: I0226 22:37:21.902098 4910 scope.go:117] "RemoveContainer" containerID="05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20" Feb 26 22:37:21 crc kubenswrapper[4910]: E0226 22:37:21.903728 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:37:34 crc kubenswrapper[4910]: I0226 22:37:34.901977 4910 scope.go:117] "RemoveContainer" containerID="05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20" Feb 26 22:37:34 crc kubenswrapper[4910]: E0226 22:37:34.902629 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:37:48 crc kubenswrapper[4910]: I0226 22:37:48.902932 4910 scope.go:117] "RemoveContainer" containerID="05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20" Feb 26 22:37:48 crc kubenswrapper[4910]: E0226 22:37:48.904023 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:37:52 crc kubenswrapper[4910]: I0226 22:37:52.970381 4910 generic.go:334] "Generic (PLEG): container finished" podID="7408a6c2-235e-4149-9219-c5f71e983e62" containerID="9b68ac2070757974d550c30936bc38cef07f181ee001a0021b1834fea99a8068" exitCode=0 Feb 26 22:37:52 crc kubenswrapper[4910]: I0226 22:37:52.970484 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n" event={"ID":"7408a6c2-235e-4149-9219-c5f71e983e62","Type":"ContainerDied","Data":"9b68ac2070757974d550c30936bc38cef07f181ee001a0021b1834fea99a8068"} Feb 26 22:37:54 crc kubenswrapper[4910]: I0226 22:37:54.630496 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n" Feb 26 22:37:54 crc kubenswrapper[4910]: I0226 22:37:54.727019 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn4l4\" (UniqueName: \"kubernetes.io/projected/7408a6c2-235e-4149-9219-c5f71e983e62-kube-api-access-pn4l4\") pod \"7408a6c2-235e-4149-9219-c5f71e983e62\" (UID: \"7408a6c2-235e-4149-9219-c5f71e983e62\") " Feb 26 22:37:54 crc kubenswrapper[4910]: I0226 22:37:54.727102 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7408a6c2-235e-4149-9219-c5f71e983e62-inventory\") pod \"7408a6c2-235e-4149-9219-c5f71e983e62\" (UID: \"7408a6c2-235e-4149-9219-c5f71e983e62\") " Feb 26 22:37:54 crc kubenswrapper[4910]: I0226 22:37:54.727243 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7408a6c2-235e-4149-9219-c5f71e983e62-libvirt-combined-ca-bundle\") pod \"7408a6c2-235e-4149-9219-c5f71e983e62\" (UID: \"7408a6c2-235e-4149-9219-c5f71e983e62\") " Feb 26 22:37:54 crc kubenswrapper[4910]: I0226 22:37:54.727328 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7408a6c2-235e-4149-9219-c5f71e983e62-libvirt-secret-0\") pod \"7408a6c2-235e-4149-9219-c5f71e983e62\" (UID: \"7408a6c2-235e-4149-9219-c5f71e983e62\") " Feb 26 22:37:54 crc kubenswrapper[4910]: I0226 22:37:54.727420 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7408a6c2-235e-4149-9219-c5f71e983e62-ssh-key-openstack-edpm-ipam\") pod \"7408a6c2-235e-4149-9219-c5f71e983e62\" (UID: \"7408a6c2-235e-4149-9219-c5f71e983e62\") " Feb 26 22:37:54 crc kubenswrapper[4910]: I0226 22:37:54.733102 4910 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7408a6c2-235e-4149-9219-c5f71e983e62-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "7408a6c2-235e-4149-9219-c5f71e983e62" (UID: "7408a6c2-235e-4149-9219-c5f71e983e62"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:37:54 crc kubenswrapper[4910]: I0226 22:37:54.733276 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7408a6c2-235e-4149-9219-c5f71e983e62-kube-api-access-pn4l4" (OuterVolumeSpecName: "kube-api-access-pn4l4") pod "7408a6c2-235e-4149-9219-c5f71e983e62" (UID: "7408a6c2-235e-4149-9219-c5f71e983e62"). InnerVolumeSpecName "kube-api-access-pn4l4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:37:54 crc kubenswrapper[4910]: I0226 22:37:54.757951 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7408a6c2-235e-4149-9219-c5f71e983e62-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7408a6c2-235e-4149-9219-c5f71e983e62" (UID: "7408a6c2-235e-4149-9219-c5f71e983e62"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:37:54 crc kubenswrapper[4910]: I0226 22:37:54.758908 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7408a6c2-235e-4149-9219-c5f71e983e62-inventory" (OuterVolumeSpecName: "inventory") pod "7408a6c2-235e-4149-9219-c5f71e983e62" (UID: "7408a6c2-235e-4149-9219-c5f71e983e62"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:37:54 crc kubenswrapper[4910]: I0226 22:37:54.768493 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7408a6c2-235e-4149-9219-c5f71e983e62-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "7408a6c2-235e-4149-9219-c5f71e983e62" (UID: "7408a6c2-235e-4149-9219-c5f71e983e62"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:37:54 crc kubenswrapper[4910]: I0226 22:37:54.830607 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn4l4\" (UniqueName: \"kubernetes.io/projected/7408a6c2-235e-4149-9219-c5f71e983e62-kube-api-access-pn4l4\") on node \"crc\" DevicePath \"\"" Feb 26 22:37:54 crc kubenswrapper[4910]: I0226 22:37:54.830663 4910 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7408a6c2-235e-4149-9219-c5f71e983e62-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 22:37:54 crc kubenswrapper[4910]: I0226 22:37:54.830682 4910 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7408a6c2-235e-4149-9219-c5f71e983e62-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:37:54 crc kubenswrapper[4910]: I0226 22:37:54.830702 4910 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7408a6c2-235e-4149-9219-c5f71e983e62-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 26 22:37:54 crc kubenswrapper[4910]: I0226 22:37:54.830721 4910 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7408a6c2-235e-4149-9219-c5f71e983e62-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.007838 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n" event={"ID":"7408a6c2-235e-4149-9219-c5f71e983e62","Type":"ContainerDied","Data":"51924132abedd2a70c071f887c204be0dc116e575e82a4e91a6ffc75b3c7120c"} Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.007901 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51924132abedd2a70c071f887c204be0dc116e575e82a4e91a6ffc75b3c7120c" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.007937 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.156740 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp"] Feb 26 22:37:55 crc kubenswrapper[4910]: E0226 22:37:55.157876 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc249d68-ea45-4301-b508-572a616bbb87" containerName="oc" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.157912 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc249d68-ea45-4301-b508-572a616bbb87" containerName="oc" Feb 26 22:37:55 crc kubenswrapper[4910]: E0226 22:37:55.157980 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7408a6c2-235e-4149-9219-c5f71e983e62" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.157995 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="7408a6c2-235e-4149-9219-c5f71e983e62" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.158382 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc249d68-ea45-4301-b508-572a616bbb87" containerName="oc" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.158433 4910 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7408a6c2-235e-4149-9219-c5f71e983e62" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.159565 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.163273 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.163284 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.163416 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ktmgl" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.163857 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.164075 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.165274 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp"] Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.168958 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.169016 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.238749 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.238819 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.238855 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dtxc\" (UniqueName: \"kubernetes.io/projected/42c909bc-0493-4de3-882f-c6ebf8967f27-kube-api-access-5dtxc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.239002 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.239111 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-cell1-compute-config-2\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.239151 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.239311 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.239455 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.239517 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc 
kubenswrapper[4910]: I0226 22:37:55.239556 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.239596 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.342078 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.342250 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.342316 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.342354 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.342393 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.342457 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.342499 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dtxc\" (UniqueName: \"kubernetes.io/projected/42c909bc-0493-4de3-882f-c6ebf8967f27-kube-api-access-5dtxc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.342534 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.342619 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.342710 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.342744 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.343102 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.349187 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.349208 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.349601 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.350422 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.350535 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.350646 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.352427 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.352790 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.356289 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.375766 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dtxc\" (UniqueName: \"kubernetes.io/projected/42c909bc-0493-4de3-882f-c6ebf8967f27-kube-api-access-5dtxc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-xzwzp\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:55 crc kubenswrapper[4910]: I0226 22:37:55.478297 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:37:56 crc kubenswrapper[4910]: W0226 22:37:56.119814 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42c909bc_0493_4de3_882f_c6ebf8967f27.slice/crio-9d577283f2d08fa027509477b6ae1bf53cd42640dfbee512268914946cb28e75 WatchSource:0}: Error finding container 9d577283f2d08fa027509477b6ae1bf53cd42640dfbee512268914946cb28e75: Status 404 returned error can't find the container with id 9d577283f2d08fa027509477b6ae1bf53cd42640dfbee512268914946cb28e75 Feb 26 22:37:56 crc kubenswrapper[4910]: I0226 22:37:56.123213 4910 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 22:37:56 crc kubenswrapper[4910]: I0226 22:37:56.131026 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp"] Feb 26 22:37:57 crc kubenswrapper[4910]: I0226 22:37:57.033810 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" 
event={"ID":"42c909bc-0493-4de3-882f-c6ebf8967f27","Type":"ContainerStarted","Data":"552dfae4cf496b51b4a3913416b596ebde724bd89596bff69753ccbd8d2eef54"} Feb 26 22:37:57 crc kubenswrapper[4910]: I0226 22:37:57.034500 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" event={"ID":"42c909bc-0493-4de3-882f-c6ebf8967f27","Type":"ContainerStarted","Data":"9d577283f2d08fa027509477b6ae1bf53cd42640dfbee512268914946cb28e75"} Feb 26 22:37:57 crc kubenswrapper[4910]: I0226 22:37:57.063101 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" podStartSLOduration=1.59663854 podStartE2EDuration="2.06307769s" podCreationTimestamp="2026-02-26 22:37:55 +0000 UTC" firstStartedPulling="2026-02-26 22:37:56.122940452 +0000 UTC m=+2561.202431003" lastFinishedPulling="2026-02-26 22:37:56.589379572 +0000 UTC m=+2561.668870153" observedRunningTime="2026-02-26 22:37:57.056450559 +0000 UTC m=+2562.135941130" watchObservedRunningTime="2026-02-26 22:37:57.06307769 +0000 UTC m=+2562.142568241" Feb 26 22:38:00 crc kubenswrapper[4910]: I0226 22:38:00.132507 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535758-rxshz"] Feb 26 22:38:00 crc kubenswrapper[4910]: I0226 22:38:00.136339 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535758-rxshz" Feb 26 22:38:00 crc kubenswrapper[4910]: I0226 22:38:00.139439 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 22:38:00 crc kubenswrapper[4910]: I0226 22:38:00.139459 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 22:38:00 crc kubenswrapper[4910]: I0226 22:38:00.140950 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 22:38:00 crc kubenswrapper[4910]: I0226 22:38:00.147956 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535758-rxshz"] Feb 26 22:38:00 crc kubenswrapper[4910]: I0226 22:38:00.270017 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6bbp\" (UniqueName: \"kubernetes.io/projected/382986d0-2a30-4690-97e4-a25b805cf0e5-kube-api-access-f6bbp\") pod \"auto-csr-approver-29535758-rxshz\" (UID: \"382986d0-2a30-4690-97e4-a25b805cf0e5\") " pod="openshift-infra/auto-csr-approver-29535758-rxshz" Feb 26 22:38:00 crc kubenswrapper[4910]: I0226 22:38:00.372283 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6bbp\" (UniqueName: \"kubernetes.io/projected/382986d0-2a30-4690-97e4-a25b805cf0e5-kube-api-access-f6bbp\") pod \"auto-csr-approver-29535758-rxshz\" (UID: \"382986d0-2a30-4690-97e4-a25b805cf0e5\") " pod="openshift-infra/auto-csr-approver-29535758-rxshz" Feb 26 22:38:00 crc kubenswrapper[4910]: I0226 22:38:00.408390 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6bbp\" (UniqueName: \"kubernetes.io/projected/382986d0-2a30-4690-97e4-a25b805cf0e5-kube-api-access-f6bbp\") pod \"auto-csr-approver-29535758-rxshz\" (UID: \"382986d0-2a30-4690-97e4-a25b805cf0e5\") " 
pod="openshift-infra/auto-csr-approver-29535758-rxshz" Feb 26 22:38:00 crc kubenswrapper[4910]: I0226 22:38:00.469875 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535758-rxshz" Feb 26 22:38:01 crc kubenswrapper[4910]: I0226 22:38:01.111598 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535758-rxshz"] Feb 26 22:38:01 crc kubenswrapper[4910]: W0226 22:38:01.113075 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod382986d0_2a30_4690_97e4_a25b805cf0e5.slice/crio-464ab322e58850bcd94d6122a3049bd038ac5cdeca5368785f44df700250d5a4 WatchSource:0}: Error finding container 464ab322e58850bcd94d6122a3049bd038ac5cdeca5368785f44df700250d5a4: Status 404 returned error can't find the container with id 464ab322e58850bcd94d6122a3049bd038ac5cdeca5368785f44df700250d5a4 Feb 26 22:38:02 crc kubenswrapper[4910]: I0226 22:38:02.093077 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535758-rxshz" event={"ID":"382986d0-2a30-4690-97e4-a25b805cf0e5","Type":"ContainerStarted","Data":"464ab322e58850bcd94d6122a3049bd038ac5cdeca5368785f44df700250d5a4"} Feb 26 22:38:03 crc kubenswrapper[4910]: I0226 22:38:03.105468 4910 generic.go:334] "Generic (PLEG): container finished" podID="382986d0-2a30-4690-97e4-a25b805cf0e5" containerID="d0d83664fde9de447f2c7a3d3c2793a9716b3995e4159236186c0f4262600411" exitCode=0 Feb 26 22:38:03 crc kubenswrapper[4910]: I0226 22:38:03.105547 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535758-rxshz" event={"ID":"382986d0-2a30-4690-97e4-a25b805cf0e5","Type":"ContainerDied","Data":"d0d83664fde9de447f2c7a3d3c2793a9716b3995e4159236186c0f4262600411"} Feb 26 22:38:03 crc kubenswrapper[4910]: I0226 22:38:03.902440 4910 scope.go:117] "RemoveContainer" 
containerID="05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20" Feb 26 22:38:03 crc kubenswrapper[4910]: E0226 22:38:03.903398 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:38:04 crc kubenswrapper[4910]: I0226 22:38:04.635415 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535758-rxshz" Feb 26 22:38:04 crc kubenswrapper[4910]: I0226 22:38:04.684576 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6bbp\" (UniqueName: \"kubernetes.io/projected/382986d0-2a30-4690-97e4-a25b805cf0e5-kube-api-access-f6bbp\") pod \"382986d0-2a30-4690-97e4-a25b805cf0e5\" (UID: \"382986d0-2a30-4690-97e4-a25b805cf0e5\") " Feb 26 22:38:04 crc kubenswrapper[4910]: I0226 22:38:04.691876 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/382986d0-2a30-4690-97e4-a25b805cf0e5-kube-api-access-f6bbp" (OuterVolumeSpecName: "kube-api-access-f6bbp") pod "382986d0-2a30-4690-97e4-a25b805cf0e5" (UID: "382986d0-2a30-4690-97e4-a25b805cf0e5"). InnerVolumeSpecName "kube-api-access-f6bbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:38:04 crc kubenswrapper[4910]: I0226 22:38:04.787016 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6bbp\" (UniqueName: \"kubernetes.io/projected/382986d0-2a30-4690-97e4-a25b805cf0e5-kube-api-access-f6bbp\") on node \"crc\" DevicePath \"\"" Feb 26 22:38:05 crc kubenswrapper[4910]: I0226 22:38:05.128248 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535758-rxshz" event={"ID":"382986d0-2a30-4690-97e4-a25b805cf0e5","Type":"ContainerDied","Data":"464ab322e58850bcd94d6122a3049bd038ac5cdeca5368785f44df700250d5a4"} Feb 26 22:38:05 crc kubenswrapper[4910]: I0226 22:38:05.128598 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="464ab322e58850bcd94d6122a3049bd038ac5cdeca5368785f44df700250d5a4" Feb 26 22:38:05 crc kubenswrapper[4910]: I0226 22:38:05.128397 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535758-rxshz" Feb 26 22:38:05 crc kubenswrapper[4910]: I0226 22:38:05.754487 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535752-85w8v"] Feb 26 22:38:05 crc kubenswrapper[4910]: I0226 22:38:05.773273 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535752-85w8v"] Feb 26 22:38:05 crc kubenswrapper[4910]: I0226 22:38:05.920065 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f5cf655-36d4-4e13-a14b-ac89a3ae680d" path="/var/lib/kubelet/pods/4f5cf655-36d4-4e13-a14b-ac89a3ae680d/volumes" Feb 26 22:38:14 crc kubenswrapper[4910]: I0226 22:38:14.902198 4910 scope.go:117] "RemoveContainer" containerID="05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20" Feb 26 22:38:14 crc kubenswrapper[4910]: E0226 22:38:14.905250 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:38:26 crc kubenswrapper[4910]: I0226 22:38:26.901493 4910 scope.go:117] "RemoveContainer" containerID="05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20" Feb 26 22:38:26 crc kubenswrapper[4910]: E0226 22:38:26.902370 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:38:39 crc kubenswrapper[4910]: I0226 22:38:39.033082 4910 scope.go:117] "RemoveContainer" containerID="7f68ccb2e0f00545afcf8e656638440bf442b5ff854ae3e13197746ffb00e5b1" Feb 26 22:38:39 crc kubenswrapper[4910]: I0226 22:38:39.903022 4910 scope.go:117] "RemoveContainer" containerID="05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20" Feb 26 22:38:39 crc kubenswrapper[4910]: E0226 22:38:39.904262 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:38:52 crc kubenswrapper[4910]: I0226 22:38:52.902087 4910 scope.go:117] "RemoveContainer" 
containerID="05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20" Feb 26 22:38:52 crc kubenswrapper[4910]: E0226 22:38:52.903093 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:39:06 crc kubenswrapper[4910]: I0226 22:39:06.902006 4910 scope.go:117] "RemoveContainer" containerID="05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20" Feb 26 22:39:06 crc kubenswrapper[4910]: E0226 22:39:06.902967 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:39:19 crc kubenswrapper[4910]: I0226 22:39:19.904290 4910 scope.go:117] "RemoveContainer" containerID="05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20" Feb 26 22:39:19 crc kubenswrapper[4910]: E0226 22:39:19.905699 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:39:30 crc kubenswrapper[4910]: I0226 22:39:30.902334 4910 scope.go:117] 
"RemoveContainer" containerID="05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20" Feb 26 22:39:30 crc kubenswrapper[4910]: E0226 22:39:30.903314 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:39:43 crc kubenswrapper[4910]: I0226 22:39:43.903091 4910 scope.go:117] "RemoveContainer" containerID="05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20" Feb 26 22:39:43 crc kubenswrapper[4910]: E0226 22:39:43.904708 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:39:55 crc kubenswrapper[4910]: I0226 22:39:55.922665 4910 scope.go:117] "RemoveContainer" containerID="05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20" Feb 26 22:39:55 crc kubenswrapper[4910]: E0226 22:39:55.923761 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:40:00 crc kubenswrapper[4910]: I0226 22:40:00.151195 
4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535760-7zt9x"] Feb 26 22:40:00 crc kubenswrapper[4910]: E0226 22:40:00.152593 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="382986d0-2a30-4690-97e4-a25b805cf0e5" containerName="oc" Feb 26 22:40:00 crc kubenswrapper[4910]: I0226 22:40:00.152618 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="382986d0-2a30-4690-97e4-a25b805cf0e5" containerName="oc" Feb 26 22:40:00 crc kubenswrapper[4910]: I0226 22:40:00.153318 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="382986d0-2a30-4690-97e4-a25b805cf0e5" containerName="oc" Feb 26 22:40:00 crc kubenswrapper[4910]: I0226 22:40:00.154655 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535760-7zt9x" Feb 26 22:40:00 crc kubenswrapper[4910]: I0226 22:40:00.156878 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 22:40:00 crc kubenswrapper[4910]: I0226 22:40:00.157426 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 22:40:00 crc kubenswrapper[4910]: I0226 22:40:00.157780 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 22:40:00 crc kubenswrapper[4910]: I0226 22:40:00.175650 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535760-7zt9x"] Feb 26 22:40:00 crc kubenswrapper[4910]: I0226 22:40:00.356013 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8dwg\" (UniqueName: \"kubernetes.io/projected/9899786c-c81c-416a-bded-79c98b9240fa-kube-api-access-p8dwg\") pod \"auto-csr-approver-29535760-7zt9x\" (UID: \"9899786c-c81c-416a-bded-79c98b9240fa\") " 
pod="openshift-infra/auto-csr-approver-29535760-7zt9x" Feb 26 22:40:00 crc kubenswrapper[4910]: I0226 22:40:00.458724 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8dwg\" (UniqueName: \"kubernetes.io/projected/9899786c-c81c-416a-bded-79c98b9240fa-kube-api-access-p8dwg\") pod \"auto-csr-approver-29535760-7zt9x\" (UID: \"9899786c-c81c-416a-bded-79c98b9240fa\") " pod="openshift-infra/auto-csr-approver-29535760-7zt9x" Feb 26 22:40:00 crc kubenswrapper[4910]: I0226 22:40:00.484797 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8dwg\" (UniqueName: \"kubernetes.io/projected/9899786c-c81c-416a-bded-79c98b9240fa-kube-api-access-p8dwg\") pod \"auto-csr-approver-29535760-7zt9x\" (UID: \"9899786c-c81c-416a-bded-79c98b9240fa\") " pod="openshift-infra/auto-csr-approver-29535760-7zt9x" Feb 26 22:40:00 crc kubenswrapper[4910]: I0226 22:40:00.489246 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535760-7zt9x" Feb 26 22:40:01 crc kubenswrapper[4910]: I0226 22:40:01.007305 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535760-7zt9x"] Feb 26 22:40:01 crc kubenswrapper[4910]: I0226 22:40:01.614092 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535760-7zt9x" event={"ID":"9899786c-c81c-416a-bded-79c98b9240fa","Type":"ContainerStarted","Data":"8511a6ff91351c3d47042c2400a10c1384dd1c818850abb71adc19d2c7cfa408"} Feb 26 22:40:03 crc kubenswrapper[4910]: I0226 22:40:03.641734 4910 generic.go:334] "Generic (PLEG): container finished" podID="9899786c-c81c-416a-bded-79c98b9240fa" containerID="feb000216a7779207592fcfdd87ce29b20747033f65306c85d6bf5f5e892ed3d" exitCode=0 Feb 26 22:40:03 crc kubenswrapper[4910]: I0226 22:40:03.641873 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29535760-7zt9x" event={"ID":"9899786c-c81c-416a-bded-79c98b9240fa","Type":"ContainerDied","Data":"feb000216a7779207592fcfdd87ce29b20747033f65306c85d6bf5f5e892ed3d"} Feb 26 22:40:05 crc kubenswrapper[4910]: I0226 22:40:05.150422 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535760-7zt9x" Feb 26 22:40:05 crc kubenswrapper[4910]: I0226 22:40:05.194897 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8dwg\" (UniqueName: \"kubernetes.io/projected/9899786c-c81c-416a-bded-79c98b9240fa-kube-api-access-p8dwg\") pod \"9899786c-c81c-416a-bded-79c98b9240fa\" (UID: \"9899786c-c81c-416a-bded-79c98b9240fa\") " Feb 26 22:40:05 crc kubenswrapper[4910]: I0226 22:40:05.208189 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9899786c-c81c-416a-bded-79c98b9240fa-kube-api-access-p8dwg" (OuterVolumeSpecName: "kube-api-access-p8dwg") pod "9899786c-c81c-416a-bded-79c98b9240fa" (UID: "9899786c-c81c-416a-bded-79c98b9240fa"). InnerVolumeSpecName "kube-api-access-p8dwg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:40:05 crc kubenswrapper[4910]: I0226 22:40:05.298639 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8dwg\" (UniqueName: \"kubernetes.io/projected/9899786c-c81c-416a-bded-79c98b9240fa-kube-api-access-p8dwg\") on node \"crc\" DevicePath \"\"" Feb 26 22:40:05 crc kubenswrapper[4910]: I0226 22:40:05.667417 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535760-7zt9x" event={"ID":"9899786c-c81c-416a-bded-79c98b9240fa","Type":"ContainerDied","Data":"8511a6ff91351c3d47042c2400a10c1384dd1c818850abb71adc19d2c7cfa408"} Feb 26 22:40:05 crc kubenswrapper[4910]: I0226 22:40:05.667462 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8511a6ff91351c3d47042c2400a10c1384dd1c818850abb71adc19d2c7cfa408" Feb 26 22:40:05 crc kubenswrapper[4910]: I0226 22:40:05.667534 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535760-7zt9x" Feb 26 22:40:06 crc kubenswrapper[4910]: I0226 22:40:06.259312 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535754-gcdbg"] Feb 26 22:40:06 crc kubenswrapper[4910]: I0226 22:40:06.278189 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535754-gcdbg"] Feb 26 22:40:07 crc kubenswrapper[4910]: I0226 22:40:07.916690 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d77142ae-dd7f-443f-8083-a62047e6553e" path="/var/lib/kubelet/pods/d77142ae-dd7f-443f-8083-a62047e6553e/volumes" Feb 26 22:40:08 crc kubenswrapper[4910]: I0226 22:40:08.902825 4910 scope.go:117] "RemoveContainer" containerID="05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20" Feb 26 22:40:08 crc kubenswrapper[4910]: E0226 22:40:08.903330 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:40:20 crc kubenswrapper[4910]: I0226 22:40:20.902610 4910 scope.go:117] "RemoveContainer" containerID="05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20" Feb 26 22:40:20 crc kubenswrapper[4910]: E0226 22:40:20.903464 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:40:30 crc kubenswrapper[4910]: I0226 22:40:30.949486 4910 generic.go:334] "Generic (PLEG): container finished" podID="42c909bc-0493-4de3-882f-c6ebf8967f27" containerID="552dfae4cf496b51b4a3913416b596ebde724bd89596bff69753ccbd8d2eef54" exitCode=0 Feb 26 22:40:30 crc kubenswrapper[4910]: I0226 22:40:30.949603 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" event={"ID":"42c909bc-0493-4de3-882f-c6ebf8967f27","Type":"ContainerDied","Data":"552dfae4cf496b51b4a3913416b596ebde724bd89596bff69753ccbd8d2eef54"} Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.437265 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.586885 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-cell1-compute-config-3\") pod \"42c909bc-0493-4de3-882f-c6ebf8967f27\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.586958 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-extra-config-0\") pod \"42c909bc-0493-4de3-882f-c6ebf8967f27\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.586988 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-inventory\") pod \"42c909bc-0493-4de3-882f-c6ebf8967f27\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.587118 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dtxc\" (UniqueName: \"kubernetes.io/projected/42c909bc-0493-4de3-882f-c6ebf8967f27-kube-api-access-5dtxc\") pod \"42c909bc-0493-4de3-882f-c6ebf8967f27\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.587243 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-ssh-key-openstack-edpm-ipam\") pod \"42c909bc-0493-4de3-882f-c6ebf8967f27\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.587346 4910 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-migration-ssh-key-1\") pod \"42c909bc-0493-4de3-882f-c6ebf8967f27\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.587423 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-cell1-compute-config-2\") pod \"42c909bc-0493-4de3-882f-c6ebf8967f27\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.587459 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-cell1-compute-config-0\") pod \"42c909bc-0493-4de3-882f-c6ebf8967f27\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.587496 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-cell1-compute-config-1\") pod \"42c909bc-0493-4de3-882f-c6ebf8967f27\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.587518 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-combined-ca-bundle\") pod \"42c909bc-0493-4de3-882f-c6ebf8967f27\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.587565 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-migration-ssh-key-0\") pod \"42c909bc-0493-4de3-882f-c6ebf8967f27\" (UID: \"42c909bc-0493-4de3-882f-c6ebf8967f27\") " Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.598343 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "42c909bc-0493-4de3-882f-c6ebf8967f27" (UID: "42c909bc-0493-4de3-882f-c6ebf8967f27"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.600679 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42c909bc-0493-4de3-882f-c6ebf8967f27-kube-api-access-5dtxc" (OuterVolumeSpecName: "kube-api-access-5dtxc") pod "42c909bc-0493-4de3-882f-c6ebf8967f27" (UID: "42c909bc-0493-4de3-882f-c6ebf8967f27"). InnerVolumeSpecName "kube-api-access-5dtxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.620488 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "42c909bc-0493-4de3-882f-c6ebf8967f27" (UID: "42c909bc-0493-4de3-882f-c6ebf8967f27"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.626936 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-inventory" (OuterVolumeSpecName: "inventory") pod "42c909bc-0493-4de3-882f-c6ebf8967f27" (UID: "42c909bc-0493-4de3-882f-c6ebf8967f27"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.635235 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "42c909bc-0493-4de3-882f-c6ebf8967f27" (UID: "42c909bc-0493-4de3-882f-c6ebf8967f27"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.635447 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "42c909bc-0493-4de3-882f-c6ebf8967f27" (UID: "42c909bc-0493-4de3-882f-c6ebf8967f27"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.635409 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "42c909bc-0493-4de3-882f-c6ebf8967f27" (UID: "42c909bc-0493-4de3-882f-c6ebf8967f27"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.656997 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "42c909bc-0493-4de3-882f-c6ebf8967f27" (UID: "42c909bc-0493-4de3-882f-c6ebf8967f27"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.663486 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "42c909bc-0493-4de3-882f-c6ebf8967f27" (UID: "42c909bc-0493-4de3-882f-c6ebf8967f27"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.671270 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "42c909bc-0493-4de3-882f-c6ebf8967f27" (UID: "42c909bc-0493-4de3-882f-c6ebf8967f27"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.678506 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "42c909bc-0493-4de3-882f-c6ebf8967f27" (UID: "42c909bc-0493-4de3-882f-c6ebf8967f27"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.690282 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dtxc\" (UniqueName: \"kubernetes.io/projected/42c909bc-0493-4de3-882f-c6ebf8967f27-kube-api-access-5dtxc\") on node \"crc\" DevicePath \"\"" Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.690319 4910 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.690328 4910 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.690337 4910 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.690347 4910 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.690357 4910 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.690365 4910 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.690373 4910 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.690380 4910 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.690389 4910 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/42c909bc-0493-4de3-882f-c6ebf8967f27-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.690399 4910 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42c909bc-0493-4de3-882f-c6ebf8967f27-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.976575 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" event={"ID":"42c909bc-0493-4de3-882f-c6ebf8967f27","Type":"ContainerDied","Data":"9d577283f2d08fa027509477b6ae1bf53cd42640dfbee512268914946cb28e75"} Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.976624 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d577283f2d08fa027509477b6ae1bf53cd42640dfbee512268914946cb28e75" Feb 26 22:40:32 crc kubenswrapper[4910]: I0226 22:40:32.976692 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-xzwzp" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.096125 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z"] Feb 26 22:40:33 crc kubenswrapper[4910]: E0226 22:40:33.096575 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9899786c-c81c-416a-bded-79c98b9240fa" containerName="oc" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.096591 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="9899786c-c81c-416a-bded-79c98b9240fa" containerName="oc" Feb 26 22:40:33 crc kubenswrapper[4910]: E0226 22:40:33.096610 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c909bc-0493-4de3-882f-c6ebf8967f27" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.096618 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c909bc-0493-4de3-882f-c6ebf8967f27" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.096823 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c909bc-0493-4de3-882f-c6ebf8967f27" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.096839 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="9899786c-c81c-416a-bded-79c98b9240fa" containerName="oc" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.098036 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.101313 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.102104 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.102111 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ktmgl" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.102220 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.102325 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.123123 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z"] Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.200666 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sj95z\" (UID: \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.200732 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-ceilometer-compute-config-data-0\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-sj95z\" (UID: \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.200788 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9d56\" (UniqueName: \"kubernetes.io/projected/975c1c11-dac1-4a07-bd11-3ef32ccf0449-kube-api-access-p9d56\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sj95z\" (UID: \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.200850 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sj95z\" (UID: \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.200910 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sj95z\" (UID: \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.201172 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sj95z\" (UID: \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.201500 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sj95z\" (UID: \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.303637 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sj95z\" (UID: \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.303740 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sj95z\" (UID: \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.303760 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sj95z\" (UID: \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" Feb 26 22:40:33 crc kubenswrapper[4910]: 
I0226 22:40:33.303782 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9d56\" (UniqueName: \"kubernetes.io/projected/975c1c11-dac1-4a07-bd11-3ef32ccf0449-kube-api-access-p9d56\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sj95z\" (UID: \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.303816 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sj95z\" (UID: \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.303844 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sj95z\" (UID: \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.303877 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sj95z\" (UID: \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.308492 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sj95z\" (UID: \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.313263 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sj95z\" (UID: \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.315047 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sj95z\" (UID: \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.315378 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sj95z\" (UID: \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.316608 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sj95z\" (UID: \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.317839 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sj95z\" (UID: \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.329121 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9d56\" (UniqueName: \"kubernetes.io/projected/975c1c11-dac1-4a07-bd11-3ef32ccf0449-kube-api-access-p9d56\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sj95z\" (UID: \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" Feb 26 22:40:33 crc kubenswrapper[4910]: I0226 22:40:33.414589 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" Feb 26 22:40:34 crc kubenswrapper[4910]: I0226 22:40:34.042277 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z"] Feb 26 22:40:34 crc kubenswrapper[4910]: I0226 22:40:34.901587 4910 scope.go:117] "RemoveContainer" containerID="05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20" Feb 26 22:40:34 crc kubenswrapper[4910]: E0226 22:40:34.902424 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:40:35 crc kubenswrapper[4910]: I0226 22:40:35.006016 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" event={"ID":"975c1c11-dac1-4a07-bd11-3ef32ccf0449","Type":"ContainerStarted","Data":"f4de68457af9e6562d7e52c86dfac03dd4c7fb0cdd45c79df9ff7ffb38a145d8"} Feb 26 22:40:35 crc kubenswrapper[4910]: I0226 22:40:35.006085 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" event={"ID":"975c1c11-dac1-4a07-bd11-3ef32ccf0449","Type":"ContainerStarted","Data":"dc6156a9ffa5b21a172963e116ea14be1f1696bca5eb7990f69a6c3dc38a0fa1"} Feb 26 22:40:35 crc kubenswrapper[4910]: I0226 22:40:35.024331 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" podStartSLOduration=1.569032554 podStartE2EDuration="2.024295909s" podCreationTimestamp="2026-02-26 22:40:33 +0000 UTC" firstStartedPulling="2026-02-26 
22:40:34.051678817 +0000 UTC m=+2719.131169398" lastFinishedPulling="2026-02-26 22:40:34.506942172 +0000 UTC m=+2719.586432753" observedRunningTime="2026-02-26 22:40:35.023428236 +0000 UTC m=+2720.102918807" watchObservedRunningTime="2026-02-26 22:40:35.024295909 +0000 UTC m=+2720.103786480" Feb 26 22:40:39 crc kubenswrapper[4910]: I0226 22:40:39.159602 4910 scope.go:117] "RemoveContainer" containerID="e1e8923ab2125b9655ead42d908c0436a59a083c7a6c40b739e0b0e6ee1cd27f" Feb 26 22:40:49 crc kubenswrapper[4910]: I0226 22:40:49.902396 4910 scope.go:117] "RemoveContainer" containerID="05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20" Feb 26 22:40:49 crc kubenswrapper[4910]: E0226 22:40:49.903893 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:41:01 crc kubenswrapper[4910]: I0226 22:41:01.820637 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2v8gm"] Feb 26 22:41:01 crc kubenswrapper[4910]: I0226 22:41:01.823808 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2v8gm" Feb 26 22:41:01 crc kubenswrapper[4910]: I0226 22:41:01.838522 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2v8gm"] Feb 26 22:41:01 crc kubenswrapper[4910]: I0226 22:41:01.901780 4910 scope.go:117] "RemoveContainer" containerID="05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20" Feb 26 22:41:01 crc kubenswrapper[4910]: E0226 22:41:01.902051 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:41:01 crc kubenswrapper[4910]: I0226 22:41:01.914430 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5ba55f0-f481-4521-b256-8fcbca84c209-catalog-content\") pod \"community-operators-2v8gm\" (UID: \"e5ba55f0-f481-4521-b256-8fcbca84c209\") " pod="openshift-marketplace/community-operators-2v8gm" Feb 26 22:41:01 crc kubenswrapper[4910]: I0226 22:41:01.914475 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgzb6\" (UniqueName: \"kubernetes.io/projected/e5ba55f0-f481-4521-b256-8fcbca84c209-kube-api-access-fgzb6\") pod \"community-operators-2v8gm\" (UID: \"e5ba55f0-f481-4521-b256-8fcbca84c209\") " pod="openshift-marketplace/community-operators-2v8gm" Feb 26 22:41:01 crc kubenswrapper[4910]: I0226 22:41:01.914595 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e5ba55f0-f481-4521-b256-8fcbca84c209-utilities\") pod \"community-operators-2v8gm\" (UID: \"e5ba55f0-f481-4521-b256-8fcbca84c209\") " pod="openshift-marketplace/community-operators-2v8gm" Feb 26 22:41:02 crc kubenswrapper[4910]: I0226 22:41:02.016957 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5ba55f0-f481-4521-b256-8fcbca84c209-utilities\") pod \"community-operators-2v8gm\" (UID: \"e5ba55f0-f481-4521-b256-8fcbca84c209\") " pod="openshift-marketplace/community-operators-2v8gm" Feb 26 22:41:02 crc kubenswrapper[4910]: I0226 22:41:02.017109 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5ba55f0-f481-4521-b256-8fcbca84c209-catalog-content\") pod \"community-operators-2v8gm\" (UID: \"e5ba55f0-f481-4521-b256-8fcbca84c209\") " pod="openshift-marketplace/community-operators-2v8gm" Feb 26 22:41:02 crc kubenswrapper[4910]: I0226 22:41:02.017133 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgzb6\" (UniqueName: \"kubernetes.io/projected/e5ba55f0-f481-4521-b256-8fcbca84c209-kube-api-access-fgzb6\") pod \"community-operators-2v8gm\" (UID: \"e5ba55f0-f481-4521-b256-8fcbca84c209\") " pod="openshift-marketplace/community-operators-2v8gm" Feb 26 22:41:02 crc kubenswrapper[4910]: I0226 22:41:02.017676 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5ba55f0-f481-4521-b256-8fcbca84c209-catalog-content\") pod \"community-operators-2v8gm\" (UID: \"e5ba55f0-f481-4521-b256-8fcbca84c209\") " pod="openshift-marketplace/community-operators-2v8gm" Feb 26 22:41:02 crc kubenswrapper[4910]: I0226 22:41:02.018075 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e5ba55f0-f481-4521-b256-8fcbca84c209-utilities\") pod \"community-operators-2v8gm\" (UID: \"e5ba55f0-f481-4521-b256-8fcbca84c209\") " pod="openshift-marketplace/community-operators-2v8gm" Feb 26 22:41:02 crc kubenswrapper[4910]: I0226 22:41:02.063072 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgzb6\" (UniqueName: \"kubernetes.io/projected/e5ba55f0-f481-4521-b256-8fcbca84c209-kube-api-access-fgzb6\") pod \"community-operators-2v8gm\" (UID: \"e5ba55f0-f481-4521-b256-8fcbca84c209\") " pod="openshift-marketplace/community-operators-2v8gm" Feb 26 22:41:02 crc kubenswrapper[4910]: I0226 22:41:02.140145 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2v8gm" Feb 26 22:41:02 crc kubenswrapper[4910]: I0226 22:41:02.611706 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2v8gm"] Feb 26 22:41:03 crc kubenswrapper[4910]: I0226 22:41:03.351721 4910 generic.go:334] "Generic (PLEG): container finished" podID="e5ba55f0-f481-4521-b256-8fcbca84c209" containerID="6b0078dcb61f5844f5372ee016402f9e1fe781dc1823afd8b4b99ffe1263df3a" exitCode=0 Feb 26 22:41:03 crc kubenswrapper[4910]: I0226 22:41:03.351767 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2v8gm" event={"ID":"e5ba55f0-f481-4521-b256-8fcbca84c209","Type":"ContainerDied","Data":"6b0078dcb61f5844f5372ee016402f9e1fe781dc1823afd8b4b99ffe1263df3a"} Feb 26 22:41:03 crc kubenswrapper[4910]: I0226 22:41:03.351979 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2v8gm" event={"ID":"e5ba55f0-f481-4521-b256-8fcbca84c209","Type":"ContainerStarted","Data":"a1fbae27a6aec21110468f3f0e5d3295d3d0313ce4cdea2f3e29619c676e27e4"} Feb 26 22:41:04 crc kubenswrapper[4910]: I0226 22:41:04.366334 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-2v8gm" event={"ID":"e5ba55f0-f481-4521-b256-8fcbca84c209","Type":"ContainerStarted","Data":"870f3c3ecb7eb98539d354a74eefb6834ff3f3fac5f602e99abe2757d91d6d45"} Feb 26 22:41:06 crc kubenswrapper[4910]: I0226 22:41:06.387639 4910 generic.go:334] "Generic (PLEG): container finished" podID="e5ba55f0-f481-4521-b256-8fcbca84c209" containerID="870f3c3ecb7eb98539d354a74eefb6834ff3f3fac5f602e99abe2757d91d6d45" exitCode=0 Feb 26 22:41:06 crc kubenswrapper[4910]: I0226 22:41:06.387755 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2v8gm" event={"ID":"e5ba55f0-f481-4521-b256-8fcbca84c209","Type":"ContainerDied","Data":"870f3c3ecb7eb98539d354a74eefb6834ff3f3fac5f602e99abe2757d91d6d45"} Feb 26 22:41:07 crc kubenswrapper[4910]: I0226 22:41:07.412669 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2v8gm" event={"ID":"e5ba55f0-f481-4521-b256-8fcbca84c209","Type":"ContainerStarted","Data":"fd49d2c9893d4e94fe110b0748c6a072b30db684c93c78e3b03a3de976518af7"} Feb 26 22:41:07 crc kubenswrapper[4910]: I0226 22:41:07.457098 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2v8gm" podStartSLOduration=3.001137432 podStartE2EDuration="6.457071041s" podCreationTimestamp="2026-02-26 22:41:01 +0000 UTC" firstStartedPulling="2026-02-26 22:41:03.353903917 +0000 UTC m=+2748.433394458" lastFinishedPulling="2026-02-26 22:41:06.809837516 +0000 UTC m=+2751.889328067" observedRunningTime="2026-02-26 22:41:07.443557454 +0000 UTC m=+2752.523048035" watchObservedRunningTime="2026-02-26 22:41:07.457071041 +0000 UTC m=+2752.536561612" Feb 26 22:41:12 crc kubenswrapper[4910]: I0226 22:41:12.141417 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2v8gm" Feb 26 22:41:12 crc kubenswrapper[4910]: I0226 22:41:12.141996 4910 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2v8gm" Feb 26 22:41:12 crc kubenswrapper[4910]: I0226 22:41:12.187662 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2v8gm" Feb 26 22:41:12 crc kubenswrapper[4910]: I0226 22:41:12.526088 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2v8gm" Feb 26 22:41:12 crc kubenswrapper[4910]: I0226 22:41:12.588913 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2v8gm"] Feb 26 22:41:14 crc kubenswrapper[4910]: I0226 22:41:14.490898 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2v8gm" podUID="e5ba55f0-f481-4521-b256-8fcbca84c209" containerName="registry-server" containerID="cri-o://fd49d2c9893d4e94fe110b0748c6a072b30db684c93c78e3b03a3de976518af7" gracePeriod=2 Feb 26 22:41:14 crc kubenswrapper[4910]: I0226 22:41:14.901851 4910 scope.go:117] "RemoveContainer" containerID="05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20" Feb 26 22:41:14 crc kubenswrapper[4910]: E0226 22:41:14.902645 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:41:15 crc kubenswrapper[4910]: I0226 22:41:15.113554 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2v8gm" Feb 26 22:41:15 crc kubenswrapper[4910]: I0226 22:41:15.124722 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5ba55f0-f481-4521-b256-8fcbca84c209-catalog-content\") pod \"e5ba55f0-f481-4521-b256-8fcbca84c209\" (UID: \"e5ba55f0-f481-4521-b256-8fcbca84c209\") " Feb 26 22:41:15 crc kubenswrapper[4910]: I0226 22:41:15.124916 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5ba55f0-f481-4521-b256-8fcbca84c209-utilities\") pod \"e5ba55f0-f481-4521-b256-8fcbca84c209\" (UID: \"e5ba55f0-f481-4521-b256-8fcbca84c209\") " Feb 26 22:41:15 crc kubenswrapper[4910]: I0226 22:41:15.124971 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgzb6\" (UniqueName: \"kubernetes.io/projected/e5ba55f0-f481-4521-b256-8fcbca84c209-kube-api-access-fgzb6\") pod \"e5ba55f0-f481-4521-b256-8fcbca84c209\" (UID: \"e5ba55f0-f481-4521-b256-8fcbca84c209\") " Feb 26 22:41:15 crc kubenswrapper[4910]: I0226 22:41:15.126586 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5ba55f0-f481-4521-b256-8fcbca84c209-utilities" (OuterVolumeSpecName: "utilities") pod "e5ba55f0-f481-4521-b256-8fcbca84c209" (UID: "e5ba55f0-f481-4521-b256-8fcbca84c209"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:41:15 crc kubenswrapper[4910]: I0226 22:41:15.132451 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5ba55f0-f481-4521-b256-8fcbca84c209-kube-api-access-fgzb6" (OuterVolumeSpecName: "kube-api-access-fgzb6") pod "e5ba55f0-f481-4521-b256-8fcbca84c209" (UID: "e5ba55f0-f481-4521-b256-8fcbca84c209"). InnerVolumeSpecName "kube-api-access-fgzb6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:41:15 crc kubenswrapper[4910]: I0226 22:41:15.204414 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5ba55f0-f481-4521-b256-8fcbca84c209-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5ba55f0-f481-4521-b256-8fcbca84c209" (UID: "e5ba55f0-f481-4521-b256-8fcbca84c209"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:41:15 crc kubenswrapper[4910]: I0226 22:41:15.229770 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5ba55f0-f481-4521-b256-8fcbca84c209-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 22:41:15 crc kubenswrapper[4910]: I0226 22:41:15.229804 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5ba55f0-f481-4521-b256-8fcbca84c209-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 22:41:15 crc kubenswrapper[4910]: I0226 22:41:15.229818 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgzb6\" (UniqueName: \"kubernetes.io/projected/e5ba55f0-f481-4521-b256-8fcbca84c209-kube-api-access-fgzb6\") on node \"crc\" DevicePath \"\"" Feb 26 22:41:15 crc kubenswrapper[4910]: I0226 22:41:15.505034 4910 generic.go:334] "Generic (PLEG): container finished" podID="e5ba55f0-f481-4521-b256-8fcbca84c209" containerID="fd49d2c9893d4e94fe110b0748c6a072b30db684c93c78e3b03a3de976518af7" exitCode=0 Feb 26 22:41:15 crc kubenswrapper[4910]: I0226 22:41:15.505101 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2v8gm" event={"ID":"e5ba55f0-f481-4521-b256-8fcbca84c209","Type":"ContainerDied","Data":"fd49d2c9893d4e94fe110b0748c6a072b30db684c93c78e3b03a3de976518af7"} Feb 26 22:41:15 crc kubenswrapper[4910]: I0226 22:41:15.505118 4910 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-2v8gm" Feb 26 22:41:15 crc kubenswrapper[4910]: I0226 22:41:15.505203 4910 scope.go:117] "RemoveContainer" containerID="fd49d2c9893d4e94fe110b0748c6a072b30db684c93c78e3b03a3de976518af7" Feb 26 22:41:15 crc kubenswrapper[4910]: I0226 22:41:15.505187 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2v8gm" event={"ID":"e5ba55f0-f481-4521-b256-8fcbca84c209","Type":"ContainerDied","Data":"a1fbae27a6aec21110468f3f0e5d3295d3d0313ce4cdea2f3e29619c676e27e4"} Feb 26 22:41:15 crc kubenswrapper[4910]: I0226 22:41:15.528548 4910 scope.go:117] "RemoveContainer" containerID="870f3c3ecb7eb98539d354a74eefb6834ff3f3fac5f602e99abe2757d91d6d45" Feb 26 22:41:15 crc kubenswrapper[4910]: I0226 22:41:15.576598 4910 scope.go:117] "RemoveContainer" containerID="6b0078dcb61f5844f5372ee016402f9e1fe781dc1823afd8b4b99ffe1263df3a" Feb 26 22:41:15 crc kubenswrapper[4910]: I0226 22:41:15.580511 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2v8gm"] Feb 26 22:41:15 crc kubenswrapper[4910]: I0226 22:41:15.590978 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2v8gm"] Feb 26 22:41:15 crc kubenswrapper[4910]: I0226 22:41:15.625054 4910 scope.go:117] "RemoveContainer" containerID="fd49d2c9893d4e94fe110b0748c6a072b30db684c93c78e3b03a3de976518af7" Feb 26 22:41:15 crc kubenswrapper[4910]: E0226 22:41:15.625754 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd49d2c9893d4e94fe110b0748c6a072b30db684c93c78e3b03a3de976518af7\": container with ID starting with fd49d2c9893d4e94fe110b0748c6a072b30db684c93c78e3b03a3de976518af7 not found: ID does not exist" containerID="fd49d2c9893d4e94fe110b0748c6a072b30db684c93c78e3b03a3de976518af7" Feb 26 22:41:15 crc kubenswrapper[4910]: I0226 22:41:15.625872 
4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd49d2c9893d4e94fe110b0748c6a072b30db684c93c78e3b03a3de976518af7"} err="failed to get container status \"fd49d2c9893d4e94fe110b0748c6a072b30db684c93c78e3b03a3de976518af7\": rpc error: code = NotFound desc = could not find container \"fd49d2c9893d4e94fe110b0748c6a072b30db684c93c78e3b03a3de976518af7\": container with ID starting with fd49d2c9893d4e94fe110b0748c6a072b30db684c93c78e3b03a3de976518af7 not found: ID does not exist" Feb 26 22:41:15 crc kubenswrapper[4910]: I0226 22:41:15.625958 4910 scope.go:117] "RemoveContainer" containerID="870f3c3ecb7eb98539d354a74eefb6834ff3f3fac5f602e99abe2757d91d6d45" Feb 26 22:41:15 crc kubenswrapper[4910]: E0226 22:41:15.626431 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"870f3c3ecb7eb98539d354a74eefb6834ff3f3fac5f602e99abe2757d91d6d45\": container with ID starting with 870f3c3ecb7eb98539d354a74eefb6834ff3f3fac5f602e99abe2757d91d6d45 not found: ID does not exist" containerID="870f3c3ecb7eb98539d354a74eefb6834ff3f3fac5f602e99abe2757d91d6d45" Feb 26 22:41:15 crc kubenswrapper[4910]: I0226 22:41:15.626486 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"870f3c3ecb7eb98539d354a74eefb6834ff3f3fac5f602e99abe2757d91d6d45"} err="failed to get container status \"870f3c3ecb7eb98539d354a74eefb6834ff3f3fac5f602e99abe2757d91d6d45\": rpc error: code = NotFound desc = could not find container \"870f3c3ecb7eb98539d354a74eefb6834ff3f3fac5f602e99abe2757d91d6d45\": container with ID starting with 870f3c3ecb7eb98539d354a74eefb6834ff3f3fac5f602e99abe2757d91d6d45 not found: ID does not exist" Feb 26 22:41:15 crc kubenswrapper[4910]: I0226 22:41:15.626510 4910 scope.go:117] "RemoveContainer" containerID="6b0078dcb61f5844f5372ee016402f9e1fe781dc1823afd8b4b99ffe1263df3a" Feb 26 22:41:15 crc kubenswrapper[4910]: E0226 
22:41:15.626739 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b0078dcb61f5844f5372ee016402f9e1fe781dc1823afd8b4b99ffe1263df3a\": container with ID starting with 6b0078dcb61f5844f5372ee016402f9e1fe781dc1823afd8b4b99ffe1263df3a not found: ID does not exist" containerID="6b0078dcb61f5844f5372ee016402f9e1fe781dc1823afd8b4b99ffe1263df3a" Feb 26 22:41:15 crc kubenswrapper[4910]: I0226 22:41:15.626838 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b0078dcb61f5844f5372ee016402f9e1fe781dc1823afd8b4b99ffe1263df3a"} err="failed to get container status \"6b0078dcb61f5844f5372ee016402f9e1fe781dc1823afd8b4b99ffe1263df3a\": rpc error: code = NotFound desc = could not find container \"6b0078dcb61f5844f5372ee016402f9e1fe781dc1823afd8b4b99ffe1263df3a\": container with ID starting with 6b0078dcb61f5844f5372ee016402f9e1fe781dc1823afd8b4b99ffe1263df3a not found: ID does not exist" Feb 26 22:41:15 crc kubenswrapper[4910]: I0226 22:41:15.925447 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5ba55f0-f481-4521-b256-8fcbca84c209" path="/var/lib/kubelet/pods/e5ba55f0-f481-4521-b256-8fcbca84c209/volumes" Feb 26 22:41:28 crc kubenswrapper[4910]: I0226 22:41:28.901443 4910 scope.go:117] "RemoveContainer" containerID="05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20" Feb 26 22:41:28 crc kubenswrapper[4910]: E0226 22:41:28.904328 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:41:39 crc kubenswrapper[4910]: I0226 22:41:39.901631 
4910 scope.go:117] "RemoveContainer" containerID="05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20" Feb 26 22:41:39 crc kubenswrapper[4910]: E0226 22:41:39.902629 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:41:51 crc kubenswrapper[4910]: I0226 22:41:51.904280 4910 scope.go:117] "RemoveContainer" containerID="05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20" Feb 26 22:41:51 crc kubenswrapper[4910]: E0226 22:41:51.908116 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:42:00 crc kubenswrapper[4910]: I0226 22:42:00.145001 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535762-ghrnr"] Feb 26 22:42:00 crc kubenswrapper[4910]: E0226 22:42:00.145831 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5ba55f0-f481-4521-b256-8fcbca84c209" containerName="extract-utilities" Feb 26 22:42:00 crc kubenswrapper[4910]: I0226 22:42:00.145844 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5ba55f0-f481-4521-b256-8fcbca84c209" containerName="extract-utilities" Feb 26 22:42:00 crc kubenswrapper[4910]: E0226 22:42:00.145859 4910 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e5ba55f0-f481-4521-b256-8fcbca84c209" containerName="extract-content" Feb 26 22:42:00 crc kubenswrapper[4910]: I0226 22:42:00.145867 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5ba55f0-f481-4521-b256-8fcbca84c209" containerName="extract-content" Feb 26 22:42:00 crc kubenswrapper[4910]: E0226 22:42:00.145892 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5ba55f0-f481-4521-b256-8fcbca84c209" containerName="registry-server" Feb 26 22:42:00 crc kubenswrapper[4910]: I0226 22:42:00.145899 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5ba55f0-f481-4521-b256-8fcbca84c209" containerName="registry-server" Feb 26 22:42:00 crc kubenswrapper[4910]: I0226 22:42:00.146112 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5ba55f0-f481-4521-b256-8fcbca84c209" containerName="registry-server" Feb 26 22:42:00 crc kubenswrapper[4910]: I0226 22:42:00.146844 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535762-ghrnr" Feb 26 22:42:00 crc kubenswrapper[4910]: I0226 22:42:00.148705 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 22:42:00 crc kubenswrapper[4910]: I0226 22:42:00.149213 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 22:42:00 crc kubenswrapper[4910]: I0226 22:42:00.150101 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 22:42:00 crc kubenswrapper[4910]: I0226 22:42:00.156194 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535762-ghrnr"] Feb 26 22:42:00 crc kubenswrapper[4910]: I0226 22:42:00.208347 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dx9t\" (UniqueName: 
\"kubernetes.io/projected/ba3cf2a4-0301-4aa1-99d3-dbd68d5b522c-kube-api-access-6dx9t\") pod \"auto-csr-approver-29535762-ghrnr\" (UID: \"ba3cf2a4-0301-4aa1-99d3-dbd68d5b522c\") " pod="openshift-infra/auto-csr-approver-29535762-ghrnr" Feb 26 22:42:00 crc kubenswrapper[4910]: I0226 22:42:00.310450 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dx9t\" (UniqueName: \"kubernetes.io/projected/ba3cf2a4-0301-4aa1-99d3-dbd68d5b522c-kube-api-access-6dx9t\") pod \"auto-csr-approver-29535762-ghrnr\" (UID: \"ba3cf2a4-0301-4aa1-99d3-dbd68d5b522c\") " pod="openshift-infra/auto-csr-approver-29535762-ghrnr" Feb 26 22:42:00 crc kubenswrapper[4910]: I0226 22:42:00.331206 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dx9t\" (UniqueName: \"kubernetes.io/projected/ba3cf2a4-0301-4aa1-99d3-dbd68d5b522c-kube-api-access-6dx9t\") pod \"auto-csr-approver-29535762-ghrnr\" (UID: \"ba3cf2a4-0301-4aa1-99d3-dbd68d5b522c\") " pod="openshift-infra/auto-csr-approver-29535762-ghrnr" Feb 26 22:42:00 crc kubenswrapper[4910]: I0226 22:42:00.466349 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535762-ghrnr" Feb 26 22:42:01 crc kubenswrapper[4910]: I0226 22:42:01.002706 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535762-ghrnr"] Feb 26 22:42:01 crc kubenswrapper[4910]: W0226 22:42:01.008342 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba3cf2a4_0301_4aa1_99d3_dbd68d5b522c.slice/crio-63a65da9c457f02db068c4380ece3e73098dbacb43e196bf66af0a302999afbd WatchSource:0}: Error finding container 63a65da9c457f02db068c4380ece3e73098dbacb43e196bf66af0a302999afbd: Status 404 returned error can't find the container with id 63a65da9c457f02db068c4380ece3e73098dbacb43e196bf66af0a302999afbd Feb 26 22:42:01 crc kubenswrapper[4910]: I0226 22:42:01.038456 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535762-ghrnr" event={"ID":"ba3cf2a4-0301-4aa1-99d3-dbd68d5b522c","Type":"ContainerStarted","Data":"63a65da9c457f02db068c4380ece3e73098dbacb43e196bf66af0a302999afbd"} Feb 26 22:42:03 crc kubenswrapper[4910]: I0226 22:42:03.058574 4910 generic.go:334] "Generic (PLEG): container finished" podID="ba3cf2a4-0301-4aa1-99d3-dbd68d5b522c" containerID="b19499347bd724161b1c628b04f58854267f581744d29bf70b890c422ba36760" exitCode=0 Feb 26 22:42:03 crc kubenswrapper[4910]: I0226 22:42:03.058639 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535762-ghrnr" event={"ID":"ba3cf2a4-0301-4aa1-99d3-dbd68d5b522c","Type":"ContainerDied","Data":"b19499347bd724161b1c628b04f58854267f581744d29bf70b890c422ba36760"} Feb 26 22:42:04 crc kubenswrapper[4910]: I0226 22:42:04.617441 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535762-ghrnr" Feb 26 22:42:04 crc kubenswrapper[4910]: I0226 22:42:04.722690 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dx9t\" (UniqueName: \"kubernetes.io/projected/ba3cf2a4-0301-4aa1-99d3-dbd68d5b522c-kube-api-access-6dx9t\") pod \"ba3cf2a4-0301-4aa1-99d3-dbd68d5b522c\" (UID: \"ba3cf2a4-0301-4aa1-99d3-dbd68d5b522c\") " Feb 26 22:42:04 crc kubenswrapper[4910]: I0226 22:42:04.731112 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba3cf2a4-0301-4aa1-99d3-dbd68d5b522c-kube-api-access-6dx9t" (OuterVolumeSpecName: "kube-api-access-6dx9t") pod "ba3cf2a4-0301-4aa1-99d3-dbd68d5b522c" (UID: "ba3cf2a4-0301-4aa1-99d3-dbd68d5b522c"). InnerVolumeSpecName "kube-api-access-6dx9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:42:04 crc kubenswrapper[4910]: I0226 22:42:04.847404 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dx9t\" (UniqueName: \"kubernetes.io/projected/ba3cf2a4-0301-4aa1-99d3-dbd68d5b522c-kube-api-access-6dx9t\") on node \"crc\" DevicePath \"\"" Feb 26 22:42:05 crc kubenswrapper[4910]: I0226 22:42:05.085751 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535762-ghrnr" event={"ID":"ba3cf2a4-0301-4aa1-99d3-dbd68d5b522c","Type":"ContainerDied","Data":"63a65da9c457f02db068c4380ece3e73098dbacb43e196bf66af0a302999afbd"} Feb 26 22:42:05 crc kubenswrapper[4910]: I0226 22:42:05.086229 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63a65da9c457f02db068c4380ece3e73098dbacb43e196bf66af0a302999afbd" Feb 26 22:42:05 crc kubenswrapper[4910]: I0226 22:42:05.085849 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535762-ghrnr" Feb 26 22:42:05 crc kubenswrapper[4910]: I0226 22:42:05.753932 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535756-h2hl8"] Feb 26 22:42:05 crc kubenswrapper[4910]: I0226 22:42:05.770735 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535756-h2hl8"] Feb 26 22:42:05 crc kubenswrapper[4910]: I0226 22:42:05.917870 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc249d68-ea45-4301-b508-572a616bbb87" path="/var/lib/kubelet/pods/bc249d68-ea45-4301-b508-572a616bbb87/volumes" Feb 26 22:42:06 crc kubenswrapper[4910]: I0226 22:42:06.902313 4910 scope.go:117] "RemoveContainer" containerID="05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20" Feb 26 22:42:08 crc kubenswrapper[4910]: I0226 22:42:08.121095 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerStarted","Data":"bf4404a57b9e158f4c76a2539469a0f43575a0b95fd644e44dec95e0304a9ede"} Feb 26 22:42:11 crc kubenswrapper[4910]: I0226 22:42:11.585837 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f9llw"] Feb 26 22:42:11 crc kubenswrapper[4910]: E0226 22:42:11.586817 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba3cf2a4-0301-4aa1-99d3-dbd68d5b522c" containerName="oc" Feb 26 22:42:11 crc kubenswrapper[4910]: I0226 22:42:11.586830 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba3cf2a4-0301-4aa1-99d3-dbd68d5b522c" containerName="oc" Feb 26 22:42:11 crc kubenswrapper[4910]: I0226 22:42:11.587038 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba3cf2a4-0301-4aa1-99d3-dbd68d5b522c" containerName="oc" Feb 26 22:42:11 crc kubenswrapper[4910]: I0226 22:42:11.588552 4910 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f9llw" Feb 26 22:42:11 crc kubenswrapper[4910]: I0226 22:42:11.595025 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f9llw"] Feb 26 22:42:11 crc kubenswrapper[4910]: I0226 22:42:11.734061 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6819cb51-1432-42c1-a197-97d6f52e7d96-catalog-content\") pod \"redhat-operators-f9llw\" (UID: \"6819cb51-1432-42c1-a197-97d6f52e7d96\") " pod="openshift-marketplace/redhat-operators-f9llw" Feb 26 22:42:11 crc kubenswrapper[4910]: I0226 22:42:11.734415 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6819cb51-1432-42c1-a197-97d6f52e7d96-utilities\") pod \"redhat-operators-f9llw\" (UID: \"6819cb51-1432-42c1-a197-97d6f52e7d96\") " pod="openshift-marketplace/redhat-operators-f9llw" Feb 26 22:42:11 crc kubenswrapper[4910]: I0226 22:42:11.734681 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqr2m\" (UniqueName: \"kubernetes.io/projected/6819cb51-1432-42c1-a197-97d6f52e7d96-kube-api-access-vqr2m\") pod \"redhat-operators-f9llw\" (UID: \"6819cb51-1432-42c1-a197-97d6f52e7d96\") " pod="openshift-marketplace/redhat-operators-f9llw" Feb 26 22:42:11 crc kubenswrapper[4910]: I0226 22:42:11.836473 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6819cb51-1432-42c1-a197-97d6f52e7d96-utilities\") pod \"redhat-operators-f9llw\" (UID: \"6819cb51-1432-42c1-a197-97d6f52e7d96\") " pod="openshift-marketplace/redhat-operators-f9llw" Feb 26 22:42:11 crc kubenswrapper[4910]: I0226 22:42:11.836749 4910 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vqr2m\" (UniqueName: \"kubernetes.io/projected/6819cb51-1432-42c1-a197-97d6f52e7d96-kube-api-access-vqr2m\") pod \"redhat-operators-f9llw\" (UID: \"6819cb51-1432-42c1-a197-97d6f52e7d96\") " pod="openshift-marketplace/redhat-operators-f9llw" Feb 26 22:42:11 crc kubenswrapper[4910]: I0226 22:42:11.836819 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6819cb51-1432-42c1-a197-97d6f52e7d96-catalog-content\") pod \"redhat-operators-f9llw\" (UID: \"6819cb51-1432-42c1-a197-97d6f52e7d96\") " pod="openshift-marketplace/redhat-operators-f9llw" Feb 26 22:42:11 crc kubenswrapper[4910]: I0226 22:42:11.837025 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6819cb51-1432-42c1-a197-97d6f52e7d96-utilities\") pod \"redhat-operators-f9llw\" (UID: \"6819cb51-1432-42c1-a197-97d6f52e7d96\") " pod="openshift-marketplace/redhat-operators-f9llw" Feb 26 22:42:11 crc kubenswrapper[4910]: I0226 22:42:11.837329 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6819cb51-1432-42c1-a197-97d6f52e7d96-catalog-content\") pod \"redhat-operators-f9llw\" (UID: \"6819cb51-1432-42c1-a197-97d6f52e7d96\") " pod="openshift-marketplace/redhat-operators-f9llw" Feb 26 22:42:11 crc kubenswrapper[4910]: I0226 22:42:11.859226 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqr2m\" (UniqueName: \"kubernetes.io/projected/6819cb51-1432-42c1-a197-97d6f52e7d96-kube-api-access-vqr2m\") pod \"redhat-operators-f9llw\" (UID: \"6819cb51-1432-42c1-a197-97d6f52e7d96\") " pod="openshift-marketplace/redhat-operators-f9llw" Feb 26 22:42:11 crc kubenswrapper[4910]: I0226 22:42:11.916617 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f9llw" Feb 26 22:42:12 crc kubenswrapper[4910]: I0226 22:42:12.423557 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f9llw"] Feb 26 22:42:13 crc kubenswrapper[4910]: I0226 22:42:13.185111 4910 generic.go:334] "Generic (PLEG): container finished" podID="6819cb51-1432-42c1-a197-97d6f52e7d96" containerID="596242aecca082e5ddc0005da76d0be2a9ddcc562c3394be3d3d9727e7bf5cc6" exitCode=0 Feb 26 22:42:13 crc kubenswrapper[4910]: I0226 22:42:13.185332 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9llw" event={"ID":"6819cb51-1432-42c1-a197-97d6f52e7d96","Type":"ContainerDied","Data":"596242aecca082e5ddc0005da76d0be2a9ddcc562c3394be3d3d9727e7bf5cc6"} Feb 26 22:42:13 crc kubenswrapper[4910]: I0226 22:42:13.185360 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9llw" event={"ID":"6819cb51-1432-42c1-a197-97d6f52e7d96","Type":"ContainerStarted","Data":"12d31a0d8a5fdc2668ecac28d42d6fbb5cd7e39f1a03ceba08cee62ccbaee7f2"} Feb 26 22:42:14 crc kubenswrapper[4910]: I0226 22:42:14.196308 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9llw" event={"ID":"6819cb51-1432-42c1-a197-97d6f52e7d96","Type":"ContainerStarted","Data":"81a0624dec58c5b10c5dc1e684b13762bfa7775dc54967a5c4781cca23eb11a5"} Feb 26 22:42:19 crc kubenswrapper[4910]: I0226 22:42:19.263937 4910 generic.go:334] "Generic (PLEG): container finished" podID="6819cb51-1432-42c1-a197-97d6f52e7d96" containerID="81a0624dec58c5b10c5dc1e684b13762bfa7775dc54967a5c4781cca23eb11a5" exitCode=0 Feb 26 22:42:19 crc kubenswrapper[4910]: I0226 22:42:19.264055 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9llw" 
event={"ID":"6819cb51-1432-42c1-a197-97d6f52e7d96","Type":"ContainerDied","Data":"81a0624dec58c5b10c5dc1e684b13762bfa7775dc54967a5c4781cca23eb11a5"} Feb 26 22:42:20 crc kubenswrapper[4910]: I0226 22:42:20.276991 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9llw" event={"ID":"6819cb51-1432-42c1-a197-97d6f52e7d96","Type":"ContainerStarted","Data":"90800e2e2be871fe52f8102a8ea885d1460de205001c6235f73b786461565f28"} Feb 26 22:42:20 crc kubenswrapper[4910]: I0226 22:42:20.296363 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f9llw" podStartSLOduration=2.764685433 podStartE2EDuration="9.2963471s" podCreationTimestamp="2026-02-26 22:42:11 +0000 UTC" firstStartedPulling="2026-02-26 22:42:13.187057043 +0000 UTC m=+2818.266547584" lastFinishedPulling="2026-02-26 22:42:19.7187187 +0000 UTC m=+2824.798209251" observedRunningTime="2026-02-26 22:42:20.29379009 +0000 UTC m=+2825.373280631" watchObservedRunningTime="2026-02-26 22:42:20.2963471 +0000 UTC m=+2825.375837651" Feb 26 22:42:21 crc kubenswrapper[4910]: I0226 22:42:21.923830 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f9llw" Feb 26 22:42:21 crc kubenswrapper[4910]: I0226 22:42:21.924590 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f9llw" Feb 26 22:42:22 crc kubenswrapper[4910]: I0226 22:42:22.973296 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f9llw" podUID="6819cb51-1432-42c1-a197-97d6f52e7d96" containerName="registry-server" probeResult="failure" output=< Feb 26 22:42:22 crc kubenswrapper[4910]: timeout: failed to connect service ":50051" within 1s Feb 26 22:42:22 crc kubenswrapper[4910]: > Feb 26 22:42:32 crc kubenswrapper[4910]: I0226 22:42:32.967607 4910 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-f9llw" podUID="6819cb51-1432-42c1-a197-97d6f52e7d96" containerName="registry-server" probeResult="failure" output=< Feb 26 22:42:32 crc kubenswrapper[4910]: timeout: failed to connect service ":50051" within 1s Feb 26 22:42:32 crc kubenswrapper[4910]: > Feb 26 22:42:39 crc kubenswrapper[4910]: I0226 22:42:39.291798 4910 scope.go:117] "RemoveContainer" containerID="60b9772649900bd27d317dc22a8c85300531bcc056148c658674e31280d52edf" Feb 26 22:42:41 crc kubenswrapper[4910]: I0226 22:42:41.975880 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f9llw" Feb 26 22:42:42 crc kubenswrapper[4910]: I0226 22:42:42.039381 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f9llw" Feb 26 22:42:42 crc kubenswrapper[4910]: I0226 22:42:42.799998 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f9llw"] Feb 26 22:42:43 crc kubenswrapper[4910]: I0226 22:42:43.598136 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f9llw" podUID="6819cb51-1432-42c1-a197-97d6f52e7d96" containerName="registry-server" containerID="cri-o://90800e2e2be871fe52f8102a8ea885d1460de205001c6235f73b786461565f28" gracePeriod=2 Feb 26 22:42:43 crc kubenswrapper[4910]: W0226 22:42:43.704995 4910 container.go:586] Failed to update stats for container "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6819cb51_1432_42c1_a197_97d6f52e7d96.slice/crio-12d31a0d8a5fdc2668ecac28d42d6fbb5cd7e39f1a03ceba08cee62ccbaee7f2": error while statting cgroup v2: [unable to parse /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6819cb51_1432_42c1_a197_97d6f52e7d96.slice/crio-12d31a0d8a5fdc2668ecac28d42d6fbb5cd7e39f1a03ceba08cee62ccbaee7f2/memory.stat: read 
/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6819cb51_1432_42c1_a197_97d6f52e7d96.slice/crio-12d31a0d8a5fdc2668ecac28d42d6fbb5cd7e39f1a03ceba08cee62ccbaee7f2/memory.stat: no such device], continuing to push stats Feb 26 22:42:44 crc kubenswrapper[4910]: I0226 22:42:44.201677 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f9llw" Feb 26 22:42:44 crc kubenswrapper[4910]: I0226 22:42:44.302894 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6819cb51-1432-42c1-a197-97d6f52e7d96-catalog-content\") pod \"6819cb51-1432-42c1-a197-97d6f52e7d96\" (UID: \"6819cb51-1432-42c1-a197-97d6f52e7d96\") " Feb 26 22:42:44 crc kubenswrapper[4910]: I0226 22:42:44.302953 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqr2m\" (UniqueName: \"kubernetes.io/projected/6819cb51-1432-42c1-a197-97d6f52e7d96-kube-api-access-vqr2m\") pod \"6819cb51-1432-42c1-a197-97d6f52e7d96\" (UID: \"6819cb51-1432-42c1-a197-97d6f52e7d96\") " Feb 26 22:42:44 crc kubenswrapper[4910]: I0226 22:42:44.303203 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6819cb51-1432-42c1-a197-97d6f52e7d96-utilities\") pod \"6819cb51-1432-42c1-a197-97d6f52e7d96\" (UID: \"6819cb51-1432-42c1-a197-97d6f52e7d96\") " Feb 26 22:42:44 crc kubenswrapper[4910]: I0226 22:42:44.303738 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6819cb51-1432-42c1-a197-97d6f52e7d96-utilities" (OuterVolumeSpecName: "utilities") pod "6819cb51-1432-42c1-a197-97d6f52e7d96" (UID: "6819cb51-1432-42c1-a197-97d6f52e7d96"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:42:44 crc kubenswrapper[4910]: I0226 22:42:44.309009 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6819cb51-1432-42c1-a197-97d6f52e7d96-kube-api-access-vqr2m" (OuterVolumeSpecName: "kube-api-access-vqr2m") pod "6819cb51-1432-42c1-a197-97d6f52e7d96" (UID: "6819cb51-1432-42c1-a197-97d6f52e7d96"). InnerVolumeSpecName "kube-api-access-vqr2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:42:44 crc kubenswrapper[4910]: I0226 22:42:44.405040 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqr2m\" (UniqueName: \"kubernetes.io/projected/6819cb51-1432-42c1-a197-97d6f52e7d96-kube-api-access-vqr2m\") on node \"crc\" DevicePath \"\"" Feb 26 22:42:44 crc kubenswrapper[4910]: I0226 22:42:44.405343 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6819cb51-1432-42c1-a197-97d6f52e7d96-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 22:42:44 crc kubenswrapper[4910]: I0226 22:42:44.439484 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6819cb51-1432-42c1-a197-97d6f52e7d96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6819cb51-1432-42c1-a197-97d6f52e7d96" (UID: "6819cb51-1432-42c1-a197-97d6f52e7d96"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:42:44 crc kubenswrapper[4910]: I0226 22:42:44.506958 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6819cb51-1432-42c1-a197-97d6f52e7d96-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 22:42:44 crc kubenswrapper[4910]: I0226 22:42:44.611055 4910 generic.go:334] "Generic (PLEG): container finished" podID="6819cb51-1432-42c1-a197-97d6f52e7d96" containerID="90800e2e2be871fe52f8102a8ea885d1460de205001c6235f73b786461565f28" exitCode=0 Feb 26 22:42:44 crc kubenswrapper[4910]: I0226 22:42:44.611094 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9llw" event={"ID":"6819cb51-1432-42c1-a197-97d6f52e7d96","Type":"ContainerDied","Data":"90800e2e2be871fe52f8102a8ea885d1460de205001c6235f73b786461565f28"} Feb 26 22:42:44 crc kubenswrapper[4910]: I0226 22:42:44.611126 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9llw" event={"ID":"6819cb51-1432-42c1-a197-97d6f52e7d96","Type":"ContainerDied","Data":"12d31a0d8a5fdc2668ecac28d42d6fbb5cd7e39f1a03ceba08cee62ccbaee7f2"} Feb 26 22:42:44 crc kubenswrapper[4910]: I0226 22:42:44.611144 4910 scope.go:117] "RemoveContainer" containerID="90800e2e2be871fe52f8102a8ea885d1460de205001c6235f73b786461565f28" Feb 26 22:42:44 crc kubenswrapper[4910]: I0226 22:42:44.611155 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f9llw" Feb 26 22:42:44 crc kubenswrapper[4910]: I0226 22:42:44.645525 4910 scope.go:117] "RemoveContainer" containerID="81a0624dec58c5b10c5dc1e684b13762bfa7775dc54967a5c4781cca23eb11a5" Feb 26 22:42:44 crc kubenswrapper[4910]: I0226 22:42:44.658923 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f9llw"] Feb 26 22:42:44 crc kubenswrapper[4910]: I0226 22:42:44.674035 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f9llw"] Feb 26 22:42:44 crc kubenswrapper[4910]: I0226 22:42:44.691227 4910 scope.go:117] "RemoveContainer" containerID="596242aecca082e5ddc0005da76d0be2a9ddcc562c3394be3d3d9727e7bf5cc6" Feb 26 22:42:44 crc kubenswrapper[4910]: I0226 22:42:44.739676 4910 scope.go:117] "RemoveContainer" containerID="90800e2e2be871fe52f8102a8ea885d1460de205001c6235f73b786461565f28" Feb 26 22:42:44 crc kubenswrapper[4910]: E0226 22:42:44.746811 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90800e2e2be871fe52f8102a8ea885d1460de205001c6235f73b786461565f28\": container with ID starting with 90800e2e2be871fe52f8102a8ea885d1460de205001c6235f73b786461565f28 not found: ID does not exist" containerID="90800e2e2be871fe52f8102a8ea885d1460de205001c6235f73b786461565f28" Feb 26 22:42:44 crc kubenswrapper[4910]: I0226 22:42:44.746875 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90800e2e2be871fe52f8102a8ea885d1460de205001c6235f73b786461565f28"} err="failed to get container status \"90800e2e2be871fe52f8102a8ea885d1460de205001c6235f73b786461565f28\": rpc error: code = NotFound desc = could not find container \"90800e2e2be871fe52f8102a8ea885d1460de205001c6235f73b786461565f28\": container with ID starting with 90800e2e2be871fe52f8102a8ea885d1460de205001c6235f73b786461565f28 not found: ID does 
not exist" Feb 26 22:42:44 crc kubenswrapper[4910]: I0226 22:42:44.746925 4910 scope.go:117] "RemoveContainer" containerID="81a0624dec58c5b10c5dc1e684b13762bfa7775dc54967a5c4781cca23eb11a5" Feb 26 22:42:44 crc kubenswrapper[4910]: E0226 22:42:44.747341 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81a0624dec58c5b10c5dc1e684b13762bfa7775dc54967a5c4781cca23eb11a5\": container with ID starting with 81a0624dec58c5b10c5dc1e684b13762bfa7775dc54967a5c4781cca23eb11a5 not found: ID does not exist" containerID="81a0624dec58c5b10c5dc1e684b13762bfa7775dc54967a5c4781cca23eb11a5" Feb 26 22:42:44 crc kubenswrapper[4910]: I0226 22:42:44.747364 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81a0624dec58c5b10c5dc1e684b13762bfa7775dc54967a5c4781cca23eb11a5"} err="failed to get container status \"81a0624dec58c5b10c5dc1e684b13762bfa7775dc54967a5c4781cca23eb11a5\": rpc error: code = NotFound desc = could not find container \"81a0624dec58c5b10c5dc1e684b13762bfa7775dc54967a5c4781cca23eb11a5\": container with ID starting with 81a0624dec58c5b10c5dc1e684b13762bfa7775dc54967a5c4781cca23eb11a5 not found: ID does not exist" Feb 26 22:42:44 crc kubenswrapper[4910]: I0226 22:42:44.747377 4910 scope.go:117] "RemoveContainer" containerID="596242aecca082e5ddc0005da76d0be2a9ddcc562c3394be3d3d9727e7bf5cc6" Feb 26 22:42:44 crc kubenswrapper[4910]: E0226 22:42:44.747687 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"596242aecca082e5ddc0005da76d0be2a9ddcc562c3394be3d3d9727e7bf5cc6\": container with ID starting with 596242aecca082e5ddc0005da76d0be2a9ddcc562c3394be3d3d9727e7bf5cc6 not found: ID does not exist" containerID="596242aecca082e5ddc0005da76d0be2a9ddcc562c3394be3d3d9727e7bf5cc6" Feb 26 22:42:44 crc kubenswrapper[4910]: I0226 22:42:44.747713 4910 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"596242aecca082e5ddc0005da76d0be2a9ddcc562c3394be3d3d9727e7bf5cc6"} err="failed to get container status \"596242aecca082e5ddc0005da76d0be2a9ddcc562c3394be3d3d9727e7bf5cc6\": rpc error: code = NotFound desc = could not find container \"596242aecca082e5ddc0005da76d0be2a9ddcc562c3394be3d3d9727e7bf5cc6\": container with ID starting with 596242aecca082e5ddc0005da76d0be2a9ddcc562c3394be3d3d9727e7bf5cc6 not found: ID does not exist" Feb 26 22:42:45 crc kubenswrapper[4910]: I0226 22:42:45.951109 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6819cb51-1432-42c1-a197-97d6f52e7d96" path="/var/lib/kubelet/pods/6819cb51-1432-42c1-a197-97d6f52e7d96/volumes" Feb 26 22:43:13 crc kubenswrapper[4910]: I0226 22:43:13.957015 4910 generic.go:334] "Generic (PLEG): container finished" podID="975c1c11-dac1-4a07-bd11-3ef32ccf0449" containerID="f4de68457af9e6562d7e52c86dfac03dd4c7fb0cdd45c79df9ff7ffb38a145d8" exitCode=0 Feb 26 22:43:13 crc kubenswrapper[4910]: I0226 22:43:13.957121 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" event={"ID":"975c1c11-dac1-4a07-bd11-3ef32ccf0449","Type":"ContainerDied","Data":"f4de68457af9e6562d7e52c86dfac03dd4c7fb0cdd45c79df9ff7ffb38a145d8"} Feb 26 22:43:15 crc kubenswrapper[4910]: I0226 22:43:15.498322 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" Feb 26 22:43:15 crc kubenswrapper[4910]: I0226 22:43:15.592250 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9d56\" (UniqueName: \"kubernetes.io/projected/975c1c11-dac1-4a07-bd11-3ef32ccf0449-kube-api-access-p9d56\") pod \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\" (UID: \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\") " Feb 26 22:43:15 crc kubenswrapper[4910]: I0226 22:43:15.592389 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-telemetry-combined-ca-bundle\") pod \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\" (UID: \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\") " Feb 26 22:43:15 crc kubenswrapper[4910]: I0226 22:43:15.592429 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-inventory\") pod \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\" (UID: \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\") " Feb 26 22:43:15 crc kubenswrapper[4910]: I0226 22:43:15.592562 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-ceilometer-compute-config-data-1\") pod \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\" (UID: \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\") " Feb 26 22:43:15 crc kubenswrapper[4910]: I0226 22:43:15.592629 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-ssh-key-openstack-edpm-ipam\") pod \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\" (UID: \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\") " Feb 26 22:43:15 crc kubenswrapper[4910]: I0226 
22:43:15.592778 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-ceilometer-compute-config-data-2\") pod \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\" (UID: \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\") " Feb 26 22:43:15 crc kubenswrapper[4910]: I0226 22:43:15.592845 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-ceilometer-compute-config-data-0\") pod \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\" (UID: \"975c1c11-dac1-4a07-bd11-3ef32ccf0449\") " Feb 26 22:43:15 crc kubenswrapper[4910]: I0226 22:43:15.603507 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "975c1c11-dac1-4a07-bd11-3ef32ccf0449" (UID: "975c1c11-dac1-4a07-bd11-3ef32ccf0449"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:43:15 crc kubenswrapper[4910]: I0226 22:43:15.618959 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/975c1c11-dac1-4a07-bd11-3ef32ccf0449-kube-api-access-p9d56" (OuterVolumeSpecName: "kube-api-access-p9d56") pod "975c1c11-dac1-4a07-bd11-3ef32ccf0449" (UID: "975c1c11-dac1-4a07-bd11-3ef32ccf0449"). InnerVolumeSpecName "kube-api-access-p9d56". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:43:15 crc kubenswrapper[4910]: I0226 22:43:15.624570 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "975c1c11-dac1-4a07-bd11-3ef32ccf0449" (UID: "975c1c11-dac1-4a07-bd11-3ef32ccf0449"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:43:15 crc kubenswrapper[4910]: I0226 22:43:15.628343 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "975c1c11-dac1-4a07-bd11-3ef32ccf0449" (UID: "975c1c11-dac1-4a07-bd11-3ef32ccf0449"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:43:15 crc kubenswrapper[4910]: I0226 22:43:15.636506 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "975c1c11-dac1-4a07-bd11-3ef32ccf0449" (UID: "975c1c11-dac1-4a07-bd11-3ef32ccf0449"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:43:15 crc kubenswrapper[4910]: I0226 22:43:15.656055 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-inventory" (OuterVolumeSpecName: "inventory") pod "975c1c11-dac1-4a07-bd11-3ef32ccf0449" (UID: "975c1c11-dac1-4a07-bd11-3ef32ccf0449"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:43:15 crc kubenswrapper[4910]: I0226 22:43:15.656605 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "975c1c11-dac1-4a07-bd11-3ef32ccf0449" (UID: "975c1c11-dac1-4a07-bd11-3ef32ccf0449"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:43:15 crc kubenswrapper[4910]: I0226 22:43:15.696073 4910 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 26 22:43:15 crc kubenswrapper[4910]: I0226 22:43:15.696114 4910 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 26 22:43:15 crc kubenswrapper[4910]: I0226 22:43:15.696136 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9d56\" (UniqueName: \"kubernetes.io/projected/975c1c11-dac1-4a07-bd11-3ef32ccf0449-kube-api-access-p9d56\") on node \"crc\" DevicePath \"\"" Feb 26 22:43:15 crc kubenswrapper[4910]: I0226 22:43:15.696189 4910 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 22:43:15 crc kubenswrapper[4910]: I0226 22:43:15.696210 4910 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 22:43:15 crc 
kubenswrapper[4910]: I0226 22:43:15.696228 4910 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 26 22:43:15 crc kubenswrapper[4910]: I0226 22:43:15.696247 4910 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/975c1c11-dac1-4a07-bd11-3ef32ccf0449-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 22:43:15 crc kubenswrapper[4910]: I0226 22:43:15.985726 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" event={"ID":"975c1c11-dac1-4a07-bd11-3ef32ccf0449","Type":"ContainerDied","Data":"dc6156a9ffa5b21a172963e116ea14be1f1696bca5eb7990f69a6c3dc38a0fa1"} Feb 26 22:43:15 crc kubenswrapper[4910]: I0226 22:43:15.985790 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc6156a9ffa5b21a172963e116ea14be1f1696bca5eb7990f69a6c3dc38a0fa1" Feb 26 22:43:15 crc kubenswrapper[4910]: I0226 22:43:15.985832 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sj95z" Feb 26 22:44:00 crc kubenswrapper[4910]: I0226 22:44:00.168615 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535764-vrnn8"] Feb 26 22:44:00 crc kubenswrapper[4910]: E0226 22:44:00.169851 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="975c1c11-dac1-4a07-bd11-3ef32ccf0449" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 26 22:44:00 crc kubenswrapper[4910]: I0226 22:44:00.169875 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="975c1c11-dac1-4a07-bd11-3ef32ccf0449" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 26 22:44:00 crc kubenswrapper[4910]: E0226 22:44:00.169897 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6819cb51-1432-42c1-a197-97d6f52e7d96" containerName="extract-content" Feb 26 22:44:00 crc kubenswrapper[4910]: I0226 22:44:00.169909 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="6819cb51-1432-42c1-a197-97d6f52e7d96" containerName="extract-content" Feb 26 22:44:00 crc kubenswrapper[4910]: E0226 22:44:00.169938 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6819cb51-1432-42c1-a197-97d6f52e7d96" containerName="extract-utilities" Feb 26 22:44:00 crc kubenswrapper[4910]: I0226 22:44:00.169951 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="6819cb51-1432-42c1-a197-97d6f52e7d96" containerName="extract-utilities" Feb 26 22:44:00 crc kubenswrapper[4910]: E0226 22:44:00.169993 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6819cb51-1432-42c1-a197-97d6f52e7d96" containerName="registry-server" Feb 26 22:44:00 crc kubenswrapper[4910]: I0226 22:44:00.170005 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="6819cb51-1432-42c1-a197-97d6f52e7d96" containerName="registry-server" Feb 26 22:44:00 crc kubenswrapper[4910]: I0226 22:44:00.170409 4910 
memory_manager.go:354] "RemoveStaleState removing state" podUID="6819cb51-1432-42c1-a197-97d6f52e7d96" containerName="registry-server" Feb 26 22:44:00 crc kubenswrapper[4910]: I0226 22:44:00.170438 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="975c1c11-dac1-4a07-bd11-3ef32ccf0449" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 26 22:44:00 crc kubenswrapper[4910]: I0226 22:44:00.171621 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535764-vrnn8" Feb 26 22:44:00 crc kubenswrapper[4910]: I0226 22:44:00.174678 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 22:44:00 crc kubenswrapper[4910]: I0226 22:44:00.177606 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 22:44:00 crc kubenswrapper[4910]: I0226 22:44:00.177830 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 22:44:00 crc kubenswrapper[4910]: I0226 22:44:00.184895 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535764-vrnn8"] Feb 26 22:44:00 crc kubenswrapper[4910]: I0226 22:44:00.270026 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jllfv\" (UniqueName: \"kubernetes.io/projected/4875b3ff-72a4-4075-9d13-210476d09ddf-kube-api-access-jllfv\") pod \"auto-csr-approver-29535764-vrnn8\" (UID: \"4875b3ff-72a4-4075-9d13-210476d09ddf\") " pod="openshift-infra/auto-csr-approver-29535764-vrnn8" Feb 26 22:44:00 crc kubenswrapper[4910]: I0226 22:44:00.372889 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jllfv\" (UniqueName: \"kubernetes.io/projected/4875b3ff-72a4-4075-9d13-210476d09ddf-kube-api-access-jllfv\") pod 
\"auto-csr-approver-29535764-vrnn8\" (UID: \"4875b3ff-72a4-4075-9d13-210476d09ddf\") " pod="openshift-infra/auto-csr-approver-29535764-vrnn8" Feb 26 22:44:00 crc kubenswrapper[4910]: I0226 22:44:00.393933 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jllfv\" (UniqueName: \"kubernetes.io/projected/4875b3ff-72a4-4075-9d13-210476d09ddf-kube-api-access-jllfv\") pod \"auto-csr-approver-29535764-vrnn8\" (UID: \"4875b3ff-72a4-4075-9d13-210476d09ddf\") " pod="openshift-infra/auto-csr-approver-29535764-vrnn8" Feb 26 22:44:00 crc kubenswrapper[4910]: I0226 22:44:00.547771 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535764-vrnn8" Feb 26 22:44:01 crc kubenswrapper[4910]: I0226 22:44:01.010779 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535764-vrnn8"] Feb 26 22:44:01 crc kubenswrapper[4910]: I0226 22:44:01.012619 4910 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 22:44:01 crc kubenswrapper[4910]: I0226 22:44:01.786443 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535764-vrnn8" event={"ID":"4875b3ff-72a4-4075-9d13-210476d09ddf","Type":"ContainerStarted","Data":"4227e24b340fe21dd24db18f4efef9d1e3f3b5d149d4154fdd1bf407f9fa5536"} Feb 26 22:44:02 crc kubenswrapper[4910]: I0226 22:44:02.801920 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535764-vrnn8" event={"ID":"4875b3ff-72a4-4075-9d13-210476d09ddf","Type":"ContainerStarted","Data":"25325538696b793c6b180155afd16dfb0b613eb93958ab162a8b4bfb3bb52f0b"} Feb 26 22:44:03 crc kubenswrapper[4910]: I0226 22:44:03.816563 4910 generic.go:334] "Generic (PLEG): container finished" podID="4875b3ff-72a4-4075-9d13-210476d09ddf" containerID="25325538696b793c6b180155afd16dfb0b613eb93958ab162a8b4bfb3bb52f0b" exitCode=0 Feb 26 
22:44:03 crc kubenswrapper[4910]: I0226 22:44:03.816644 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535764-vrnn8" event={"ID":"4875b3ff-72a4-4075-9d13-210476d09ddf","Type":"ContainerDied","Data":"25325538696b793c6b180155afd16dfb0b613eb93958ab162a8b4bfb3bb52f0b"} Feb 26 22:44:04 crc kubenswrapper[4910]: I0226 22:44:04.402472 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535764-vrnn8" Feb 26 22:44:04 crc kubenswrapper[4910]: I0226 22:44:04.515200 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jllfv\" (UniqueName: \"kubernetes.io/projected/4875b3ff-72a4-4075-9d13-210476d09ddf-kube-api-access-jllfv\") pod \"4875b3ff-72a4-4075-9d13-210476d09ddf\" (UID: \"4875b3ff-72a4-4075-9d13-210476d09ddf\") " Feb 26 22:44:04 crc kubenswrapper[4910]: I0226 22:44:04.523922 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4875b3ff-72a4-4075-9d13-210476d09ddf-kube-api-access-jllfv" (OuterVolumeSpecName: "kube-api-access-jllfv") pod "4875b3ff-72a4-4075-9d13-210476d09ddf" (UID: "4875b3ff-72a4-4075-9d13-210476d09ddf"). InnerVolumeSpecName "kube-api-access-jllfv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:44:04 crc kubenswrapper[4910]: I0226 22:44:04.619021 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jllfv\" (UniqueName: \"kubernetes.io/projected/4875b3ff-72a4-4075-9d13-210476d09ddf-kube-api-access-jllfv\") on node \"crc\" DevicePath \"\"" Feb 26 22:44:04 crc kubenswrapper[4910]: I0226 22:44:04.829689 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535764-vrnn8" event={"ID":"4875b3ff-72a4-4075-9d13-210476d09ddf","Type":"ContainerDied","Data":"4227e24b340fe21dd24db18f4efef9d1e3f3b5d149d4154fdd1bf407f9fa5536"} Feb 26 22:44:04 crc kubenswrapper[4910]: I0226 22:44:04.829739 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4227e24b340fe21dd24db18f4efef9d1e3f3b5d149d4154fdd1bf407f9fa5536" Feb 26 22:44:04 crc kubenswrapper[4910]: I0226 22:44:04.829803 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535764-vrnn8" Feb 26 22:44:05 crc kubenswrapper[4910]: I0226 22:44:05.481684 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535758-rxshz"] Feb 26 22:44:05 crc kubenswrapper[4910]: I0226 22:44:05.497644 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535758-rxshz"] Feb 26 22:44:05 crc kubenswrapper[4910]: I0226 22:44:05.922337 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="382986d0-2a30-4690-97e4-a25b805cf0e5" path="/var/lib/kubelet/pods/382986d0-2a30-4690-97e4-a25b805cf0e5/volumes" Feb 26 22:44:25 crc kubenswrapper[4910]: I0226 22:44:25.727558 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 26 22:44:25 crc kubenswrapper[4910]: I0226 22:44:25.729184 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:44:39 crc kubenswrapper[4910]: I0226 22:44:39.431651 4910 scope.go:117] "RemoveContainer" containerID="d0d83664fde9de447f2c7a3d3c2793a9716b3995e4159236186c0f4262600411" Feb 26 22:44:55 crc kubenswrapper[4910]: I0226 22:44:55.656295 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vg6wm"] Feb 26 22:44:55 crc kubenswrapper[4910]: E0226 22:44:55.657559 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4875b3ff-72a4-4075-9d13-210476d09ddf" containerName="oc" Feb 26 22:44:55 crc kubenswrapper[4910]: I0226 22:44:55.657581 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="4875b3ff-72a4-4075-9d13-210476d09ddf" containerName="oc" Feb 26 22:44:55 crc kubenswrapper[4910]: I0226 22:44:55.658190 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="4875b3ff-72a4-4075-9d13-210476d09ddf" containerName="oc" Feb 26 22:44:55 crc kubenswrapper[4910]: I0226 22:44:55.661232 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vg6wm" Feb 26 22:44:55 crc kubenswrapper[4910]: I0226 22:44:55.670342 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vg6wm"] Feb 26 22:44:55 crc kubenswrapper[4910]: I0226 22:44:55.727071 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:44:55 crc kubenswrapper[4910]: I0226 22:44:55.727335 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:44:55 crc kubenswrapper[4910]: I0226 22:44:55.816287 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f-catalog-content\") pod \"certified-operators-vg6wm\" (UID: \"cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f\") " pod="openshift-marketplace/certified-operators-vg6wm" Feb 26 22:44:55 crc kubenswrapper[4910]: I0226 22:44:55.816440 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f-utilities\") pod \"certified-operators-vg6wm\" (UID: \"cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f\") " pod="openshift-marketplace/certified-operators-vg6wm" Feb 26 22:44:55 crc kubenswrapper[4910]: I0226 22:44:55.816490 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rg8sd\" (UniqueName: \"kubernetes.io/projected/cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f-kube-api-access-rg8sd\") pod \"certified-operators-vg6wm\" (UID: \"cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f\") " pod="openshift-marketplace/certified-operators-vg6wm" Feb 26 22:44:55 crc kubenswrapper[4910]: I0226 22:44:55.918453 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f-catalog-content\") pod \"certified-operators-vg6wm\" (UID: \"cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f\") " pod="openshift-marketplace/certified-operators-vg6wm" Feb 26 22:44:55 crc kubenswrapper[4910]: I0226 22:44:55.918580 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f-utilities\") pod \"certified-operators-vg6wm\" (UID: \"cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f\") " pod="openshift-marketplace/certified-operators-vg6wm" Feb 26 22:44:55 crc kubenswrapper[4910]: I0226 22:44:55.918644 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg8sd\" (UniqueName: \"kubernetes.io/projected/cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f-kube-api-access-rg8sd\") pod \"certified-operators-vg6wm\" (UID: \"cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f\") " pod="openshift-marketplace/certified-operators-vg6wm" Feb 26 22:44:55 crc kubenswrapper[4910]: I0226 22:44:55.919567 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f-catalog-content\") pod \"certified-operators-vg6wm\" (UID: \"cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f\") " pod="openshift-marketplace/certified-operators-vg6wm" Feb 26 22:44:55 crc kubenswrapper[4910]: I0226 22:44:55.919637 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f-utilities\") pod \"certified-operators-vg6wm\" (UID: \"cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f\") " pod="openshift-marketplace/certified-operators-vg6wm" Feb 26 22:44:55 crc kubenswrapper[4910]: I0226 22:44:55.940578 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg8sd\" (UniqueName: \"kubernetes.io/projected/cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f-kube-api-access-rg8sd\") pod \"certified-operators-vg6wm\" (UID: \"cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f\") " pod="openshift-marketplace/certified-operators-vg6wm" Feb 26 22:44:55 crc kubenswrapper[4910]: I0226 22:44:55.991972 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vg6wm" Feb 26 22:44:56 crc kubenswrapper[4910]: I0226 22:44:56.507514 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vg6wm"] Feb 26 22:44:57 crc kubenswrapper[4910]: I0226 22:44:57.478038 4910 generic.go:334] "Generic (PLEG): container finished" podID="cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f" containerID="ac11e1f190fbc32e1ed3bd2a7e9a270c6c51b11f31359df37dafdd7c4407adb0" exitCode=0 Feb 26 22:44:57 crc kubenswrapper[4910]: I0226 22:44:57.478216 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vg6wm" event={"ID":"cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f","Type":"ContainerDied","Data":"ac11e1f190fbc32e1ed3bd2a7e9a270c6c51b11f31359df37dafdd7c4407adb0"} Feb 26 22:44:57 crc kubenswrapper[4910]: I0226 22:44:57.478458 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vg6wm" event={"ID":"cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f","Type":"ContainerStarted","Data":"16ee3e937ec21623a56930ac757b3b4454243d94b286bd0b2af39bf52bf9ac85"} Feb 26 22:44:58 crc kubenswrapper[4910]: I0226 22:44:58.447973 4910 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-marketplace-b7krq"] Feb 26 22:44:58 crc kubenswrapper[4910]: I0226 22:44:58.458532 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b7krq" Feb 26 22:44:58 crc kubenswrapper[4910]: I0226 22:44:58.478874 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7krq"] Feb 26 22:44:58 crc kubenswrapper[4910]: I0226 22:44:58.494401 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vg6wm" event={"ID":"cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f","Type":"ContainerStarted","Data":"a69b28676651a5e9132db5b4181dd5fa27f92c9de42bf9f5b7de3908a1cda2c4"} Feb 26 22:44:58 crc kubenswrapper[4910]: I0226 22:44:58.593187 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq6nf\" (UniqueName: \"kubernetes.io/projected/f7da9e09-829d-4b94-8571-e24927235b78-kube-api-access-pq6nf\") pod \"redhat-marketplace-b7krq\" (UID: \"f7da9e09-829d-4b94-8571-e24927235b78\") " pod="openshift-marketplace/redhat-marketplace-b7krq" Feb 26 22:44:58 crc kubenswrapper[4910]: I0226 22:44:58.593243 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7da9e09-829d-4b94-8571-e24927235b78-utilities\") pod \"redhat-marketplace-b7krq\" (UID: \"f7da9e09-829d-4b94-8571-e24927235b78\") " pod="openshift-marketplace/redhat-marketplace-b7krq" Feb 26 22:44:58 crc kubenswrapper[4910]: I0226 22:44:58.593344 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7da9e09-829d-4b94-8571-e24927235b78-catalog-content\") pod \"redhat-marketplace-b7krq\" (UID: \"f7da9e09-829d-4b94-8571-e24927235b78\") " pod="openshift-marketplace/redhat-marketplace-b7krq" 
Feb 26 22:44:58 crc kubenswrapper[4910]: I0226 22:44:58.695192 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq6nf\" (UniqueName: \"kubernetes.io/projected/f7da9e09-829d-4b94-8571-e24927235b78-kube-api-access-pq6nf\") pod \"redhat-marketplace-b7krq\" (UID: \"f7da9e09-829d-4b94-8571-e24927235b78\") " pod="openshift-marketplace/redhat-marketplace-b7krq" Feb 26 22:44:58 crc kubenswrapper[4910]: I0226 22:44:58.695248 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7da9e09-829d-4b94-8571-e24927235b78-utilities\") pod \"redhat-marketplace-b7krq\" (UID: \"f7da9e09-829d-4b94-8571-e24927235b78\") " pod="openshift-marketplace/redhat-marketplace-b7krq" Feb 26 22:44:58 crc kubenswrapper[4910]: I0226 22:44:58.695333 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7da9e09-829d-4b94-8571-e24927235b78-catalog-content\") pod \"redhat-marketplace-b7krq\" (UID: \"f7da9e09-829d-4b94-8571-e24927235b78\") " pod="openshift-marketplace/redhat-marketplace-b7krq" Feb 26 22:44:58 crc kubenswrapper[4910]: I0226 22:44:58.695978 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7da9e09-829d-4b94-8571-e24927235b78-catalog-content\") pod \"redhat-marketplace-b7krq\" (UID: \"f7da9e09-829d-4b94-8571-e24927235b78\") " pod="openshift-marketplace/redhat-marketplace-b7krq" Feb 26 22:44:58 crc kubenswrapper[4910]: I0226 22:44:58.696823 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7da9e09-829d-4b94-8571-e24927235b78-utilities\") pod \"redhat-marketplace-b7krq\" (UID: \"f7da9e09-829d-4b94-8571-e24927235b78\") " pod="openshift-marketplace/redhat-marketplace-b7krq" Feb 26 22:44:58 crc kubenswrapper[4910]: I0226 
22:44:58.720091 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq6nf\" (UniqueName: \"kubernetes.io/projected/f7da9e09-829d-4b94-8571-e24927235b78-kube-api-access-pq6nf\") pod \"redhat-marketplace-b7krq\" (UID: \"f7da9e09-829d-4b94-8571-e24927235b78\") " pod="openshift-marketplace/redhat-marketplace-b7krq" Feb 26 22:44:58 crc kubenswrapper[4910]: I0226 22:44:58.789480 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b7krq" Feb 26 22:44:59 crc kubenswrapper[4910]: I0226 22:44:59.275256 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7krq"] Feb 26 22:44:59 crc kubenswrapper[4910]: W0226 22:44:59.285331 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7da9e09_829d_4b94_8571_e24927235b78.slice/crio-02b6fa7dbf267a42a480ea7f2bb13229075c996256d37e70c2ec0388d5f907be WatchSource:0}: Error finding container 02b6fa7dbf267a42a480ea7f2bb13229075c996256d37e70c2ec0388d5f907be: Status 404 returned error can't find the container with id 02b6fa7dbf267a42a480ea7f2bb13229075c996256d37e70c2ec0388d5f907be Feb 26 22:44:59 crc kubenswrapper[4910]: I0226 22:44:59.504018 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7krq" event={"ID":"f7da9e09-829d-4b94-8571-e24927235b78","Type":"ContainerStarted","Data":"1ab28d3d927fb411fe16e55a2b9a4979dbb4fbeb4fc63d3e9926b1862a3c4cb9"} Feb 26 22:44:59 crc kubenswrapper[4910]: I0226 22:44:59.504291 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7krq" event={"ID":"f7da9e09-829d-4b94-8571-e24927235b78","Type":"ContainerStarted","Data":"02b6fa7dbf267a42a480ea7f2bb13229075c996256d37e70c2ec0388d5f907be"} Feb 26 22:44:59 crc kubenswrapper[4910]: I0226 22:44:59.507833 4910 generic.go:334] "Generic 
(PLEG): container finished" podID="cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f" containerID="a69b28676651a5e9132db5b4181dd5fa27f92c9de42bf9f5b7de3908a1cda2c4" exitCode=0 Feb 26 22:44:59 crc kubenswrapper[4910]: I0226 22:44:59.507888 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vg6wm" event={"ID":"cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f","Type":"ContainerDied","Data":"a69b28676651a5e9132db5b4181dd5fa27f92c9de42bf9f5b7de3908a1cda2c4"} Feb 26 22:45:00 crc kubenswrapper[4910]: I0226 22:45:00.163600 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535765-rrcrn"] Feb 26 22:45:00 crc kubenswrapper[4910]: I0226 22:45:00.166073 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535765-rrcrn" Feb 26 22:45:00 crc kubenswrapper[4910]: I0226 22:45:00.168534 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 22:45:00 crc kubenswrapper[4910]: I0226 22:45:00.171460 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 22:45:00 crc kubenswrapper[4910]: I0226 22:45:00.177531 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535765-rrcrn"] Feb 26 22:45:00 crc kubenswrapper[4910]: I0226 22:45:00.334770 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e1155f6-03b8-45fd-b083-a9e4b480ae55-secret-volume\") pod \"collect-profiles-29535765-rrcrn\" (UID: \"4e1155f6-03b8-45fd-b083-a9e4b480ae55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535765-rrcrn" Feb 26 22:45:00 crc kubenswrapper[4910]: I0226 22:45:00.334826 4910 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e1155f6-03b8-45fd-b083-a9e4b480ae55-config-volume\") pod \"collect-profiles-29535765-rrcrn\" (UID: \"4e1155f6-03b8-45fd-b083-a9e4b480ae55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535765-rrcrn" Feb 26 22:45:00 crc kubenswrapper[4910]: I0226 22:45:00.334958 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7h26\" (UniqueName: \"kubernetes.io/projected/4e1155f6-03b8-45fd-b083-a9e4b480ae55-kube-api-access-d7h26\") pod \"collect-profiles-29535765-rrcrn\" (UID: \"4e1155f6-03b8-45fd-b083-a9e4b480ae55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535765-rrcrn" Feb 26 22:45:00 crc kubenswrapper[4910]: I0226 22:45:00.436446 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e1155f6-03b8-45fd-b083-a9e4b480ae55-secret-volume\") pod \"collect-profiles-29535765-rrcrn\" (UID: \"4e1155f6-03b8-45fd-b083-a9e4b480ae55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535765-rrcrn" Feb 26 22:45:00 crc kubenswrapper[4910]: I0226 22:45:00.436491 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e1155f6-03b8-45fd-b083-a9e4b480ae55-config-volume\") pod \"collect-profiles-29535765-rrcrn\" (UID: \"4e1155f6-03b8-45fd-b083-a9e4b480ae55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535765-rrcrn" Feb 26 22:45:00 crc kubenswrapper[4910]: I0226 22:45:00.436626 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7h26\" (UniqueName: \"kubernetes.io/projected/4e1155f6-03b8-45fd-b083-a9e4b480ae55-kube-api-access-d7h26\") pod \"collect-profiles-29535765-rrcrn\" (UID: 
\"4e1155f6-03b8-45fd-b083-a9e4b480ae55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535765-rrcrn" Feb 26 22:45:00 crc kubenswrapper[4910]: I0226 22:45:00.437376 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e1155f6-03b8-45fd-b083-a9e4b480ae55-config-volume\") pod \"collect-profiles-29535765-rrcrn\" (UID: \"4e1155f6-03b8-45fd-b083-a9e4b480ae55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535765-rrcrn" Feb 26 22:45:00 crc kubenswrapper[4910]: I0226 22:45:00.446110 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e1155f6-03b8-45fd-b083-a9e4b480ae55-secret-volume\") pod \"collect-profiles-29535765-rrcrn\" (UID: \"4e1155f6-03b8-45fd-b083-a9e4b480ae55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535765-rrcrn" Feb 26 22:45:00 crc kubenswrapper[4910]: I0226 22:45:00.454136 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7h26\" (UniqueName: \"kubernetes.io/projected/4e1155f6-03b8-45fd-b083-a9e4b480ae55-kube-api-access-d7h26\") pod \"collect-profiles-29535765-rrcrn\" (UID: \"4e1155f6-03b8-45fd-b083-a9e4b480ae55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535765-rrcrn" Feb 26 22:45:00 crc kubenswrapper[4910]: I0226 22:45:00.492775 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535765-rrcrn" Feb 26 22:45:00 crc kubenswrapper[4910]: I0226 22:45:00.528029 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vg6wm" event={"ID":"cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f","Type":"ContainerStarted","Data":"efe638b478800c60affcf56723f68392c11996ebd12d00aacf7232007ad7446d"} Feb 26 22:45:00 crc kubenswrapper[4910]: I0226 22:45:00.533634 4910 generic.go:334] "Generic (PLEG): container finished" podID="f7da9e09-829d-4b94-8571-e24927235b78" containerID="1ab28d3d927fb411fe16e55a2b9a4979dbb4fbeb4fc63d3e9926b1862a3c4cb9" exitCode=0 Feb 26 22:45:00 crc kubenswrapper[4910]: I0226 22:45:00.533682 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7krq" event={"ID":"f7da9e09-829d-4b94-8571-e24927235b78","Type":"ContainerDied","Data":"1ab28d3d927fb411fe16e55a2b9a4979dbb4fbeb4fc63d3e9926b1862a3c4cb9"} Feb 26 22:45:00 crc kubenswrapper[4910]: I0226 22:45:00.570699 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vg6wm" podStartSLOduration=3.129060159 podStartE2EDuration="5.570680965s" podCreationTimestamp="2026-02-26 22:44:55 +0000 UTC" firstStartedPulling="2026-02-26 22:44:57.480952385 +0000 UTC m=+2982.560442966" lastFinishedPulling="2026-02-26 22:44:59.922573231 +0000 UTC m=+2985.002063772" observedRunningTime="2026-02-26 22:45:00.554831553 +0000 UTC m=+2985.634322104" watchObservedRunningTime="2026-02-26 22:45:00.570680965 +0000 UTC m=+2985.650171526" Feb 26 22:45:01 crc kubenswrapper[4910]: I0226 22:45:01.047351 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535765-rrcrn"] Feb 26 22:45:01 crc kubenswrapper[4910]: I0226 22:45:01.554697 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7krq" 
event={"ID":"f7da9e09-829d-4b94-8571-e24927235b78","Type":"ContainerStarted","Data":"d79170495039efa406740b711cf59ec9dc685a9f1257e1359bdc3bf82a6dc187"} Feb 26 22:45:01 crc kubenswrapper[4910]: I0226 22:45:01.560202 4910 generic.go:334] "Generic (PLEG): container finished" podID="4e1155f6-03b8-45fd-b083-a9e4b480ae55" containerID="ffb5f5a51a7e32207a6ef64da75e2815cea3399b31533612d3393e71fb8f47b5" exitCode=0 Feb 26 22:45:01 crc kubenswrapper[4910]: I0226 22:45:01.560295 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535765-rrcrn" event={"ID":"4e1155f6-03b8-45fd-b083-a9e4b480ae55","Type":"ContainerDied","Data":"ffb5f5a51a7e32207a6ef64da75e2815cea3399b31533612d3393e71fb8f47b5"} Feb 26 22:45:01 crc kubenswrapper[4910]: I0226 22:45:01.560357 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535765-rrcrn" event={"ID":"4e1155f6-03b8-45fd-b083-a9e4b480ae55","Type":"ContainerStarted","Data":"db37530647b4715ab6807c613ec5af81621e237c20a74cf3d97d31f674a0b6d7"} Feb 26 22:45:01 crc kubenswrapper[4910]: E0226 22:45:01.641563 4910 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e1155f6_03b8_45fd_b083_a9e4b480ae55.slice/crio-conmon-ffb5f5a51a7e32207a6ef64da75e2815cea3399b31533612d3393e71fb8f47b5.scope\": RecentStats: unable to find data in memory cache]" Feb 26 22:45:02 crc kubenswrapper[4910]: I0226 22:45:02.578393 4910 generic.go:334] "Generic (PLEG): container finished" podID="f7da9e09-829d-4b94-8571-e24927235b78" containerID="d79170495039efa406740b711cf59ec9dc685a9f1257e1359bdc3bf82a6dc187" exitCode=0 Feb 26 22:45:02 crc kubenswrapper[4910]: I0226 22:45:02.578528 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7krq" 
event={"ID":"f7da9e09-829d-4b94-8571-e24927235b78","Type":"ContainerDied","Data":"d79170495039efa406740b711cf59ec9dc685a9f1257e1359bdc3bf82a6dc187"} Feb 26 22:45:03 crc kubenswrapper[4910]: I0226 22:45:03.111728 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535765-rrcrn" Feb 26 22:45:03 crc kubenswrapper[4910]: I0226 22:45:03.214640 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e1155f6-03b8-45fd-b083-a9e4b480ae55-config-volume\") pod \"4e1155f6-03b8-45fd-b083-a9e4b480ae55\" (UID: \"4e1155f6-03b8-45fd-b083-a9e4b480ae55\") " Feb 26 22:45:03 crc kubenswrapper[4910]: I0226 22:45:03.215072 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7h26\" (UniqueName: \"kubernetes.io/projected/4e1155f6-03b8-45fd-b083-a9e4b480ae55-kube-api-access-d7h26\") pod \"4e1155f6-03b8-45fd-b083-a9e4b480ae55\" (UID: \"4e1155f6-03b8-45fd-b083-a9e4b480ae55\") " Feb 26 22:45:03 crc kubenswrapper[4910]: I0226 22:45:03.215126 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e1155f6-03b8-45fd-b083-a9e4b480ae55-secret-volume\") pod \"4e1155f6-03b8-45fd-b083-a9e4b480ae55\" (UID: \"4e1155f6-03b8-45fd-b083-a9e4b480ae55\") " Feb 26 22:45:03 crc kubenswrapper[4910]: I0226 22:45:03.215509 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e1155f6-03b8-45fd-b083-a9e4b480ae55-config-volume" (OuterVolumeSpecName: "config-volume") pod "4e1155f6-03b8-45fd-b083-a9e4b480ae55" (UID: "4e1155f6-03b8-45fd-b083-a9e4b480ae55"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:45:03 crc kubenswrapper[4910]: I0226 22:45:03.216275 4910 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e1155f6-03b8-45fd-b083-a9e4b480ae55-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 22:45:03 crc kubenswrapper[4910]: I0226 22:45:03.222269 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e1155f6-03b8-45fd-b083-a9e4b480ae55-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4e1155f6-03b8-45fd-b083-a9e4b480ae55" (UID: "4e1155f6-03b8-45fd-b083-a9e4b480ae55"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:45:03 crc kubenswrapper[4910]: I0226 22:45:03.222931 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e1155f6-03b8-45fd-b083-a9e4b480ae55-kube-api-access-d7h26" (OuterVolumeSpecName: "kube-api-access-d7h26") pod "4e1155f6-03b8-45fd-b083-a9e4b480ae55" (UID: "4e1155f6-03b8-45fd-b083-a9e4b480ae55"). InnerVolumeSpecName "kube-api-access-d7h26". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:45:03 crc kubenswrapper[4910]: I0226 22:45:03.318044 4910 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e1155f6-03b8-45fd-b083-a9e4b480ae55-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 22:45:03 crc kubenswrapper[4910]: I0226 22:45:03.318090 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7h26\" (UniqueName: \"kubernetes.io/projected/4e1155f6-03b8-45fd-b083-a9e4b480ae55-kube-api-access-d7h26\") on node \"crc\" DevicePath \"\"" Feb 26 22:45:03 crc kubenswrapper[4910]: I0226 22:45:03.608727 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535765-rrcrn" event={"ID":"4e1155f6-03b8-45fd-b083-a9e4b480ae55","Type":"ContainerDied","Data":"db37530647b4715ab6807c613ec5af81621e237c20a74cf3d97d31f674a0b6d7"} Feb 26 22:45:03 crc kubenswrapper[4910]: I0226 22:45:03.608777 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db37530647b4715ab6807c613ec5af81621e237c20a74cf3d97d31f674a0b6d7" Feb 26 22:45:03 crc kubenswrapper[4910]: I0226 22:45:03.608858 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535765-rrcrn" Feb 26 22:45:04 crc kubenswrapper[4910]: I0226 22:45:04.229923 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535720-ghhr4"] Feb 26 22:45:04 crc kubenswrapper[4910]: I0226 22:45:04.239289 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535720-ghhr4"] Feb 26 22:45:04 crc kubenswrapper[4910]: I0226 22:45:04.619372 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7krq" event={"ID":"f7da9e09-829d-4b94-8571-e24927235b78","Type":"ContainerStarted","Data":"3013cf19bc647e9c71efe4f9cc0a4140d36dc08425e933947cd24033c02df753"} Feb 26 22:45:04 crc kubenswrapper[4910]: I0226 22:45:04.641266 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b7krq" podStartSLOduration=3.741133273 podStartE2EDuration="6.641250573s" podCreationTimestamp="2026-02-26 22:44:58 +0000 UTC" firstStartedPulling="2026-02-26 22:45:00.535818027 +0000 UTC m=+2985.615308578" lastFinishedPulling="2026-02-26 22:45:03.435935307 +0000 UTC m=+2988.515425878" observedRunningTime="2026-02-26 22:45:04.633799991 +0000 UTC m=+2989.713290532" watchObservedRunningTime="2026-02-26 22:45:04.641250573 +0000 UTC m=+2989.720741104" Feb 26 22:45:05 crc kubenswrapper[4910]: I0226 22:45:05.921143 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f16dbf5-b263-4e40-b38b-d615de6d7b2c" path="/var/lib/kubelet/pods/8f16dbf5-b263-4e40-b38b-d615de6d7b2c/volumes" Feb 26 22:45:05 crc kubenswrapper[4910]: I0226 22:45:05.992175 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vg6wm" Feb 26 22:45:05 crc kubenswrapper[4910]: I0226 22:45:05.992237 4910 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vg6wm" Feb 26 22:45:06 crc kubenswrapper[4910]: I0226 22:45:06.057131 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vg6wm" Feb 26 22:45:06 crc kubenswrapper[4910]: I0226 22:45:06.764127 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vg6wm" Feb 26 22:45:07 crc kubenswrapper[4910]: I0226 22:45:07.071503 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vg6wm"] Feb 26 22:45:08 crc kubenswrapper[4910]: I0226 22:45:08.673500 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vg6wm" podUID="cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f" containerName="registry-server" containerID="cri-o://efe638b478800c60affcf56723f68392c11996ebd12d00aacf7232007ad7446d" gracePeriod=2 Feb 26 22:45:08 crc kubenswrapper[4910]: I0226 22:45:08.792395 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b7krq" Feb 26 22:45:08 crc kubenswrapper[4910]: I0226 22:45:08.799595 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b7krq" Feb 26 22:45:08 crc kubenswrapper[4910]: I0226 22:45:08.894624 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b7krq" Feb 26 22:45:09 crc kubenswrapper[4910]: I0226 22:45:09.282759 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vg6wm" Feb 26 22:45:09 crc kubenswrapper[4910]: I0226 22:45:09.381620 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg8sd\" (UniqueName: \"kubernetes.io/projected/cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f-kube-api-access-rg8sd\") pod \"cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f\" (UID: \"cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f\") " Feb 26 22:45:09 crc kubenswrapper[4910]: I0226 22:45:09.381832 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f-utilities\") pod \"cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f\" (UID: \"cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f\") " Feb 26 22:45:09 crc kubenswrapper[4910]: I0226 22:45:09.382435 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f-utilities" (OuterVolumeSpecName: "utilities") pod "cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f" (UID: "cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:45:09 crc kubenswrapper[4910]: I0226 22:45:09.382531 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f-catalog-content\") pod \"cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f\" (UID: \"cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f\") " Feb 26 22:45:09 crc kubenswrapper[4910]: I0226 22:45:09.388812 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f-kube-api-access-rg8sd" (OuterVolumeSpecName: "kube-api-access-rg8sd") pod "cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f" (UID: "cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f"). InnerVolumeSpecName "kube-api-access-rg8sd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:45:09 crc kubenswrapper[4910]: I0226 22:45:09.401867 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 22:45:09 crc kubenswrapper[4910]: I0226 22:45:09.401898 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg8sd\" (UniqueName: \"kubernetes.io/projected/cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f-kube-api-access-rg8sd\") on node \"crc\" DevicePath \"\"" Feb 26 22:45:09 crc kubenswrapper[4910]: I0226 22:45:09.435205 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f" (UID: "cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:45:09 crc kubenswrapper[4910]: I0226 22:45:09.505439 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 22:45:09 crc kubenswrapper[4910]: I0226 22:45:09.702947 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vg6wm" Feb 26 22:45:09 crc kubenswrapper[4910]: I0226 22:45:09.703667 4910 generic.go:334] "Generic (PLEG): container finished" podID="cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f" containerID="efe638b478800c60affcf56723f68392c11996ebd12d00aacf7232007ad7446d" exitCode=0 Feb 26 22:45:09 crc kubenswrapper[4910]: I0226 22:45:09.703004 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vg6wm" event={"ID":"cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f","Type":"ContainerDied","Data":"efe638b478800c60affcf56723f68392c11996ebd12d00aacf7232007ad7446d"} Feb 26 22:45:09 crc kubenswrapper[4910]: I0226 22:45:09.703997 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vg6wm" event={"ID":"cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f","Type":"ContainerDied","Data":"16ee3e937ec21623a56930ac757b3b4454243d94b286bd0b2af39bf52bf9ac85"} Feb 26 22:45:09 crc kubenswrapper[4910]: I0226 22:45:09.704045 4910 scope.go:117] "RemoveContainer" containerID="efe638b478800c60affcf56723f68392c11996ebd12d00aacf7232007ad7446d" Feb 26 22:45:09 crc kubenswrapper[4910]: I0226 22:45:09.747047 4910 scope.go:117] "RemoveContainer" containerID="a69b28676651a5e9132db5b4181dd5fa27f92c9de42bf9f5b7de3908a1cda2c4" Feb 26 22:45:09 crc kubenswrapper[4910]: I0226 22:45:09.773503 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vg6wm"] Feb 26 22:45:09 crc kubenswrapper[4910]: I0226 22:45:09.788908 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vg6wm"] Feb 26 22:45:09 crc kubenswrapper[4910]: I0226 22:45:09.790937 4910 scope.go:117] "RemoveContainer" containerID="ac11e1f190fbc32e1ed3bd2a7e9a270c6c51b11f31359df37dafdd7c4407adb0" Feb 26 22:45:09 crc kubenswrapper[4910]: I0226 22:45:09.798126 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-b7krq" Feb 26 22:45:09 crc kubenswrapper[4910]: I0226 22:45:09.857643 4910 scope.go:117] "RemoveContainer" containerID="efe638b478800c60affcf56723f68392c11996ebd12d00aacf7232007ad7446d" Feb 26 22:45:09 crc kubenswrapper[4910]: E0226 22:45:09.858386 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efe638b478800c60affcf56723f68392c11996ebd12d00aacf7232007ad7446d\": container with ID starting with efe638b478800c60affcf56723f68392c11996ebd12d00aacf7232007ad7446d not found: ID does not exist" containerID="efe638b478800c60affcf56723f68392c11996ebd12d00aacf7232007ad7446d" Feb 26 22:45:09 crc kubenswrapper[4910]: I0226 22:45:09.858423 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efe638b478800c60affcf56723f68392c11996ebd12d00aacf7232007ad7446d"} err="failed to get container status \"efe638b478800c60affcf56723f68392c11996ebd12d00aacf7232007ad7446d\": rpc error: code = NotFound desc = could not find container \"efe638b478800c60affcf56723f68392c11996ebd12d00aacf7232007ad7446d\": container with ID starting with efe638b478800c60affcf56723f68392c11996ebd12d00aacf7232007ad7446d not found: ID does not exist" Feb 26 22:45:09 crc kubenswrapper[4910]: I0226 22:45:09.858448 4910 scope.go:117] "RemoveContainer" containerID="a69b28676651a5e9132db5b4181dd5fa27f92c9de42bf9f5b7de3908a1cda2c4" Feb 26 22:45:09 crc kubenswrapper[4910]: E0226 22:45:09.859076 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a69b28676651a5e9132db5b4181dd5fa27f92c9de42bf9f5b7de3908a1cda2c4\": container with ID starting with a69b28676651a5e9132db5b4181dd5fa27f92c9de42bf9f5b7de3908a1cda2c4 not found: ID does not exist" containerID="a69b28676651a5e9132db5b4181dd5fa27f92c9de42bf9f5b7de3908a1cda2c4" Feb 26 22:45:09 crc kubenswrapper[4910]: I0226 22:45:09.859151 
4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a69b28676651a5e9132db5b4181dd5fa27f92c9de42bf9f5b7de3908a1cda2c4"} err="failed to get container status \"a69b28676651a5e9132db5b4181dd5fa27f92c9de42bf9f5b7de3908a1cda2c4\": rpc error: code = NotFound desc = could not find container \"a69b28676651a5e9132db5b4181dd5fa27f92c9de42bf9f5b7de3908a1cda2c4\": container with ID starting with a69b28676651a5e9132db5b4181dd5fa27f92c9de42bf9f5b7de3908a1cda2c4 not found: ID does not exist" Feb 26 22:45:09 crc kubenswrapper[4910]: I0226 22:45:09.859224 4910 scope.go:117] "RemoveContainer" containerID="ac11e1f190fbc32e1ed3bd2a7e9a270c6c51b11f31359df37dafdd7c4407adb0" Feb 26 22:45:09 crc kubenswrapper[4910]: E0226 22:45:09.859828 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac11e1f190fbc32e1ed3bd2a7e9a270c6c51b11f31359df37dafdd7c4407adb0\": container with ID starting with ac11e1f190fbc32e1ed3bd2a7e9a270c6c51b11f31359df37dafdd7c4407adb0 not found: ID does not exist" containerID="ac11e1f190fbc32e1ed3bd2a7e9a270c6c51b11f31359df37dafdd7c4407adb0" Feb 26 22:45:09 crc kubenswrapper[4910]: I0226 22:45:09.859901 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac11e1f190fbc32e1ed3bd2a7e9a270c6c51b11f31359df37dafdd7c4407adb0"} err="failed to get container status \"ac11e1f190fbc32e1ed3bd2a7e9a270c6c51b11f31359df37dafdd7c4407adb0\": rpc error: code = NotFound desc = could not find container \"ac11e1f190fbc32e1ed3bd2a7e9a270c6c51b11f31359df37dafdd7c4407adb0\": container with ID starting with ac11e1f190fbc32e1ed3bd2a7e9a270c6c51b11f31359df37dafdd7c4407adb0 not found: ID does not exist" Feb 26 22:45:09 crc kubenswrapper[4910]: I0226 22:45:09.919345 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f" 
path="/var/lib/kubelet/pods/cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f/volumes" Feb 26 22:45:12 crc kubenswrapper[4910]: I0226 22:45:12.044901 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7krq"] Feb 26 22:45:12 crc kubenswrapper[4910]: I0226 22:45:12.746961 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b7krq" podUID="f7da9e09-829d-4b94-8571-e24927235b78" containerName="registry-server" containerID="cri-o://3013cf19bc647e9c71efe4f9cc0a4140d36dc08425e933947cd24033c02df753" gracePeriod=2 Feb 26 22:45:13 crc kubenswrapper[4910]: I0226 22:45:13.422633 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b7krq" Feb 26 22:45:13 crc kubenswrapper[4910]: I0226 22:45:13.605589 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7da9e09-829d-4b94-8571-e24927235b78-utilities\") pod \"f7da9e09-829d-4b94-8571-e24927235b78\" (UID: \"f7da9e09-829d-4b94-8571-e24927235b78\") " Feb 26 22:45:13 crc kubenswrapper[4910]: I0226 22:45:13.605705 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7da9e09-829d-4b94-8571-e24927235b78-catalog-content\") pod \"f7da9e09-829d-4b94-8571-e24927235b78\" (UID: \"f7da9e09-829d-4b94-8571-e24927235b78\") " Feb 26 22:45:13 crc kubenswrapper[4910]: I0226 22:45:13.605753 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq6nf\" (UniqueName: \"kubernetes.io/projected/f7da9e09-829d-4b94-8571-e24927235b78-kube-api-access-pq6nf\") pod \"f7da9e09-829d-4b94-8571-e24927235b78\" (UID: \"f7da9e09-829d-4b94-8571-e24927235b78\") " Feb 26 22:45:13 crc kubenswrapper[4910]: I0226 22:45:13.606610 4910 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7da9e09-829d-4b94-8571-e24927235b78-utilities" (OuterVolumeSpecName: "utilities") pod "f7da9e09-829d-4b94-8571-e24927235b78" (UID: "f7da9e09-829d-4b94-8571-e24927235b78"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:45:13 crc kubenswrapper[4910]: I0226 22:45:13.614534 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7da9e09-829d-4b94-8571-e24927235b78-kube-api-access-pq6nf" (OuterVolumeSpecName: "kube-api-access-pq6nf") pod "f7da9e09-829d-4b94-8571-e24927235b78" (UID: "f7da9e09-829d-4b94-8571-e24927235b78"). InnerVolumeSpecName "kube-api-access-pq6nf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:45:13 crc kubenswrapper[4910]: I0226 22:45:13.654371 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7da9e09-829d-4b94-8571-e24927235b78-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7da9e09-829d-4b94-8571-e24927235b78" (UID: "f7da9e09-829d-4b94-8571-e24927235b78"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:45:13 crc kubenswrapper[4910]: I0226 22:45:13.709189 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7da9e09-829d-4b94-8571-e24927235b78-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 22:45:13 crc kubenswrapper[4910]: I0226 22:45:13.709231 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7da9e09-829d-4b94-8571-e24927235b78-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 22:45:13 crc kubenswrapper[4910]: I0226 22:45:13.709246 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq6nf\" (UniqueName: \"kubernetes.io/projected/f7da9e09-829d-4b94-8571-e24927235b78-kube-api-access-pq6nf\") on node \"crc\" DevicePath \"\"" Feb 26 22:45:13 crc kubenswrapper[4910]: I0226 22:45:13.764360 4910 generic.go:334] "Generic (PLEG): container finished" podID="f7da9e09-829d-4b94-8571-e24927235b78" containerID="3013cf19bc647e9c71efe4f9cc0a4140d36dc08425e933947cd24033c02df753" exitCode=0 Feb 26 22:45:13 crc kubenswrapper[4910]: I0226 22:45:13.764430 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7krq" event={"ID":"f7da9e09-829d-4b94-8571-e24927235b78","Type":"ContainerDied","Data":"3013cf19bc647e9c71efe4f9cc0a4140d36dc08425e933947cd24033c02df753"} Feb 26 22:45:13 crc kubenswrapper[4910]: I0226 22:45:13.764480 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7krq" event={"ID":"f7da9e09-829d-4b94-8571-e24927235b78","Type":"ContainerDied","Data":"02b6fa7dbf267a42a480ea7f2bb13229075c996256d37e70c2ec0388d5f907be"} Feb 26 22:45:13 crc kubenswrapper[4910]: I0226 22:45:13.764523 4910 scope.go:117] "RemoveContainer" containerID="3013cf19bc647e9c71efe4f9cc0a4140d36dc08425e933947cd24033c02df753" Feb 26 22:45:13 crc kubenswrapper[4910]: I0226 
22:45:13.764761 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b7krq" Feb 26 22:45:13 crc kubenswrapper[4910]: I0226 22:45:13.808083 4910 scope.go:117] "RemoveContainer" containerID="d79170495039efa406740b711cf59ec9dc685a9f1257e1359bdc3bf82a6dc187" Feb 26 22:45:13 crc kubenswrapper[4910]: I0226 22:45:13.841041 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7krq"] Feb 26 22:45:13 crc kubenswrapper[4910]: I0226 22:45:13.849902 4910 scope.go:117] "RemoveContainer" containerID="1ab28d3d927fb411fe16e55a2b9a4979dbb4fbeb4fc63d3e9926b1862a3c4cb9" Feb 26 22:45:13 crc kubenswrapper[4910]: I0226 22:45:13.853665 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7krq"] Feb 26 22:45:13 crc kubenswrapper[4910]: I0226 22:45:13.929002 4910 scope.go:117] "RemoveContainer" containerID="3013cf19bc647e9c71efe4f9cc0a4140d36dc08425e933947cd24033c02df753" Feb 26 22:45:13 crc kubenswrapper[4910]: E0226 22:45:13.931435 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3013cf19bc647e9c71efe4f9cc0a4140d36dc08425e933947cd24033c02df753\": container with ID starting with 3013cf19bc647e9c71efe4f9cc0a4140d36dc08425e933947cd24033c02df753 not found: ID does not exist" containerID="3013cf19bc647e9c71efe4f9cc0a4140d36dc08425e933947cd24033c02df753" Feb 26 22:45:13 crc kubenswrapper[4910]: I0226 22:45:13.931508 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3013cf19bc647e9c71efe4f9cc0a4140d36dc08425e933947cd24033c02df753"} err="failed to get container status \"3013cf19bc647e9c71efe4f9cc0a4140d36dc08425e933947cd24033c02df753\": rpc error: code = NotFound desc = could not find container \"3013cf19bc647e9c71efe4f9cc0a4140d36dc08425e933947cd24033c02df753\": container with ID starting with 
3013cf19bc647e9c71efe4f9cc0a4140d36dc08425e933947cd24033c02df753 not found: ID does not exist" Feb 26 22:45:13 crc kubenswrapper[4910]: I0226 22:45:13.931557 4910 scope.go:117] "RemoveContainer" containerID="d79170495039efa406740b711cf59ec9dc685a9f1257e1359bdc3bf82a6dc187" Feb 26 22:45:13 crc kubenswrapper[4910]: E0226 22:45:13.932433 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d79170495039efa406740b711cf59ec9dc685a9f1257e1359bdc3bf82a6dc187\": container with ID starting with d79170495039efa406740b711cf59ec9dc685a9f1257e1359bdc3bf82a6dc187 not found: ID does not exist" containerID="d79170495039efa406740b711cf59ec9dc685a9f1257e1359bdc3bf82a6dc187" Feb 26 22:45:13 crc kubenswrapper[4910]: I0226 22:45:13.932479 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d79170495039efa406740b711cf59ec9dc685a9f1257e1359bdc3bf82a6dc187"} err="failed to get container status \"d79170495039efa406740b711cf59ec9dc685a9f1257e1359bdc3bf82a6dc187\": rpc error: code = NotFound desc = could not find container \"d79170495039efa406740b711cf59ec9dc685a9f1257e1359bdc3bf82a6dc187\": container with ID starting with d79170495039efa406740b711cf59ec9dc685a9f1257e1359bdc3bf82a6dc187 not found: ID does not exist" Feb 26 22:45:13 crc kubenswrapper[4910]: I0226 22:45:13.932508 4910 scope.go:117] "RemoveContainer" containerID="1ab28d3d927fb411fe16e55a2b9a4979dbb4fbeb4fc63d3e9926b1862a3c4cb9" Feb 26 22:45:13 crc kubenswrapper[4910]: E0226 22:45:13.933065 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ab28d3d927fb411fe16e55a2b9a4979dbb4fbeb4fc63d3e9926b1862a3c4cb9\": container with ID starting with 1ab28d3d927fb411fe16e55a2b9a4979dbb4fbeb4fc63d3e9926b1862a3c4cb9 not found: ID does not exist" containerID="1ab28d3d927fb411fe16e55a2b9a4979dbb4fbeb4fc63d3e9926b1862a3c4cb9" Feb 26 22:45:13 crc 
kubenswrapper[4910]: I0226 22:45:13.933127 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ab28d3d927fb411fe16e55a2b9a4979dbb4fbeb4fc63d3e9926b1862a3c4cb9"} err="failed to get container status \"1ab28d3d927fb411fe16e55a2b9a4979dbb4fbeb4fc63d3e9926b1862a3c4cb9\": rpc error: code = NotFound desc = could not find container \"1ab28d3d927fb411fe16e55a2b9a4979dbb4fbeb4fc63d3e9926b1862a3c4cb9\": container with ID starting with 1ab28d3d927fb411fe16e55a2b9a4979dbb4fbeb4fc63d3e9926b1862a3c4cb9 not found: ID does not exist" Feb 26 22:45:13 crc kubenswrapper[4910]: I0226 22:45:13.942279 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7da9e09-829d-4b94-8571-e24927235b78" path="/var/lib/kubelet/pods/f7da9e09-829d-4b94-8571-e24927235b78/volumes" Feb 26 22:45:25 crc kubenswrapper[4910]: I0226 22:45:25.727388 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:45:25 crc kubenswrapper[4910]: I0226 22:45:25.727964 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:45:25 crc kubenswrapper[4910]: I0226 22:45:25.728019 4910 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" Feb 26 22:45:25 crc kubenswrapper[4910]: I0226 22:45:25.728817 4910 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"bf4404a57b9e158f4c76a2539469a0f43575a0b95fd644e44dec95e0304a9ede"} pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 22:45:25 crc kubenswrapper[4910]: I0226 22:45:25.728883 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" containerID="cri-o://bf4404a57b9e158f4c76a2539469a0f43575a0b95fd644e44dec95e0304a9ede" gracePeriod=600 Feb 26 22:45:25 crc kubenswrapper[4910]: I0226 22:45:25.930535 4910 generic.go:334] "Generic (PLEG): container finished" podID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerID="bf4404a57b9e158f4c76a2539469a0f43575a0b95fd644e44dec95e0304a9ede" exitCode=0 Feb 26 22:45:25 crc kubenswrapper[4910]: I0226 22:45:25.930943 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerDied","Data":"bf4404a57b9e158f4c76a2539469a0f43575a0b95fd644e44dec95e0304a9ede"} Feb 26 22:45:25 crc kubenswrapper[4910]: I0226 22:45:25.930988 4910 scope.go:117] "RemoveContainer" containerID="05e6ec27d91032ac9f8be2bb836087e6b0c6089147325a8f32a1e5e548a5ce20" Feb 26 22:45:26 crc kubenswrapper[4910]: I0226 22:45:26.941615 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerStarted","Data":"ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526"} Feb 26 22:45:39 crc kubenswrapper[4910]: I0226 22:45:39.567915 4910 scope.go:117] "RemoveContainer" containerID="d7541c5a27dfa8fced56b1bff62df57958de7cd11dd4ec435a8bbb91dee05ea6" Feb 26 22:46:00 crc kubenswrapper[4910]: I0226 
22:46:00.161669 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535766-tc9lz"] Feb 26 22:46:00 crc kubenswrapper[4910]: E0226 22:46:00.162927 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7da9e09-829d-4b94-8571-e24927235b78" containerName="extract-content" Feb 26 22:46:00 crc kubenswrapper[4910]: I0226 22:46:00.162948 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7da9e09-829d-4b94-8571-e24927235b78" containerName="extract-content" Feb 26 22:46:00 crc kubenswrapper[4910]: E0226 22:46:00.162964 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7da9e09-829d-4b94-8571-e24927235b78" containerName="registry-server" Feb 26 22:46:00 crc kubenswrapper[4910]: I0226 22:46:00.162976 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7da9e09-829d-4b94-8571-e24927235b78" containerName="registry-server" Feb 26 22:46:00 crc kubenswrapper[4910]: E0226 22:46:00.163002 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f" containerName="extract-content" Feb 26 22:46:00 crc kubenswrapper[4910]: I0226 22:46:00.163015 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f" containerName="extract-content" Feb 26 22:46:00 crc kubenswrapper[4910]: E0226 22:46:00.163054 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1155f6-03b8-45fd-b083-a9e4b480ae55" containerName="collect-profiles" Feb 26 22:46:00 crc kubenswrapper[4910]: I0226 22:46:00.163067 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1155f6-03b8-45fd-b083-a9e4b480ae55" containerName="collect-profiles" Feb 26 22:46:00 crc kubenswrapper[4910]: E0226 22:46:00.163101 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7da9e09-829d-4b94-8571-e24927235b78" containerName="extract-utilities" Feb 26 22:46:00 crc kubenswrapper[4910]: I0226 22:46:00.163114 4910 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f7da9e09-829d-4b94-8571-e24927235b78" containerName="extract-utilities" Feb 26 22:46:00 crc kubenswrapper[4910]: E0226 22:46:00.163144 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f" containerName="registry-server" Feb 26 22:46:00 crc kubenswrapper[4910]: I0226 22:46:00.163155 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f" containerName="registry-server" Feb 26 22:46:00 crc kubenswrapper[4910]: E0226 22:46:00.163202 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f" containerName="extract-utilities" Feb 26 22:46:00 crc kubenswrapper[4910]: I0226 22:46:00.163214 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f" containerName="extract-utilities" Feb 26 22:46:00 crc kubenswrapper[4910]: I0226 22:46:00.163599 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf15ea2c-f8d0-4192-8ff6-ee9ce282ad4f" containerName="registry-server" Feb 26 22:46:00 crc kubenswrapper[4910]: I0226 22:46:00.163624 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e1155f6-03b8-45fd-b083-a9e4b480ae55" containerName="collect-profiles" Feb 26 22:46:00 crc kubenswrapper[4910]: I0226 22:46:00.163646 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7da9e09-829d-4b94-8571-e24927235b78" containerName="registry-server" Feb 26 22:46:00 crc kubenswrapper[4910]: I0226 22:46:00.164884 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535766-tc9lz" Feb 26 22:46:00 crc kubenswrapper[4910]: I0226 22:46:00.166986 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 22:46:00 crc kubenswrapper[4910]: I0226 22:46:00.167387 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 22:46:00 crc kubenswrapper[4910]: I0226 22:46:00.168392 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 22:46:00 crc kubenswrapper[4910]: I0226 22:46:00.174041 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535766-tc9lz"] Feb 26 22:46:00 crc kubenswrapper[4910]: I0226 22:46:00.273511 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mndbj\" (UniqueName: \"kubernetes.io/projected/3c67a2c7-bad2-416b-ba3a-9b81e770990e-kube-api-access-mndbj\") pod \"auto-csr-approver-29535766-tc9lz\" (UID: \"3c67a2c7-bad2-416b-ba3a-9b81e770990e\") " pod="openshift-infra/auto-csr-approver-29535766-tc9lz" Feb 26 22:46:00 crc kubenswrapper[4910]: I0226 22:46:00.375411 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mndbj\" (UniqueName: \"kubernetes.io/projected/3c67a2c7-bad2-416b-ba3a-9b81e770990e-kube-api-access-mndbj\") pod \"auto-csr-approver-29535766-tc9lz\" (UID: \"3c67a2c7-bad2-416b-ba3a-9b81e770990e\") " pod="openshift-infra/auto-csr-approver-29535766-tc9lz" Feb 26 22:46:00 crc kubenswrapper[4910]: I0226 22:46:00.403386 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mndbj\" (UniqueName: \"kubernetes.io/projected/3c67a2c7-bad2-416b-ba3a-9b81e770990e-kube-api-access-mndbj\") pod \"auto-csr-approver-29535766-tc9lz\" (UID: \"3c67a2c7-bad2-416b-ba3a-9b81e770990e\") " 
pod="openshift-infra/auto-csr-approver-29535766-tc9lz" Feb 26 22:46:00 crc kubenswrapper[4910]: I0226 22:46:00.498073 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535766-tc9lz" Feb 26 22:46:01 crc kubenswrapper[4910]: I0226 22:46:01.148635 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535766-tc9lz"] Feb 26 22:46:01 crc kubenswrapper[4910]: I0226 22:46:01.770676 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535766-tc9lz" event={"ID":"3c67a2c7-bad2-416b-ba3a-9b81e770990e","Type":"ContainerStarted","Data":"0268c6cb9b241c7d4fb8e120d46ff961cb31ba4a4fdfef527e2dddfa6a961321"} Feb 26 22:46:02 crc kubenswrapper[4910]: I0226 22:46:02.782977 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535766-tc9lz" event={"ID":"3c67a2c7-bad2-416b-ba3a-9b81e770990e","Type":"ContainerStarted","Data":"c57ffa546cf87c80392b341cfcf60f6afd0af9ec0d8bc5803110a843e3067e1a"} Feb 26 22:46:02 crc kubenswrapper[4910]: I0226 22:46:02.801899 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535766-tc9lz" podStartSLOduration=1.8387098480000001 podStartE2EDuration="2.801882576s" podCreationTimestamp="2026-02-26 22:46:00 +0000 UTC" firstStartedPulling="2026-02-26 22:46:01.164839903 +0000 UTC m=+3046.244330444" lastFinishedPulling="2026-02-26 22:46:02.128012621 +0000 UTC m=+3047.207503172" observedRunningTime="2026-02-26 22:46:02.798397222 +0000 UTC m=+3047.877887773" watchObservedRunningTime="2026-02-26 22:46:02.801882576 +0000 UTC m=+3047.881373117" Feb 26 22:46:03 crc kubenswrapper[4910]: I0226 22:46:03.824344 4910 generic.go:334] "Generic (PLEG): container finished" podID="3c67a2c7-bad2-416b-ba3a-9b81e770990e" containerID="c57ffa546cf87c80392b341cfcf60f6afd0af9ec0d8bc5803110a843e3067e1a" exitCode=0 Feb 26 22:46:03 crc 
kubenswrapper[4910]: I0226 22:46:03.824541 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535766-tc9lz" event={"ID":"3c67a2c7-bad2-416b-ba3a-9b81e770990e","Type":"ContainerDied","Data":"c57ffa546cf87c80392b341cfcf60f6afd0af9ec0d8bc5803110a843e3067e1a"} Feb 26 22:46:05 crc kubenswrapper[4910]: I0226 22:46:05.299148 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535766-tc9lz" Feb 26 22:46:05 crc kubenswrapper[4910]: I0226 22:46:05.395597 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mndbj\" (UniqueName: \"kubernetes.io/projected/3c67a2c7-bad2-416b-ba3a-9b81e770990e-kube-api-access-mndbj\") pod \"3c67a2c7-bad2-416b-ba3a-9b81e770990e\" (UID: \"3c67a2c7-bad2-416b-ba3a-9b81e770990e\") " Feb 26 22:46:05 crc kubenswrapper[4910]: I0226 22:46:05.403523 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c67a2c7-bad2-416b-ba3a-9b81e770990e-kube-api-access-mndbj" (OuterVolumeSpecName: "kube-api-access-mndbj") pod "3c67a2c7-bad2-416b-ba3a-9b81e770990e" (UID: "3c67a2c7-bad2-416b-ba3a-9b81e770990e"). InnerVolumeSpecName "kube-api-access-mndbj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:46:05 crc kubenswrapper[4910]: I0226 22:46:05.498643 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mndbj\" (UniqueName: \"kubernetes.io/projected/3c67a2c7-bad2-416b-ba3a-9b81e770990e-kube-api-access-mndbj\") on node \"crc\" DevicePath \"\"" Feb 26 22:46:05 crc kubenswrapper[4910]: I0226 22:46:05.854429 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535766-tc9lz" event={"ID":"3c67a2c7-bad2-416b-ba3a-9b81e770990e","Type":"ContainerDied","Data":"0268c6cb9b241c7d4fb8e120d46ff961cb31ba4a4fdfef527e2dddfa6a961321"} Feb 26 22:46:05 crc kubenswrapper[4910]: I0226 22:46:05.854483 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0268c6cb9b241c7d4fb8e120d46ff961cb31ba4a4fdfef527e2dddfa6a961321" Feb 26 22:46:05 crc kubenswrapper[4910]: I0226 22:46:05.854536 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535766-tc9lz" Feb 26 22:46:05 crc kubenswrapper[4910]: I0226 22:46:05.887068 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535760-7zt9x"] Feb 26 22:46:05 crc kubenswrapper[4910]: I0226 22:46:05.899098 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535760-7zt9x"] Feb 26 22:46:05 crc kubenswrapper[4910]: I0226 22:46:05.921384 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9899786c-c81c-416a-bded-79c98b9240fa" path="/var/lib/kubelet/pods/9899786c-c81c-416a-bded-79c98b9240fa/volumes" Feb 26 22:46:39 crc kubenswrapper[4910]: I0226 22:46:39.708123 4910 scope.go:117] "RemoveContainer" containerID="feb000216a7779207592fcfdd87ce29b20747033f65306c85d6bf5f5e892ed3d" Feb 26 22:47:55 crc kubenswrapper[4910]: I0226 22:47:55.728084 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:47:55 crc kubenswrapper[4910]: I0226 22:47:55.729218 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:48:00 crc kubenswrapper[4910]: I0226 22:48:00.165721 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535768-nbr6s"] Feb 26 22:48:00 crc kubenswrapper[4910]: E0226 22:48:00.166855 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c67a2c7-bad2-416b-ba3a-9b81e770990e" containerName="oc" Feb 26 22:48:00 crc kubenswrapper[4910]: I0226 22:48:00.166878 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c67a2c7-bad2-416b-ba3a-9b81e770990e" containerName="oc" Feb 26 22:48:00 crc kubenswrapper[4910]: I0226 22:48:00.167303 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c67a2c7-bad2-416b-ba3a-9b81e770990e" containerName="oc" Feb 26 22:48:00 crc kubenswrapper[4910]: I0226 22:48:00.168564 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535768-nbr6s" Feb 26 22:48:00 crc kubenswrapper[4910]: I0226 22:48:00.171582 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 22:48:00 crc kubenswrapper[4910]: I0226 22:48:00.174714 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 22:48:00 crc kubenswrapper[4910]: I0226 22:48:00.177057 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 22:48:00 crc kubenswrapper[4910]: I0226 22:48:00.183834 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535768-nbr6s"] Feb 26 22:48:00 crc kubenswrapper[4910]: I0226 22:48:00.213073 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4qnx\" (UniqueName: \"kubernetes.io/projected/deb17ae0-31dc-4ef5-b296-56e2c1980948-kube-api-access-s4qnx\") pod \"auto-csr-approver-29535768-nbr6s\" (UID: \"deb17ae0-31dc-4ef5-b296-56e2c1980948\") " pod="openshift-infra/auto-csr-approver-29535768-nbr6s" Feb 26 22:48:00 crc kubenswrapper[4910]: I0226 22:48:00.315195 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4qnx\" (UniqueName: \"kubernetes.io/projected/deb17ae0-31dc-4ef5-b296-56e2c1980948-kube-api-access-s4qnx\") pod \"auto-csr-approver-29535768-nbr6s\" (UID: \"deb17ae0-31dc-4ef5-b296-56e2c1980948\") " pod="openshift-infra/auto-csr-approver-29535768-nbr6s" Feb 26 22:48:00 crc kubenswrapper[4910]: I0226 22:48:00.347239 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4qnx\" (UniqueName: \"kubernetes.io/projected/deb17ae0-31dc-4ef5-b296-56e2c1980948-kube-api-access-s4qnx\") pod \"auto-csr-approver-29535768-nbr6s\" (UID: \"deb17ae0-31dc-4ef5-b296-56e2c1980948\") " 
pod="openshift-infra/auto-csr-approver-29535768-nbr6s" Feb 26 22:48:00 crc kubenswrapper[4910]: I0226 22:48:00.503529 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535768-nbr6s" Feb 26 22:48:01 crc kubenswrapper[4910]: I0226 22:48:01.121724 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535768-nbr6s"] Feb 26 22:48:01 crc kubenswrapper[4910]: I0226 22:48:01.241011 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535768-nbr6s" event={"ID":"deb17ae0-31dc-4ef5-b296-56e2c1980948","Type":"ContainerStarted","Data":"c1dba9206174c45ff36026307657a8ff419914819bc78aa4c89fb9a9dd5e3f74"} Feb 26 22:48:03 crc kubenswrapper[4910]: I0226 22:48:03.266740 4910 generic.go:334] "Generic (PLEG): container finished" podID="deb17ae0-31dc-4ef5-b296-56e2c1980948" containerID="5b849f17460b7d7230c591a0d2993eac040d0792322cfe0223f5c2d280b6b888" exitCode=0 Feb 26 22:48:03 crc kubenswrapper[4910]: I0226 22:48:03.266815 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535768-nbr6s" event={"ID":"deb17ae0-31dc-4ef5-b296-56e2c1980948","Type":"ContainerDied","Data":"5b849f17460b7d7230c591a0d2993eac040d0792322cfe0223f5c2d280b6b888"} Feb 26 22:48:04 crc kubenswrapper[4910]: I0226 22:48:04.788866 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535768-nbr6s" Feb 26 22:48:04 crc kubenswrapper[4910]: I0226 22:48:04.841794 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4qnx\" (UniqueName: \"kubernetes.io/projected/deb17ae0-31dc-4ef5-b296-56e2c1980948-kube-api-access-s4qnx\") pod \"deb17ae0-31dc-4ef5-b296-56e2c1980948\" (UID: \"deb17ae0-31dc-4ef5-b296-56e2c1980948\") " Feb 26 22:48:04 crc kubenswrapper[4910]: I0226 22:48:04.851563 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deb17ae0-31dc-4ef5-b296-56e2c1980948-kube-api-access-s4qnx" (OuterVolumeSpecName: "kube-api-access-s4qnx") pod "deb17ae0-31dc-4ef5-b296-56e2c1980948" (UID: "deb17ae0-31dc-4ef5-b296-56e2c1980948"). InnerVolumeSpecName "kube-api-access-s4qnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:48:04 crc kubenswrapper[4910]: I0226 22:48:04.944290 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4qnx\" (UniqueName: \"kubernetes.io/projected/deb17ae0-31dc-4ef5-b296-56e2c1980948-kube-api-access-s4qnx\") on node \"crc\" DevicePath \"\"" Feb 26 22:48:05 crc kubenswrapper[4910]: I0226 22:48:05.290334 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535768-nbr6s" event={"ID":"deb17ae0-31dc-4ef5-b296-56e2c1980948","Type":"ContainerDied","Data":"c1dba9206174c45ff36026307657a8ff419914819bc78aa4c89fb9a9dd5e3f74"} Feb 26 22:48:05 crc kubenswrapper[4910]: I0226 22:48:05.290659 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1dba9206174c45ff36026307657a8ff419914819bc78aa4c89fb9a9dd5e3f74" Feb 26 22:48:05 crc kubenswrapper[4910]: I0226 22:48:05.290423 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535768-nbr6s" Feb 26 22:48:05 crc kubenswrapper[4910]: I0226 22:48:05.883468 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535762-ghrnr"] Feb 26 22:48:05 crc kubenswrapper[4910]: I0226 22:48:05.896092 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535762-ghrnr"] Feb 26 22:48:05 crc kubenswrapper[4910]: I0226 22:48:05.932860 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba3cf2a4-0301-4aa1-99d3-dbd68d5b522c" path="/var/lib/kubelet/pods/ba3cf2a4-0301-4aa1-99d3-dbd68d5b522c/volumes" Feb 26 22:48:25 crc kubenswrapper[4910]: I0226 22:48:25.727568 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:48:25 crc kubenswrapper[4910]: I0226 22:48:25.728017 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:48:39 crc kubenswrapper[4910]: I0226 22:48:39.836998 4910 scope.go:117] "RemoveContainer" containerID="b19499347bd724161b1c628b04f58854267f581744d29bf70b890c422ba36760" Feb 26 22:48:55 crc kubenswrapper[4910]: I0226 22:48:55.727747 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:48:55 crc kubenswrapper[4910]: 
I0226 22:48:55.728646 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:48:55 crc kubenswrapper[4910]: I0226 22:48:55.728735 4910 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" Feb 26 22:48:55 crc kubenswrapper[4910]: I0226 22:48:55.729997 4910 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526"} pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 22:48:55 crc kubenswrapper[4910]: I0226 22:48:55.730156 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" containerID="cri-o://ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526" gracePeriod=600 Feb 26 22:48:55 crc kubenswrapper[4910]: E0226 22:48:55.890485 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:48:55 crc kubenswrapper[4910]: I0226 22:48:55.904948 4910 generic.go:334] "Generic (PLEG): container finished" 
podID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerID="ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526" exitCode=0 Feb 26 22:48:55 crc kubenswrapper[4910]: I0226 22:48:55.922156 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerDied","Data":"ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526"} Feb 26 22:48:55 crc kubenswrapper[4910]: I0226 22:48:55.922264 4910 scope.go:117] "RemoveContainer" containerID="bf4404a57b9e158f4c76a2539469a0f43575a0b95fd644e44dec95e0304a9ede" Feb 26 22:48:55 crc kubenswrapper[4910]: I0226 22:48:55.923001 4910 scope.go:117] "RemoveContainer" containerID="ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526" Feb 26 22:48:55 crc kubenswrapper[4910]: E0226 22:48:55.923274 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:49:06 crc kubenswrapper[4910]: I0226 22:49:06.902615 4910 scope.go:117] "RemoveContainer" containerID="ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526" Feb 26 22:49:06 crc kubenswrapper[4910]: E0226 22:49:06.903633 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 
22:49:20 crc kubenswrapper[4910]: I0226 22:49:20.901549 4910 scope.go:117] "RemoveContainer" containerID="ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526" Feb 26 22:49:20 crc kubenswrapper[4910]: E0226 22:49:20.902296 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:49:35 crc kubenswrapper[4910]: I0226 22:49:35.921689 4910 scope.go:117] "RemoveContainer" containerID="ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526" Feb 26 22:49:35 crc kubenswrapper[4910]: E0226 22:49:35.922871 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:49:48 crc kubenswrapper[4910]: I0226 22:49:48.901068 4910 scope.go:117] "RemoveContainer" containerID="ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526" Feb 26 22:49:48 crc kubenswrapper[4910]: E0226 22:49:48.901833 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" 
podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:49:59 crc kubenswrapper[4910]: I0226 22:49:59.902289 4910 scope.go:117] "RemoveContainer" containerID="ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526" Feb 26 22:49:59 crc kubenswrapper[4910]: E0226 22:49:59.903495 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:50:00 crc kubenswrapper[4910]: I0226 22:50:00.197050 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535770-4bwth"] Feb 26 22:50:00 crc kubenswrapper[4910]: E0226 22:50:00.198322 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb17ae0-31dc-4ef5-b296-56e2c1980948" containerName="oc" Feb 26 22:50:00 crc kubenswrapper[4910]: I0226 22:50:00.198365 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb17ae0-31dc-4ef5-b296-56e2c1980948" containerName="oc" Feb 26 22:50:00 crc kubenswrapper[4910]: I0226 22:50:00.198864 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="deb17ae0-31dc-4ef5-b296-56e2c1980948" containerName="oc" Feb 26 22:50:00 crc kubenswrapper[4910]: I0226 22:50:00.200195 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535770-4bwth" Feb 26 22:50:00 crc kubenswrapper[4910]: I0226 22:50:00.204403 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 22:50:00 crc kubenswrapper[4910]: I0226 22:50:00.204853 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 22:50:00 crc kubenswrapper[4910]: I0226 22:50:00.207759 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 22:50:00 crc kubenswrapper[4910]: I0226 22:50:00.208918 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535770-4bwth"] Feb 26 22:50:00 crc kubenswrapper[4910]: I0226 22:50:00.262307 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89bgd\" (UniqueName: \"kubernetes.io/projected/7b064433-c619-497a-b40b-fff570eb1331-kube-api-access-89bgd\") pod \"auto-csr-approver-29535770-4bwth\" (UID: \"7b064433-c619-497a-b40b-fff570eb1331\") " pod="openshift-infra/auto-csr-approver-29535770-4bwth" Feb 26 22:50:00 crc kubenswrapper[4910]: I0226 22:50:00.375576 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89bgd\" (UniqueName: \"kubernetes.io/projected/7b064433-c619-497a-b40b-fff570eb1331-kube-api-access-89bgd\") pod \"auto-csr-approver-29535770-4bwth\" (UID: \"7b064433-c619-497a-b40b-fff570eb1331\") " pod="openshift-infra/auto-csr-approver-29535770-4bwth" Feb 26 22:50:00 crc kubenswrapper[4910]: I0226 22:50:00.401698 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89bgd\" (UniqueName: \"kubernetes.io/projected/7b064433-c619-497a-b40b-fff570eb1331-kube-api-access-89bgd\") pod \"auto-csr-approver-29535770-4bwth\" (UID: \"7b064433-c619-497a-b40b-fff570eb1331\") " 
pod="openshift-infra/auto-csr-approver-29535770-4bwth" Feb 26 22:50:00 crc kubenswrapper[4910]: I0226 22:50:00.556143 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535770-4bwth" Feb 26 22:50:01 crc kubenswrapper[4910]: I0226 22:50:01.100188 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535770-4bwth"] Feb 26 22:50:01 crc kubenswrapper[4910]: I0226 22:50:01.101359 4910 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 22:50:01 crc kubenswrapper[4910]: I0226 22:50:01.739138 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535770-4bwth" event={"ID":"7b064433-c619-497a-b40b-fff570eb1331","Type":"ContainerStarted","Data":"348713d78ad6961cf74b8f452b1cf83f6b03ba72856af062aa6b11d0c12c2cad"} Feb 26 22:50:02 crc kubenswrapper[4910]: I0226 22:50:02.757660 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535770-4bwth" event={"ID":"7b064433-c619-497a-b40b-fff570eb1331","Type":"ContainerStarted","Data":"eb19e770bad290d41cc0b95dec8a311e76eb8fab9d99fee311ed7dd68c84de04"} Feb 26 22:50:02 crc kubenswrapper[4910]: I0226 22:50:02.785447 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535770-4bwth" podStartSLOduration=1.7892007460000001 podStartE2EDuration="2.785425792s" podCreationTimestamp="2026-02-26 22:50:00 +0000 UTC" firstStartedPulling="2026-02-26 22:50:01.100709422 +0000 UTC m=+3286.180200003" lastFinishedPulling="2026-02-26 22:50:02.096934508 +0000 UTC m=+3287.176425049" observedRunningTime="2026-02-26 22:50:02.778708869 +0000 UTC m=+3287.858199420" watchObservedRunningTime="2026-02-26 22:50:02.785425792 +0000 UTC m=+3287.864916343" Feb 26 22:50:03 crc kubenswrapper[4910]: I0226 22:50:03.772290 4910 generic.go:334] "Generic (PLEG): container 
finished" podID="7b064433-c619-497a-b40b-fff570eb1331" containerID="eb19e770bad290d41cc0b95dec8a311e76eb8fab9d99fee311ed7dd68c84de04" exitCode=0 Feb 26 22:50:03 crc kubenswrapper[4910]: I0226 22:50:03.772935 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535770-4bwth" event={"ID":"7b064433-c619-497a-b40b-fff570eb1331","Type":"ContainerDied","Data":"eb19e770bad290d41cc0b95dec8a311e76eb8fab9d99fee311ed7dd68c84de04"} Feb 26 22:50:05 crc kubenswrapper[4910]: I0226 22:50:05.313815 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535770-4bwth" Feb 26 22:50:05 crc kubenswrapper[4910]: I0226 22:50:05.403452 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89bgd\" (UniqueName: \"kubernetes.io/projected/7b064433-c619-497a-b40b-fff570eb1331-kube-api-access-89bgd\") pod \"7b064433-c619-497a-b40b-fff570eb1331\" (UID: \"7b064433-c619-497a-b40b-fff570eb1331\") " Feb 26 22:50:05 crc kubenswrapper[4910]: I0226 22:50:05.415557 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b064433-c619-497a-b40b-fff570eb1331-kube-api-access-89bgd" (OuterVolumeSpecName: "kube-api-access-89bgd") pod "7b064433-c619-497a-b40b-fff570eb1331" (UID: "7b064433-c619-497a-b40b-fff570eb1331"). InnerVolumeSpecName "kube-api-access-89bgd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:50:05 crc kubenswrapper[4910]: I0226 22:50:05.506278 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89bgd\" (UniqueName: \"kubernetes.io/projected/7b064433-c619-497a-b40b-fff570eb1331-kube-api-access-89bgd\") on node \"crc\" DevicePath \"\"" Feb 26 22:50:05 crc kubenswrapper[4910]: I0226 22:50:05.802215 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535770-4bwth" event={"ID":"7b064433-c619-497a-b40b-fff570eb1331","Type":"ContainerDied","Data":"348713d78ad6961cf74b8f452b1cf83f6b03ba72856af062aa6b11d0c12c2cad"} Feb 26 22:50:05 crc kubenswrapper[4910]: I0226 22:50:05.802275 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="348713d78ad6961cf74b8f452b1cf83f6b03ba72856af062aa6b11d0c12c2cad" Feb 26 22:50:05 crc kubenswrapper[4910]: I0226 22:50:05.802348 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535770-4bwth" Feb 26 22:50:05 crc kubenswrapper[4910]: I0226 22:50:05.862950 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535764-vrnn8"] Feb 26 22:50:05 crc kubenswrapper[4910]: I0226 22:50:05.871871 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535764-vrnn8"] Feb 26 22:50:05 crc kubenswrapper[4910]: I0226 22:50:05.912574 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4875b3ff-72a4-4075-9d13-210476d09ddf" path="/var/lib/kubelet/pods/4875b3ff-72a4-4075-9d13-210476d09ddf/volumes" Feb 26 22:50:14 crc kubenswrapper[4910]: I0226 22:50:14.901777 4910 scope.go:117] "RemoveContainer" containerID="ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526" Feb 26 22:50:14 crc kubenswrapper[4910]: E0226 22:50:14.902858 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:50:29 crc kubenswrapper[4910]: I0226 22:50:29.903416 4910 scope.go:117] "RemoveContainer" containerID="ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526" Feb 26 22:50:29 crc kubenswrapper[4910]: E0226 22:50:29.904713 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:50:40 crc kubenswrapper[4910]: I0226 22:50:40.014224 4910 scope.go:117] "RemoveContainer" containerID="25325538696b793c6b180155afd16dfb0b613eb93958ab162a8b4bfb3bb52f0b" Feb 26 22:50:41 crc kubenswrapper[4910]: I0226 22:50:41.902662 4910 scope.go:117] "RemoveContainer" containerID="ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526" Feb 26 22:50:41 crc kubenswrapper[4910]: E0226 22:50:41.904364 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:50:57 crc kubenswrapper[4910]: I0226 22:50:57.903274 4910 scope.go:117] "RemoveContainer" 
containerID="ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526" Feb 26 22:50:57 crc kubenswrapper[4910]: E0226 22:50:57.904467 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:51:07 crc kubenswrapper[4910]: I0226 22:51:07.479201 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h89rc"] Feb 26 22:51:07 crc kubenswrapper[4910]: E0226 22:51:07.480108 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b064433-c619-497a-b40b-fff570eb1331" containerName="oc" Feb 26 22:51:07 crc kubenswrapper[4910]: I0226 22:51:07.480121 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b064433-c619-497a-b40b-fff570eb1331" containerName="oc" Feb 26 22:51:07 crc kubenswrapper[4910]: I0226 22:51:07.480337 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b064433-c619-497a-b40b-fff570eb1331" containerName="oc" Feb 26 22:51:07 crc kubenswrapper[4910]: I0226 22:51:07.481839 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h89rc" Feb 26 22:51:07 crc kubenswrapper[4910]: I0226 22:51:07.499106 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h89rc"] Feb 26 22:51:07 crc kubenswrapper[4910]: I0226 22:51:07.501354 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e895875e-1726-45be-95ae-ccf7c8656f92-catalog-content\") pod \"community-operators-h89rc\" (UID: \"e895875e-1726-45be-95ae-ccf7c8656f92\") " pod="openshift-marketplace/community-operators-h89rc" Feb 26 22:51:07 crc kubenswrapper[4910]: I0226 22:51:07.501532 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xrr9\" (UniqueName: \"kubernetes.io/projected/e895875e-1726-45be-95ae-ccf7c8656f92-kube-api-access-5xrr9\") pod \"community-operators-h89rc\" (UID: \"e895875e-1726-45be-95ae-ccf7c8656f92\") " pod="openshift-marketplace/community-operators-h89rc" Feb 26 22:51:07 crc kubenswrapper[4910]: I0226 22:51:07.501668 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e895875e-1726-45be-95ae-ccf7c8656f92-utilities\") pod \"community-operators-h89rc\" (UID: \"e895875e-1726-45be-95ae-ccf7c8656f92\") " pod="openshift-marketplace/community-operators-h89rc" Feb 26 22:51:07 crc kubenswrapper[4910]: I0226 22:51:07.604072 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e895875e-1726-45be-95ae-ccf7c8656f92-catalog-content\") pod \"community-operators-h89rc\" (UID: \"e895875e-1726-45be-95ae-ccf7c8656f92\") " pod="openshift-marketplace/community-operators-h89rc" Feb 26 22:51:07 crc kubenswrapper[4910]: I0226 22:51:07.604323 4910 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5xrr9\" (UniqueName: \"kubernetes.io/projected/e895875e-1726-45be-95ae-ccf7c8656f92-kube-api-access-5xrr9\") pod \"community-operators-h89rc\" (UID: \"e895875e-1726-45be-95ae-ccf7c8656f92\") " pod="openshift-marketplace/community-operators-h89rc" Feb 26 22:51:07 crc kubenswrapper[4910]: I0226 22:51:07.604489 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e895875e-1726-45be-95ae-ccf7c8656f92-utilities\") pod \"community-operators-h89rc\" (UID: \"e895875e-1726-45be-95ae-ccf7c8656f92\") " pod="openshift-marketplace/community-operators-h89rc" Feb 26 22:51:07 crc kubenswrapper[4910]: I0226 22:51:07.604598 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e895875e-1726-45be-95ae-ccf7c8656f92-catalog-content\") pod \"community-operators-h89rc\" (UID: \"e895875e-1726-45be-95ae-ccf7c8656f92\") " pod="openshift-marketplace/community-operators-h89rc" Feb 26 22:51:07 crc kubenswrapper[4910]: I0226 22:51:07.604913 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e895875e-1726-45be-95ae-ccf7c8656f92-utilities\") pod \"community-operators-h89rc\" (UID: \"e895875e-1726-45be-95ae-ccf7c8656f92\") " pod="openshift-marketplace/community-operators-h89rc" Feb 26 22:51:07 crc kubenswrapper[4910]: I0226 22:51:07.634256 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xrr9\" (UniqueName: \"kubernetes.io/projected/e895875e-1726-45be-95ae-ccf7c8656f92-kube-api-access-5xrr9\") pod \"community-operators-h89rc\" (UID: \"e895875e-1726-45be-95ae-ccf7c8656f92\") " pod="openshift-marketplace/community-operators-h89rc" Feb 26 22:51:07 crc kubenswrapper[4910]: I0226 22:51:07.811478 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h89rc" Feb 26 22:51:08 crc kubenswrapper[4910]: I0226 22:51:08.314202 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h89rc"] Feb 26 22:51:08 crc kubenswrapper[4910]: I0226 22:51:08.892787 4910 generic.go:334] "Generic (PLEG): container finished" podID="e895875e-1726-45be-95ae-ccf7c8656f92" containerID="b61106d6d0e17e03d25fb0f6f8abdb2ac3e11d735e6a409c4d11672036bd5df3" exitCode=0 Feb 26 22:51:08 crc kubenswrapper[4910]: I0226 22:51:08.892868 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h89rc" event={"ID":"e895875e-1726-45be-95ae-ccf7c8656f92","Type":"ContainerDied","Data":"b61106d6d0e17e03d25fb0f6f8abdb2ac3e11d735e6a409c4d11672036bd5df3"} Feb 26 22:51:08 crc kubenswrapper[4910]: I0226 22:51:08.893197 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h89rc" event={"ID":"e895875e-1726-45be-95ae-ccf7c8656f92","Type":"ContainerStarted","Data":"90ba6be579f03e7bc291545096ddfe40ccb7e89f616774d10dbb94f11f9c0907"} Feb 26 22:51:09 crc kubenswrapper[4910]: I0226 22:51:09.906208 4910 scope.go:117] "RemoveContainer" containerID="ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526" Feb 26 22:51:09 crc kubenswrapper[4910]: E0226 22:51:09.906767 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:51:09 crc kubenswrapper[4910]: I0226 22:51:09.956205 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h89rc" 
event={"ID":"e895875e-1726-45be-95ae-ccf7c8656f92","Type":"ContainerStarted","Data":"685d61cbd663ec8e1f6b28947aa7cb27b0ae4b1ad441c8d9adef545f55d7e2da"} Feb 26 22:51:11 crc kubenswrapper[4910]: I0226 22:51:11.980494 4910 generic.go:334] "Generic (PLEG): container finished" podID="e895875e-1726-45be-95ae-ccf7c8656f92" containerID="685d61cbd663ec8e1f6b28947aa7cb27b0ae4b1ad441c8d9adef545f55d7e2da" exitCode=0 Feb 26 22:51:11 crc kubenswrapper[4910]: I0226 22:51:11.980582 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h89rc" event={"ID":"e895875e-1726-45be-95ae-ccf7c8656f92","Type":"ContainerDied","Data":"685d61cbd663ec8e1f6b28947aa7cb27b0ae4b1ad441c8d9adef545f55d7e2da"} Feb 26 22:51:13 crc kubenswrapper[4910]: I0226 22:51:13.000945 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h89rc" event={"ID":"e895875e-1726-45be-95ae-ccf7c8656f92","Type":"ContainerStarted","Data":"a2f5e08b45452fb66f6289162fd01ddf13291095c93cc078bf9fbcf3e61a9747"} Feb 26 22:51:13 crc kubenswrapper[4910]: I0226 22:51:13.044128 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h89rc" podStartSLOduration=2.578427649 podStartE2EDuration="6.044103418s" podCreationTimestamp="2026-02-26 22:51:07 +0000 UTC" firstStartedPulling="2026-02-26 22:51:08.895539238 +0000 UTC m=+3353.975029819" lastFinishedPulling="2026-02-26 22:51:12.361215007 +0000 UTC m=+3357.440705588" observedRunningTime="2026-02-26 22:51:13.037915649 +0000 UTC m=+3358.117406230" watchObservedRunningTime="2026-02-26 22:51:13.044103418 +0000 UTC m=+3358.123593989" Feb 26 22:51:17 crc kubenswrapper[4910]: I0226 22:51:17.811658 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h89rc" Feb 26 22:51:17 crc kubenswrapper[4910]: I0226 22:51:17.812186 4910 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-h89rc" Feb 26 22:51:17 crc kubenswrapper[4910]: I0226 22:51:17.875695 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h89rc" Feb 26 22:51:18 crc kubenswrapper[4910]: I0226 22:51:18.143220 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h89rc" Feb 26 22:51:18 crc kubenswrapper[4910]: I0226 22:51:18.225249 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h89rc"] Feb 26 22:51:20 crc kubenswrapper[4910]: I0226 22:51:20.084488 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h89rc" podUID="e895875e-1726-45be-95ae-ccf7c8656f92" containerName="registry-server" containerID="cri-o://a2f5e08b45452fb66f6289162fd01ddf13291095c93cc078bf9fbcf3e61a9747" gracePeriod=2 Feb 26 22:51:20 crc kubenswrapper[4910]: I0226 22:51:20.805746 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h89rc" Feb 26 22:51:20 crc kubenswrapper[4910]: I0226 22:51:20.934151 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e895875e-1726-45be-95ae-ccf7c8656f92-utilities\") pod \"e895875e-1726-45be-95ae-ccf7c8656f92\" (UID: \"e895875e-1726-45be-95ae-ccf7c8656f92\") " Feb 26 22:51:20 crc kubenswrapper[4910]: I0226 22:51:20.934293 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e895875e-1726-45be-95ae-ccf7c8656f92-catalog-content\") pod \"e895875e-1726-45be-95ae-ccf7c8656f92\" (UID: \"e895875e-1726-45be-95ae-ccf7c8656f92\") " Feb 26 22:51:20 crc kubenswrapper[4910]: I0226 22:51:20.934382 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xrr9\" (UniqueName: \"kubernetes.io/projected/e895875e-1726-45be-95ae-ccf7c8656f92-kube-api-access-5xrr9\") pod \"e895875e-1726-45be-95ae-ccf7c8656f92\" (UID: \"e895875e-1726-45be-95ae-ccf7c8656f92\") " Feb 26 22:51:20 crc kubenswrapper[4910]: I0226 22:51:20.935113 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e895875e-1726-45be-95ae-ccf7c8656f92-utilities" (OuterVolumeSpecName: "utilities") pod "e895875e-1726-45be-95ae-ccf7c8656f92" (UID: "e895875e-1726-45be-95ae-ccf7c8656f92"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:51:20 crc kubenswrapper[4910]: I0226 22:51:20.936260 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e895875e-1726-45be-95ae-ccf7c8656f92-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 22:51:20 crc kubenswrapper[4910]: I0226 22:51:20.943039 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e895875e-1726-45be-95ae-ccf7c8656f92-kube-api-access-5xrr9" (OuterVolumeSpecName: "kube-api-access-5xrr9") pod "e895875e-1726-45be-95ae-ccf7c8656f92" (UID: "e895875e-1726-45be-95ae-ccf7c8656f92"). InnerVolumeSpecName "kube-api-access-5xrr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:51:20 crc kubenswrapper[4910]: I0226 22:51:20.999435 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e895875e-1726-45be-95ae-ccf7c8656f92-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e895875e-1726-45be-95ae-ccf7c8656f92" (UID: "e895875e-1726-45be-95ae-ccf7c8656f92"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:51:21 crc kubenswrapper[4910]: I0226 22:51:21.038023 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xrr9\" (UniqueName: \"kubernetes.io/projected/e895875e-1726-45be-95ae-ccf7c8656f92-kube-api-access-5xrr9\") on node \"crc\" DevicePath \"\"" Feb 26 22:51:21 crc kubenswrapper[4910]: I0226 22:51:21.038077 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e895875e-1726-45be-95ae-ccf7c8656f92-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 22:51:21 crc kubenswrapper[4910]: I0226 22:51:21.100685 4910 generic.go:334] "Generic (PLEG): container finished" podID="e895875e-1726-45be-95ae-ccf7c8656f92" containerID="a2f5e08b45452fb66f6289162fd01ddf13291095c93cc078bf9fbcf3e61a9747" exitCode=0 Feb 26 22:51:21 crc kubenswrapper[4910]: I0226 22:51:21.100737 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h89rc" event={"ID":"e895875e-1726-45be-95ae-ccf7c8656f92","Type":"ContainerDied","Data":"a2f5e08b45452fb66f6289162fd01ddf13291095c93cc078bf9fbcf3e61a9747"} Feb 26 22:51:21 crc kubenswrapper[4910]: I0226 22:51:21.100771 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h89rc" event={"ID":"e895875e-1726-45be-95ae-ccf7c8656f92","Type":"ContainerDied","Data":"90ba6be579f03e7bc291545096ddfe40ccb7e89f616774d10dbb94f11f9c0907"} Feb 26 22:51:21 crc kubenswrapper[4910]: I0226 22:51:21.100793 4910 scope.go:117] "RemoveContainer" containerID="a2f5e08b45452fb66f6289162fd01ddf13291095c93cc078bf9fbcf3e61a9747" Feb 26 22:51:21 crc kubenswrapper[4910]: I0226 22:51:21.100955 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h89rc" Feb 26 22:51:21 crc kubenswrapper[4910]: I0226 22:51:21.152261 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h89rc"] Feb 26 22:51:21 crc kubenswrapper[4910]: I0226 22:51:21.153870 4910 scope.go:117] "RemoveContainer" containerID="685d61cbd663ec8e1f6b28947aa7cb27b0ae4b1ad441c8d9adef545f55d7e2da" Feb 26 22:51:21 crc kubenswrapper[4910]: I0226 22:51:21.172964 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h89rc"] Feb 26 22:51:21 crc kubenswrapper[4910]: I0226 22:51:21.190383 4910 scope.go:117] "RemoveContainer" containerID="b61106d6d0e17e03d25fb0f6f8abdb2ac3e11d735e6a409c4d11672036bd5df3" Feb 26 22:51:21 crc kubenswrapper[4910]: I0226 22:51:21.263555 4910 scope.go:117] "RemoveContainer" containerID="a2f5e08b45452fb66f6289162fd01ddf13291095c93cc078bf9fbcf3e61a9747" Feb 26 22:51:21 crc kubenswrapper[4910]: E0226 22:51:21.264217 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2f5e08b45452fb66f6289162fd01ddf13291095c93cc078bf9fbcf3e61a9747\": container with ID starting with a2f5e08b45452fb66f6289162fd01ddf13291095c93cc078bf9fbcf3e61a9747 not found: ID does not exist" containerID="a2f5e08b45452fb66f6289162fd01ddf13291095c93cc078bf9fbcf3e61a9747" Feb 26 22:51:21 crc kubenswrapper[4910]: I0226 22:51:21.264269 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2f5e08b45452fb66f6289162fd01ddf13291095c93cc078bf9fbcf3e61a9747"} err="failed to get container status \"a2f5e08b45452fb66f6289162fd01ddf13291095c93cc078bf9fbcf3e61a9747\": rpc error: code = NotFound desc = could not find container \"a2f5e08b45452fb66f6289162fd01ddf13291095c93cc078bf9fbcf3e61a9747\": container with ID starting with a2f5e08b45452fb66f6289162fd01ddf13291095c93cc078bf9fbcf3e61a9747 not 
found: ID does not exist" Feb 26 22:51:21 crc kubenswrapper[4910]: I0226 22:51:21.264313 4910 scope.go:117] "RemoveContainer" containerID="685d61cbd663ec8e1f6b28947aa7cb27b0ae4b1ad441c8d9adef545f55d7e2da" Feb 26 22:51:21 crc kubenswrapper[4910]: E0226 22:51:21.265827 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"685d61cbd663ec8e1f6b28947aa7cb27b0ae4b1ad441c8d9adef545f55d7e2da\": container with ID starting with 685d61cbd663ec8e1f6b28947aa7cb27b0ae4b1ad441c8d9adef545f55d7e2da not found: ID does not exist" containerID="685d61cbd663ec8e1f6b28947aa7cb27b0ae4b1ad441c8d9adef545f55d7e2da" Feb 26 22:51:21 crc kubenswrapper[4910]: I0226 22:51:21.265857 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"685d61cbd663ec8e1f6b28947aa7cb27b0ae4b1ad441c8d9adef545f55d7e2da"} err="failed to get container status \"685d61cbd663ec8e1f6b28947aa7cb27b0ae4b1ad441c8d9adef545f55d7e2da\": rpc error: code = NotFound desc = could not find container \"685d61cbd663ec8e1f6b28947aa7cb27b0ae4b1ad441c8d9adef545f55d7e2da\": container with ID starting with 685d61cbd663ec8e1f6b28947aa7cb27b0ae4b1ad441c8d9adef545f55d7e2da not found: ID does not exist" Feb 26 22:51:21 crc kubenswrapper[4910]: I0226 22:51:21.265875 4910 scope.go:117] "RemoveContainer" containerID="b61106d6d0e17e03d25fb0f6f8abdb2ac3e11d735e6a409c4d11672036bd5df3" Feb 26 22:51:21 crc kubenswrapper[4910]: E0226 22:51:21.266510 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b61106d6d0e17e03d25fb0f6f8abdb2ac3e11d735e6a409c4d11672036bd5df3\": container with ID starting with b61106d6d0e17e03d25fb0f6f8abdb2ac3e11d735e6a409c4d11672036bd5df3 not found: ID does not exist" containerID="b61106d6d0e17e03d25fb0f6f8abdb2ac3e11d735e6a409c4d11672036bd5df3" Feb 26 22:51:21 crc kubenswrapper[4910]: I0226 22:51:21.266569 4910 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b61106d6d0e17e03d25fb0f6f8abdb2ac3e11d735e6a409c4d11672036bd5df3"} err="failed to get container status \"b61106d6d0e17e03d25fb0f6f8abdb2ac3e11d735e6a409c4d11672036bd5df3\": rpc error: code = NotFound desc = could not find container \"b61106d6d0e17e03d25fb0f6f8abdb2ac3e11d735e6a409c4d11672036bd5df3\": container with ID starting with b61106d6d0e17e03d25fb0f6f8abdb2ac3e11d735e6a409c4d11672036bd5df3 not found: ID does not exist" Feb 26 22:51:21 crc kubenswrapper[4910]: I0226 22:51:21.914257 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e895875e-1726-45be-95ae-ccf7c8656f92" path="/var/lib/kubelet/pods/e895875e-1726-45be-95ae-ccf7c8656f92/volumes" Feb 26 22:51:24 crc kubenswrapper[4910]: I0226 22:51:24.901642 4910 scope.go:117] "RemoveContainer" containerID="ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526" Feb 26 22:51:24 crc kubenswrapper[4910]: E0226 22:51:24.902303 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:51:29 crc kubenswrapper[4910]: I0226 22:51:29.877971 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 26 22:51:29 crc kubenswrapper[4910]: E0226 22:51:29.880100 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e895875e-1726-45be-95ae-ccf7c8656f92" containerName="extract-content" Feb 26 22:51:29 crc kubenswrapper[4910]: I0226 22:51:29.880230 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="e895875e-1726-45be-95ae-ccf7c8656f92" containerName="extract-content" 
Feb 26 22:51:29 crc kubenswrapper[4910]: E0226 22:51:29.880343 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e895875e-1726-45be-95ae-ccf7c8656f92" containerName="registry-server" Feb 26 22:51:29 crc kubenswrapper[4910]: I0226 22:51:29.880420 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="e895875e-1726-45be-95ae-ccf7c8656f92" containerName="registry-server" Feb 26 22:51:29 crc kubenswrapper[4910]: E0226 22:51:29.880514 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e895875e-1726-45be-95ae-ccf7c8656f92" containerName="extract-utilities" Feb 26 22:51:29 crc kubenswrapper[4910]: I0226 22:51:29.880589 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="e895875e-1726-45be-95ae-ccf7c8656f92" containerName="extract-utilities" Feb 26 22:51:29 crc kubenswrapper[4910]: I0226 22:51:29.880952 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="e895875e-1726-45be-95ae-ccf7c8656f92" containerName="registry-server" Feb 26 22:51:29 crc kubenswrapper[4910]: I0226 22:51:29.881955 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 26 22:51:29 crc kubenswrapper[4910]: I0226 22:51:29.884859 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 26 22:51:29 crc kubenswrapper[4910]: I0226 22:51:29.884919 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-q4fk8" Feb 26 22:51:29 crc kubenswrapper[4910]: I0226 22:51:29.885522 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 26 22:51:29 crc kubenswrapper[4910]: I0226 22:51:29.885847 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 26 22:51:29 crc kubenswrapper[4910]: I0226 22:51:29.923983 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 26 22:51:30 crc kubenswrapper[4910]: I0226 22:51:30.075642 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " pod="openstack/tempest-tests-tempest" Feb 26 22:51:30 crc kubenswrapper[4910]: I0226 22:51:30.075715 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/221c5dbb-3674-4268-b698-109e2b97d374-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " pod="openstack/tempest-tests-tempest" Feb 26 22:51:30 crc kubenswrapper[4910]: I0226 22:51:30.075764 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/221c5dbb-3674-4268-b698-109e2b97d374-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " pod="openstack/tempest-tests-tempest" Feb 26 22:51:30 crc kubenswrapper[4910]: I0226 22:51:30.075872 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2v8p\" (UniqueName: \"kubernetes.io/projected/221c5dbb-3674-4268-b698-109e2b97d374-kube-api-access-k2v8p\") pod \"tempest-tests-tempest\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " pod="openstack/tempest-tests-tempest" Feb 26 22:51:30 crc kubenswrapper[4910]: I0226 22:51:30.075896 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/221c5dbb-3674-4268-b698-109e2b97d374-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " pod="openstack/tempest-tests-tempest" Feb 26 22:51:30 crc kubenswrapper[4910]: I0226 22:51:30.075931 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/221c5dbb-3674-4268-b698-109e2b97d374-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " pod="openstack/tempest-tests-tempest" Feb 26 22:51:30 crc kubenswrapper[4910]: I0226 22:51:30.075999 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/221c5dbb-3674-4268-b698-109e2b97d374-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " pod="openstack/tempest-tests-tempest" Feb 26 22:51:30 crc kubenswrapper[4910]: I0226 22:51:30.076053 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/221c5dbb-3674-4268-b698-109e2b97d374-config-data\") pod \"tempest-tests-tempest\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " pod="openstack/tempest-tests-tempest" Feb 26 22:51:30 crc kubenswrapper[4910]: I0226 22:51:30.076152 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/221c5dbb-3674-4268-b698-109e2b97d374-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " pod="openstack/tempest-tests-tempest" Feb 26 22:51:30 crc kubenswrapper[4910]: I0226 22:51:30.178036 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/221c5dbb-3674-4268-b698-109e2b97d374-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " pod="openstack/tempest-tests-tempest" Feb 26 22:51:30 crc kubenswrapper[4910]: I0226 22:51:30.178212 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " pod="openstack/tempest-tests-tempest" Feb 26 22:51:30 crc kubenswrapper[4910]: I0226 22:51:30.178585 4910 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/tempest-tests-tempest" Feb 26 22:51:30 crc kubenswrapper[4910]: I0226 22:51:30.181321 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/221c5dbb-3674-4268-b698-109e2b97d374-test-operator-ephemeral-temporary\") pod 
\"tempest-tests-tempest\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " pod="openstack/tempest-tests-tempest" Feb 26 22:51:30 crc kubenswrapper[4910]: I0226 22:51:30.181559 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/221c5dbb-3674-4268-b698-109e2b97d374-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " pod="openstack/tempest-tests-tempest" Feb 26 22:51:30 crc kubenswrapper[4910]: I0226 22:51:30.181831 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2v8p\" (UniqueName: \"kubernetes.io/projected/221c5dbb-3674-4268-b698-109e2b97d374-kube-api-access-k2v8p\") pod \"tempest-tests-tempest\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " pod="openstack/tempest-tests-tempest" Feb 26 22:51:30 crc kubenswrapper[4910]: I0226 22:51:30.181900 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/221c5dbb-3674-4268-b698-109e2b97d374-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " pod="openstack/tempest-tests-tempest" Feb 26 22:51:30 crc kubenswrapper[4910]: I0226 22:51:30.181991 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/221c5dbb-3674-4268-b698-109e2b97d374-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " pod="openstack/tempest-tests-tempest" Feb 26 22:51:30 crc kubenswrapper[4910]: I0226 22:51:30.182126 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/221c5dbb-3674-4268-b698-109e2b97d374-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" 
(UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " pod="openstack/tempest-tests-tempest" Feb 26 22:51:30 crc kubenswrapper[4910]: I0226 22:51:30.182219 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/221c5dbb-3674-4268-b698-109e2b97d374-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " pod="openstack/tempest-tests-tempest" Feb 26 22:51:30 crc kubenswrapper[4910]: I0226 22:51:30.182408 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/221c5dbb-3674-4268-b698-109e2b97d374-config-data\") pod \"tempest-tests-tempest\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " pod="openstack/tempest-tests-tempest" Feb 26 22:51:30 crc kubenswrapper[4910]: I0226 22:51:30.182604 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/221c5dbb-3674-4268-b698-109e2b97d374-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " pod="openstack/tempest-tests-tempest" Feb 26 22:51:30 crc kubenswrapper[4910]: I0226 22:51:30.183395 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/221c5dbb-3674-4268-b698-109e2b97d374-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " pod="openstack/tempest-tests-tempest" Feb 26 22:51:30 crc kubenswrapper[4910]: I0226 22:51:30.184513 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/221c5dbb-3674-4268-b698-109e2b97d374-config-data\") pod \"tempest-tests-tempest\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " pod="openstack/tempest-tests-tempest" Feb 26 22:51:30 crc kubenswrapper[4910]: I0226 
22:51:30.189373 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/221c5dbb-3674-4268-b698-109e2b97d374-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " pod="openstack/tempest-tests-tempest" Feb 26 22:51:30 crc kubenswrapper[4910]: I0226 22:51:30.194269 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/221c5dbb-3674-4268-b698-109e2b97d374-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " pod="openstack/tempest-tests-tempest" Feb 26 22:51:30 crc kubenswrapper[4910]: I0226 22:51:30.196339 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/221c5dbb-3674-4268-b698-109e2b97d374-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " pod="openstack/tempest-tests-tempest" Feb 26 22:51:30 crc kubenswrapper[4910]: I0226 22:51:30.222774 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " pod="openstack/tempest-tests-tempest" Feb 26 22:51:30 crc kubenswrapper[4910]: I0226 22:51:30.222848 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2v8p\" (UniqueName: \"kubernetes.io/projected/221c5dbb-3674-4268-b698-109e2b97d374-kube-api-access-k2v8p\") pod \"tempest-tests-tempest\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " pod="openstack/tempest-tests-tempest" Feb 26 22:51:30 crc kubenswrapper[4910]: I0226 22:51:30.514820 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 26 22:51:31 crc kubenswrapper[4910]: I0226 22:51:31.055301 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 26 22:51:31 crc kubenswrapper[4910]: I0226 22:51:31.219290 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"221c5dbb-3674-4268-b698-109e2b97d374","Type":"ContainerStarted","Data":"a217e2a359b53330a74a43b2e5787986de8466319150172681d67b701bfba477"} Feb 26 22:51:35 crc kubenswrapper[4910]: I0226 22:51:35.912899 4910 scope.go:117] "RemoveContainer" containerID="ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526" Feb 26 22:51:35 crc kubenswrapper[4910]: E0226 22:51:35.913523 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:51:50 crc kubenswrapper[4910]: I0226 22:51:50.901637 4910 scope.go:117] "RemoveContainer" containerID="ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526" Feb 26 22:51:50 crc kubenswrapper[4910]: E0226 22:51:50.902325 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:52:00 crc kubenswrapper[4910]: I0226 22:52:00.150288 4910 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29535772-lsgj5"] Feb 26 22:52:00 crc kubenswrapper[4910]: I0226 22:52:00.152182 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535772-lsgj5" Feb 26 22:52:00 crc kubenswrapper[4910]: I0226 22:52:00.154390 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 22:52:00 crc kubenswrapper[4910]: I0226 22:52:00.154484 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 22:52:00 crc kubenswrapper[4910]: I0226 22:52:00.154971 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 22:52:00 crc kubenswrapper[4910]: I0226 22:52:00.162026 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535772-lsgj5"] Feb 26 22:52:00 crc kubenswrapper[4910]: I0226 22:52:00.284863 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljjcb\" (UniqueName: \"kubernetes.io/projected/426dfd59-d94d-4347-8117-569760596419-kube-api-access-ljjcb\") pod \"auto-csr-approver-29535772-lsgj5\" (UID: \"426dfd59-d94d-4347-8117-569760596419\") " pod="openshift-infra/auto-csr-approver-29535772-lsgj5" Feb 26 22:52:00 crc kubenswrapper[4910]: I0226 22:52:00.387434 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljjcb\" (UniqueName: \"kubernetes.io/projected/426dfd59-d94d-4347-8117-569760596419-kube-api-access-ljjcb\") pod \"auto-csr-approver-29535772-lsgj5\" (UID: \"426dfd59-d94d-4347-8117-569760596419\") " pod="openshift-infra/auto-csr-approver-29535772-lsgj5" Feb 26 22:52:00 crc kubenswrapper[4910]: I0226 22:52:00.412737 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljjcb\" (UniqueName: 
\"kubernetes.io/projected/426dfd59-d94d-4347-8117-569760596419-kube-api-access-ljjcb\") pod \"auto-csr-approver-29535772-lsgj5\" (UID: \"426dfd59-d94d-4347-8117-569760596419\") " pod="openshift-infra/auto-csr-approver-29535772-lsgj5" Feb 26 22:52:00 crc kubenswrapper[4910]: I0226 22:52:00.489134 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535772-lsgj5" Feb 26 22:52:04 crc kubenswrapper[4910]: I0226 22:52:04.901452 4910 scope.go:117] "RemoveContainer" containerID="ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526" Feb 26 22:52:04 crc kubenswrapper[4910]: E0226 22:52:04.902683 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:52:06 crc kubenswrapper[4910]: E0226 22:52:06.519785 4910 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 26 22:52:06 crc kubenswrapper[4910]: E0226 22:52:06.520784 4910 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k2v8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(221c5dbb-3674-4268-b698-109e2b97d374): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 22:52:06 crc kubenswrapper[4910]: E0226 22:52:06.521957 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="221c5dbb-3674-4268-b698-109e2b97d374" Feb 26 22:52:06 crc kubenswrapper[4910]: E0226 22:52:06.615246 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="221c5dbb-3674-4268-b698-109e2b97d374" Feb 26 22:52:06 crc 
kubenswrapper[4910]: I0226 22:52:06.936703 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535772-lsgj5"] Feb 26 22:52:07 crc kubenswrapper[4910]: I0226 22:52:07.620969 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535772-lsgj5" event={"ID":"426dfd59-d94d-4347-8117-569760596419","Type":"ContainerStarted","Data":"d515bc4223caae020602ef2aff331df0d79c497ea22de12624bd5fc7511948dd"} Feb 26 22:52:08 crc kubenswrapper[4910]: I0226 22:52:08.637940 4910 generic.go:334] "Generic (PLEG): container finished" podID="426dfd59-d94d-4347-8117-569760596419" containerID="4065d65d1868bd02a96cda3dc7bbc93ada1f79405c769ed38fbadd220db43b5f" exitCode=0 Feb 26 22:52:08 crc kubenswrapper[4910]: I0226 22:52:08.638051 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535772-lsgj5" event={"ID":"426dfd59-d94d-4347-8117-569760596419","Type":"ContainerDied","Data":"4065d65d1868bd02a96cda3dc7bbc93ada1f79405c769ed38fbadd220db43b5f"} Feb 26 22:52:10 crc kubenswrapper[4910]: I0226 22:52:10.109669 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535772-lsgj5" Feb 26 22:52:10 crc kubenswrapper[4910]: I0226 22:52:10.228564 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljjcb\" (UniqueName: \"kubernetes.io/projected/426dfd59-d94d-4347-8117-569760596419-kube-api-access-ljjcb\") pod \"426dfd59-d94d-4347-8117-569760596419\" (UID: \"426dfd59-d94d-4347-8117-569760596419\") " Feb 26 22:52:10 crc kubenswrapper[4910]: I0226 22:52:10.237551 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/426dfd59-d94d-4347-8117-569760596419-kube-api-access-ljjcb" (OuterVolumeSpecName: "kube-api-access-ljjcb") pod "426dfd59-d94d-4347-8117-569760596419" (UID: "426dfd59-d94d-4347-8117-569760596419"). 
InnerVolumeSpecName "kube-api-access-ljjcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:52:10 crc kubenswrapper[4910]: I0226 22:52:10.332154 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljjcb\" (UniqueName: \"kubernetes.io/projected/426dfd59-d94d-4347-8117-569760596419-kube-api-access-ljjcb\") on node \"crc\" DevicePath \"\"" Feb 26 22:52:10 crc kubenswrapper[4910]: I0226 22:52:10.660119 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535772-lsgj5" event={"ID":"426dfd59-d94d-4347-8117-569760596419","Type":"ContainerDied","Data":"d515bc4223caae020602ef2aff331df0d79c497ea22de12624bd5fc7511948dd"} Feb 26 22:52:10 crc kubenswrapper[4910]: I0226 22:52:10.660561 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d515bc4223caae020602ef2aff331df0d79c497ea22de12624bd5fc7511948dd" Feb 26 22:52:10 crc kubenswrapper[4910]: I0226 22:52:10.660245 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535772-lsgj5" Feb 26 22:52:11 crc kubenswrapper[4910]: I0226 22:52:11.201778 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535766-tc9lz"] Feb 26 22:52:11 crc kubenswrapper[4910]: I0226 22:52:11.215377 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535766-tc9lz"] Feb 26 22:52:11 crc kubenswrapper[4910]: I0226 22:52:11.915011 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c67a2c7-bad2-416b-ba3a-9b81e770990e" path="/var/lib/kubelet/pods/3c67a2c7-bad2-416b-ba3a-9b81e770990e/volumes" Feb 26 22:52:19 crc kubenswrapper[4910]: I0226 22:52:19.902512 4910 scope.go:117] "RemoveContainer" containerID="ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526" Feb 26 22:52:19 crc kubenswrapper[4910]: E0226 22:52:19.904358 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:52:20 crc kubenswrapper[4910]: I0226 22:52:20.384457 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 26 22:52:21 crc kubenswrapper[4910]: I0226 22:52:21.798563 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"221c5dbb-3674-4268-b698-109e2b97d374","Type":"ContainerStarted","Data":"4ac6fe31e94da904114c1615b0efc20a87d0d1e5294c0d9959f06000b410fa86"} Feb 26 22:52:21 crc kubenswrapper[4910]: I0226 22:52:21.828541 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/tempest-tests-tempest" podStartSLOduration=4.502174112 podStartE2EDuration="53.828514096s" podCreationTimestamp="2026-02-26 22:51:28 +0000 UTC" firstStartedPulling="2026-02-26 22:51:31.05446413 +0000 UTC m=+3376.133954701" lastFinishedPulling="2026-02-26 22:52:20.380804144 +0000 UTC m=+3425.460294685" observedRunningTime="2026-02-26 22:52:21.8172727 +0000 UTC m=+3426.896763271" watchObservedRunningTime="2026-02-26 22:52:21.828514096 +0000 UTC m=+3426.908004677" Feb 26 22:52:34 crc kubenswrapper[4910]: I0226 22:52:34.901849 4910 scope.go:117] "RemoveContainer" containerID="ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526" Feb 26 22:52:34 crc kubenswrapper[4910]: E0226 22:52:34.904615 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:52:40 crc kubenswrapper[4910]: I0226 22:52:40.182963 4910 scope.go:117] "RemoveContainer" containerID="c57ffa546cf87c80392b341cfcf60f6afd0af9ec0d8bc5803110a843e3067e1a" Feb 26 22:52:49 crc kubenswrapper[4910]: I0226 22:52:49.902274 4910 scope.go:117] "RemoveContainer" containerID="ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526" Feb 26 22:52:49 crc kubenswrapper[4910]: E0226 22:52:49.903462 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" 
podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:53:01 crc kubenswrapper[4910]: I0226 22:53:01.902237 4910 scope.go:117] "RemoveContainer" containerID="ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526" Feb 26 22:53:01 crc kubenswrapper[4910]: E0226 22:53:01.903040 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:53:12 crc kubenswrapper[4910]: I0226 22:53:12.903518 4910 scope.go:117] "RemoveContainer" containerID="ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526" Feb 26 22:53:12 crc kubenswrapper[4910]: E0226 22:53:12.904454 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:53:26 crc kubenswrapper[4910]: I0226 22:53:26.901196 4910 scope.go:117] "RemoveContainer" containerID="ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526" Feb 26 22:53:26 crc kubenswrapper[4910]: E0226 22:53:26.901955 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:53:40 crc kubenswrapper[4910]: I0226 22:53:40.901782 4910 scope.go:117] "RemoveContainer" containerID="ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526" Feb 26 22:53:40 crc kubenswrapper[4910]: E0226 22:53:40.902545 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:53:52 crc kubenswrapper[4910]: I0226 22:53:52.901589 4910 scope.go:117] "RemoveContainer" containerID="ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526" Feb 26 22:53:52 crc kubenswrapper[4910]: E0226 22:53:52.902357 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 22:53:54 crc kubenswrapper[4910]: I0226 22:53:54.364683 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sm7wv"] Feb 26 22:53:54 crc kubenswrapper[4910]: E0226 22:53:54.365332 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="426dfd59-d94d-4347-8117-569760596419" containerName="oc" Feb 26 22:53:54 crc kubenswrapper[4910]: I0226 22:53:54.365344 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="426dfd59-d94d-4347-8117-569760596419" containerName="oc" Feb 
26 22:53:54 crc kubenswrapper[4910]: I0226 22:53:54.365564 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="426dfd59-d94d-4347-8117-569760596419" containerName="oc" Feb 26 22:53:54 crc kubenswrapper[4910]: I0226 22:53:54.367085 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sm7wv" Feb 26 22:53:54 crc kubenswrapper[4910]: I0226 22:53:54.387310 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sm7wv"] Feb 26 22:53:54 crc kubenswrapper[4910]: I0226 22:53:54.480388 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzxtj\" (UniqueName: \"kubernetes.io/projected/dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f-kube-api-access-xzxtj\") pod \"redhat-operators-sm7wv\" (UID: \"dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f\") " pod="openshift-marketplace/redhat-operators-sm7wv" Feb 26 22:53:54 crc kubenswrapper[4910]: I0226 22:53:54.480494 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f-utilities\") pod \"redhat-operators-sm7wv\" (UID: \"dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f\") " pod="openshift-marketplace/redhat-operators-sm7wv" Feb 26 22:53:54 crc kubenswrapper[4910]: I0226 22:53:54.480691 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f-catalog-content\") pod \"redhat-operators-sm7wv\" (UID: \"dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f\") " pod="openshift-marketplace/redhat-operators-sm7wv" Feb 26 22:53:54 crc kubenswrapper[4910]: I0226 22:53:54.582942 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f-utilities\") pod \"redhat-operators-sm7wv\" (UID: \"dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f\") " pod="openshift-marketplace/redhat-operators-sm7wv" Feb 26 22:53:54 crc kubenswrapper[4910]: I0226 22:53:54.583030 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f-catalog-content\") pod \"redhat-operators-sm7wv\" (UID: \"dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f\") " pod="openshift-marketplace/redhat-operators-sm7wv" Feb 26 22:53:54 crc kubenswrapper[4910]: I0226 22:53:54.583200 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzxtj\" (UniqueName: \"kubernetes.io/projected/dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f-kube-api-access-xzxtj\") pod \"redhat-operators-sm7wv\" (UID: \"dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f\") " pod="openshift-marketplace/redhat-operators-sm7wv" Feb 26 22:53:54 crc kubenswrapper[4910]: I0226 22:53:54.583551 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f-catalog-content\") pod \"redhat-operators-sm7wv\" (UID: \"dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f\") " pod="openshift-marketplace/redhat-operators-sm7wv" Feb 26 22:53:54 crc kubenswrapper[4910]: I0226 22:53:54.583592 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f-utilities\") pod \"redhat-operators-sm7wv\" (UID: \"dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f\") " pod="openshift-marketplace/redhat-operators-sm7wv" Feb 26 22:53:54 crc kubenswrapper[4910]: I0226 22:53:54.611303 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzxtj\" (UniqueName: 
\"kubernetes.io/projected/dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f-kube-api-access-xzxtj\") pod \"redhat-operators-sm7wv\" (UID: \"dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f\") " pod="openshift-marketplace/redhat-operators-sm7wv" Feb 26 22:53:54 crc kubenswrapper[4910]: I0226 22:53:54.689256 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sm7wv" Feb 26 22:53:55 crc kubenswrapper[4910]: I0226 22:53:55.195510 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sm7wv"] Feb 26 22:53:55 crc kubenswrapper[4910]: I0226 22:53:55.850947 4910 generic.go:334] "Generic (PLEG): container finished" podID="dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f" containerID="38bcf57dd9ff5eb0dadca9a140660b31b34e1ea47cf0d72b88188ae744175483" exitCode=0 Feb 26 22:53:55 crc kubenswrapper[4910]: I0226 22:53:55.850995 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sm7wv" event={"ID":"dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f","Type":"ContainerDied","Data":"38bcf57dd9ff5eb0dadca9a140660b31b34e1ea47cf0d72b88188ae744175483"} Feb 26 22:53:55 crc kubenswrapper[4910]: I0226 22:53:55.851265 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sm7wv" event={"ID":"dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f","Type":"ContainerStarted","Data":"35cf05e6d3b7f842d28b2c21767e1f96d5f3af90a2bcb3cbb477d689fb697961"} Feb 26 22:53:56 crc kubenswrapper[4910]: I0226 22:53:56.867689 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sm7wv" event={"ID":"dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f","Type":"ContainerStarted","Data":"3fa65247aea88d468923d60e895c41b4f8b832cac1820ba8fea320292595bbdc"} Feb 26 22:54:00 crc kubenswrapper[4910]: I0226 22:54:00.143043 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535774-585cs"] Feb 26 22:54:00 crc 
kubenswrapper[4910]: I0226 22:54:00.144811 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535774-585cs" Feb 26 22:54:00 crc kubenswrapper[4910]: I0226 22:54:00.152060 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 22:54:00 crc kubenswrapper[4910]: I0226 22:54:00.152376 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 22:54:00 crc kubenswrapper[4910]: I0226 22:54:00.153466 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 22:54:00 crc kubenswrapper[4910]: I0226 22:54:00.153970 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535774-585cs"] Feb 26 22:54:00 crc kubenswrapper[4910]: I0226 22:54:00.231496 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6zw2\" (UniqueName: \"kubernetes.io/projected/15c3b6d3-02af-4746-8a7e-5881a9ce595e-kube-api-access-f6zw2\") pod \"auto-csr-approver-29535774-585cs\" (UID: \"15c3b6d3-02af-4746-8a7e-5881a9ce595e\") " pod="openshift-infra/auto-csr-approver-29535774-585cs" Feb 26 22:54:00 crc kubenswrapper[4910]: I0226 22:54:00.333790 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6zw2\" (UniqueName: \"kubernetes.io/projected/15c3b6d3-02af-4746-8a7e-5881a9ce595e-kube-api-access-f6zw2\") pod \"auto-csr-approver-29535774-585cs\" (UID: \"15c3b6d3-02af-4746-8a7e-5881a9ce595e\") " pod="openshift-infra/auto-csr-approver-29535774-585cs" Feb 26 22:54:00 crc kubenswrapper[4910]: I0226 22:54:00.355912 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6zw2\" (UniqueName: \"kubernetes.io/projected/15c3b6d3-02af-4746-8a7e-5881a9ce595e-kube-api-access-f6zw2\") pod 
\"auto-csr-approver-29535774-585cs\" (UID: \"15c3b6d3-02af-4746-8a7e-5881a9ce595e\") " pod="openshift-infra/auto-csr-approver-29535774-585cs" Feb 26 22:54:00 crc kubenswrapper[4910]: I0226 22:54:00.465532 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535774-585cs" Feb 26 22:54:01 crc kubenswrapper[4910]: W0226 22:54:01.014114 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15c3b6d3_02af_4746_8a7e_5881a9ce595e.slice/crio-bd249197b89b22bf0d8a77cbf29aa3af7e991dd92474529f9bc0f30dffc7951b WatchSource:0}: Error finding container bd249197b89b22bf0d8a77cbf29aa3af7e991dd92474529f9bc0f30dffc7951b: Status 404 returned error can't find the container with id bd249197b89b22bf0d8a77cbf29aa3af7e991dd92474529f9bc0f30dffc7951b Feb 26 22:54:01 crc kubenswrapper[4910]: I0226 22:54:01.015900 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535774-585cs"] Feb 26 22:54:01 crc kubenswrapper[4910]: I0226 22:54:01.948873 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535774-585cs" event={"ID":"15c3b6d3-02af-4746-8a7e-5881a9ce595e","Type":"ContainerStarted","Data":"bd249197b89b22bf0d8a77cbf29aa3af7e991dd92474529f9bc0f30dffc7951b"} Feb 26 22:54:02 crc kubenswrapper[4910]: I0226 22:54:02.960399 4910 generic.go:334] "Generic (PLEG): container finished" podID="15c3b6d3-02af-4746-8a7e-5881a9ce595e" containerID="c8d3b791e760d99a13b1782e5ef512ce7700ad8eaa2cfa81808f32472d346d2f" exitCode=0 Feb 26 22:54:02 crc kubenswrapper[4910]: I0226 22:54:02.960498 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535774-585cs" event={"ID":"15c3b6d3-02af-4746-8a7e-5881a9ce595e","Type":"ContainerDied","Data":"c8d3b791e760d99a13b1782e5ef512ce7700ad8eaa2cfa81808f32472d346d2f"} Feb 26 22:54:02 crc kubenswrapper[4910]: 
I0226 22:54:02.964269 4910 generic.go:334] "Generic (PLEG): container finished" podID="dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f" containerID="3fa65247aea88d468923d60e895c41b4f8b832cac1820ba8fea320292595bbdc" exitCode=0 Feb 26 22:54:02 crc kubenswrapper[4910]: I0226 22:54:02.964309 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sm7wv" event={"ID":"dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f","Type":"ContainerDied","Data":"3fa65247aea88d468923d60e895c41b4f8b832cac1820ba8fea320292595bbdc"} Feb 26 22:54:03 crc kubenswrapper[4910]: I0226 22:54:03.901642 4910 scope.go:117] "RemoveContainer" containerID="ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526" Feb 26 22:54:03 crc kubenswrapper[4910]: I0226 22:54:03.977607 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sm7wv" event={"ID":"dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f","Type":"ContainerStarted","Data":"53780fbdc5ec3e530d8f0372f8d9d288faadfb1e2ff03e2b9901d9a868ac2bfd"} Feb 26 22:54:04 crc kubenswrapper[4910]: I0226 22:54:04.000075 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sm7wv" podStartSLOduration=2.493420414 podStartE2EDuration="10.00005723s" podCreationTimestamp="2026-02-26 22:53:54 +0000 UTC" firstStartedPulling="2026-02-26 22:53:55.852795884 +0000 UTC m=+3520.932286425" lastFinishedPulling="2026-02-26 22:54:03.3594327 +0000 UTC m=+3528.438923241" observedRunningTime="2026-02-26 22:54:03.991872818 +0000 UTC m=+3529.071363399" watchObservedRunningTime="2026-02-26 22:54:04.00005723 +0000 UTC m=+3529.079547781" Feb 26 22:54:04 crc kubenswrapper[4910]: I0226 22:54:04.645263 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535774-585cs" Feb 26 22:54:04 crc kubenswrapper[4910]: I0226 22:54:04.702152 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sm7wv" Feb 26 22:54:04 crc kubenswrapper[4910]: I0226 22:54:04.702291 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sm7wv" Feb 26 22:54:04 crc kubenswrapper[4910]: I0226 22:54:04.734223 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6zw2\" (UniqueName: \"kubernetes.io/projected/15c3b6d3-02af-4746-8a7e-5881a9ce595e-kube-api-access-f6zw2\") pod \"15c3b6d3-02af-4746-8a7e-5881a9ce595e\" (UID: \"15c3b6d3-02af-4746-8a7e-5881a9ce595e\") " Feb 26 22:54:04 crc kubenswrapper[4910]: I0226 22:54:04.741910 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15c3b6d3-02af-4746-8a7e-5881a9ce595e-kube-api-access-f6zw2" (OuterVolumeSpecName: "kube-api-access-f6zw2") pod "15c3b6d3-02af-4746-8a7e-5881a9ce595e" (UID: "15c3b6d3-02af-4746-8a7e-5881a9ce595e"). InnerVolumeSpecName "kube-api-access-f6zw2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:54:04 crc kubenswrapper[4910]: I0226 22:54:04.836568 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6zw2\" (UniqueName: \"kubernetes.io/projected/15c3b6d3-02af-4746-8a7e-5881a9ce595e-kube-api-access-f6zw2\") on node \"crc\" DevicePath \"\"" Feb 26 22:54:04 crc kubenswrapper[4910]: I0226 22:54:04.990397 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerStarted","Data":"71354e58e453bb98bc8e73f6f274dd8d77953aba228bbccd64f152573ebcdcb1"} Feb 26 22:54:04 crc kubenswrapper[4910]: I0226 22:54:04.993375 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535774-585cs" event={"ID":"15c3b6d3-02af-4746-8a7e-5881a9ce595e","Type":"ContainerDied","Data":"bd249197b89b22bf0d8a77cbf29aa3af7e991dd92474529f9bc0f30dffc7951b"} Feb 26 22:54:04 crc kubenswrapper[4910]: I0226 22:54:04.993429 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd249197b89b22bf0d8a77cbf29aa3af7e991dd92474529f9bc0f30dffc7951b" Feb 26 22:54:04 crc kubenswrapper[4910]: I0226 22:54:04.993391 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535774-585cs" Feb 26 22:54:05 crc kubenswrapper[4910]: I0226 22:54:05.733071 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535768-nbr6s"] Feb 26 22:54:05 crc kubenswrapper[4910]: I0226 22:54:05.747858 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535768-nbr6s"] Feb 26 22:54:05 crc kubenswrapper[4910]: I0226 22:54:05.764543 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sm7wv" podUID="dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f" containerName="registry-server" probeResult="failure" output=< Feb 26 22:54:05 crc kubenswrapper[4910]: timeout: failed to connect service ":50051" within 1s Feb 26 22:54:05 crc kubenswrapper[4910]: > Feb 26 22:54:05 crc kubenswrapper[4910]: I0226 22:54:05.967100 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deb17ae0-31dc-4ef5-b296-56e2c1980948" path="/var/lib/kubelet/pods/deb17ae0-31dc-4ef5-b296-56e2c1980948/volumes" Feb 26 22:54:15 crc kubenswrapper[4910]: I0226 22:54:15.766290 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sm7wv" podUID="dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f" containerName="registry-server" probeResult="failure" output=< Feb 26 22:54:15 crc kubenswrapper[4910]: timeout: failed to connect service ":50051" within 1s Feb 26 22:54:15 crc kubenswrapper[4910]: > Feb 26 22:54:25 crc kubenswrapper[4910]: I0226 22:54:25.748242 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sm7wv" podUID="dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f" containerName="registry-server" probeResult="failure" output=< Feb 26 22:54:25 crc kubenswrapper[4910]: timeout: failed to connect service ":50051" within 1s Feb 26 22:54:25 crc kubenswrapper[4910]: > Feb 26 22:54:34 crc kubenswrapper[4910]: I0226 22:54:34.746632 4910 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sm7wv" Feb 26 22:54:34 crc kubenswrapper[4910]: I0226 22:54:34.801556 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sm7wv" Feb 26 22:54:34 crc kubenswrapper[4910]: I0226 22:54:34.989737 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sm7wv"] Feb 26 22:54:36 crc kubenswrapper[4910]: I0226 22:54:36.325712 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sm7wv" podUID="dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f" containerName="registry-server" containerID="cri-o://53780fbdc5ec3e530d8f0372f8d9d288faadfb1e2ff03e2b9901d9a868ac2bfd" gracePeriod=2 Feb 26 22:54:37 crc kubenswrapper[4910]: I0226 22:54:37.041103 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sm7wv" Feb 26 22:54:37 crc kubenswrapper[4910]: I0226 22:54:37.153204 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f-utilities\") pod \"dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f\" (UID: \"dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f\") " Feb 26 22:54:37 crc kubenswrapper[4910]: I0226 22:54:37.153378 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f-catalog-content\") pod \"dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f\" (UID: \"dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f\") " Feb 26 22:54:37 crc kubenswrapper[4910]: I0226 22:54:37.154025 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f-utilities" (OuterVolumeSpecName: "utilities") pod 
"dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f" (UID: "dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:54:37 crc kubenswrapper[4910]: I0226 22:54:37.158397 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzxtj\" (UniqueName: \"kubernetes.io/projected/dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f-kube-api-access-xzxtj\") pod \"dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f\" (UID: \"dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f\") " Feb 26 22:54:37 crc kubenswrapper[4910]: I0226 22:54:37.159373 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 22:54:37 crc kubenswrapper[4910]: I0226 22:54:37.169063 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f-kube-api-access-xzxtj" (OuterVolumeSpecName: "kube-api-access-xzxtj") pod "dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f" (UID: "dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f"). InnerVolumeSpecName "kube-api-access-xzxtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:54:37 crc kubenswrapper[4910]: I0226 22:54:37.260822 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzxtj\" (UniqueName: \"kubernetes.io/projected/dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f-kube-api-access-xzxtj\") on node \"crc\" DevicePath \"\"" Feb 26 22:54:37 crc kubenswrapper[4910]: I0226 22:54:37.291403 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f" (UID: "dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:54:37 crc kubenswrapper[4910]: I0226 22:54:37.336947 4910 generic.go:334] "Generic (PLEG): container finished" podID="dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f" containerID="53780fbdc5ec3e530d8f0372f8d9d288faadfb1e2ff03e2b9901d9a868ac2bfd" exitCode=0 Feb 26 22:54:37 crc kubenswrapper[4910]: I0226 22:54:37.336985 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sm7wv" event={"ID":"dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f","Type":"ContainerDied","Data":"53780fbdc5ec3e530d8f0372f8d9d288faadfb1e2ff03e2b9901d9a868ac2bfd"} Feb 26 22:54:37 crc kubenswrapper[4910]: I0226 22:54:37.337010 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sm7wv" event={"ID":"dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f","Type":"ContainerDied","Data":"35cf05e6d3b7f842d28b2c21767e1f96d5f3af90a2bcb3cbb477d689fb697961"} Feb 26 22:54:37 crc kubenswrapper[4910]: I0226 22:54:37.337028 4910 scope.go:117] "RemoveContainer" containerID="53780fbdc5ec3e530d8f0372f8d9d288faadfb1e2ff03e2b9901d9a868ac2bfd" Feb 26 22:54:37 crc kubenswrapper[4910]: I0226 22:54:37.337197 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sm7wv" Feb 26 22:54:37 crc kubenswrapper[4910]: I0226 22:54:37.362942 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 22:54:37 crc kubenswrapper[4910]: I0226 22:54:37.378841 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sm7wv"] Feb 26 22:54:37 crc kubenswrapper[4910]: I0226 22:54:37.382546 4910 scope.go:117] "RemoveContainer" containerID="3fa65247aea88d468923d60e895c41b4f8b832cac1820ba8fea320292595bbdc" Feb 26 22:54:37 crc kubenswrapper[4910]: I0226 22:54:37.387789 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sm7wv"] Feb 26 22:54:37 crc kubenswrapper[4910]: I0226 22:54:37.408449 4910 scope.go:117] "RemoveContainer" containerID="38bcf57dd9ff5eb0dadca9a140660b31b34e1ea47cf0d72b88188ae744175483" Feb 26 22:54:37 crc kubenswrapper[4910]: I0226 22:54:37.451881 4910 scope.go:117] "RemoveContainer" containerID="53780fbdc5ec3e530d8f0372f8d9d288faadfb1e2ff03e2b9901d9a868ac2bfd" Feb 26 22:54:37 crc kubenswrapper[4910]: E0226 22:54:37.452240 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53780fbdc5ec3e530d8f0372f8d9d288faadfb1e2ff03e2b9901d9a868ac2bfd\": container with ID starting with 53780fbdc5ec3e530d8f0372f8d9d288faadfb1e2ff03e2b9901d9a868ac2bfd not found: ID does not exist" containerID="53780fbdc5ec3e530d8f0372f8d9d288faadfb1e2ff03e2b9901d9a868ac2bfd" Feb 26 22:54:37 crc kubenswrapper[4910]: I0226 22:54:37.452284 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53780fbdc5ec3e530d8f0372f8d9d288faadfb1e2ff03e2b9901d9a868ac2bfd"} err="failed to get container status 
\"53780fbdc5ec3e530d8f0372f8d9d288faadfb1e2ff03e2b9901d9a868ac2bfd\": rpc error: code = NotFound desc = could not find container \"53780fbdc5ec3e530d8f0372f8d9d288faadfb1e2ff03e2b9901d9a868ac2bfd\": container with ID starting with 53780fbdc5ec3e530d8f0372f8d9d288faadfb1e2ff03e2b9901d9a868ac2bfd not found: ID does not exist" Feb 26 22:54:37 crc kubenswrapper[4910]: I0226 22:54:37.452313 4910 scope.go:117] "RemoveContainer" containerID="3fa65247aea88d468923d60e895c41b4f8b832cac1820ba8fea320292595bbdc" Feb 26 22:54:37 crc kubenswrapper[4910]: E0226 22:54:37.452617 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fa65247aea88d468923d60e895c41b4f8b832cac1820ba8fea320292595bbdc\": container with ID starting with 3fa65247aea88d468923d60e895c41b4f8b832cac1820ba8fea320292595bbdc not found: ID does not exist" containerID="3fa65247aea88d468923d60e895c41b4f8b832cac1820ba8fea320292595bbdc" Feb 26 22:54:37 crc kubenswrapper[4910]: I0226 22:54:37.452648 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fa65247aea88d468923d60e895c41b4f8b832cac1820ba8fea320292595bbdc"} err="failed to get container status \"3fa65247aea88d468923d60e895c41b4f8b832cac1820ba8fea320292595bbdc\": rpc error: code = NotFound desc = could not find container \"3fa65247aea88d468923d60e895c41b4f8b832cac1820ba8fea320292595bbdc\": container with ID starting with 3fa65247aea88d468923d60e895c41b4f8b832cac1820ba8fea320292595bbdc not found: ID does not exist" Feb 26 22:54:37 crc kubenswrapper[4910]: I0226 22:54:37.452669 4910 scope.go:117] "RemoveContainer" containerID="38bcf57dd9ff5eb0dadca9a140660b31b34e1ea47cf0d72b88188ae744175483" Feb 26 22:54:37 crc kubenswrapper[4910]: E0226 22:54:37.452887 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"38bcf57dd9ff5eb0dadca9a140660b31b34e1ea47cf0d72b88188ae744175483\": container with ID starting with 38bcf57dd9ff5eb0dadca9a140660b31b34e1ea47cf0d72b88188ae744175483 not found: ID does not exist" containerID="38bcf57dd9ff5eb0dadca9a140660b31b34e1ea47cf0d72b88188ae744175483" Feb 26 22:54:37 crc kubenswrapper[4910]: I0226 22:54:37.452913 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38bcf57dd9ff5eb0dadca9a140660b31b34e1ea47cf0d72b88188ae744175483"} err="failed to get container status \"38bcf57dd9ff5eb0dadca9a140660b31b34e1ea47cf0d72b88188ae744175483\": rpc error: code = NotFound desc = could not find container \"38bcf57dd9ff5eb0dadca9a140660b31b34e1ea47cf0d72b88188ae744175483\": container with ID starting with 38bcf57dd9ff5eb0dadca9a140660b31b34e1ea47cf0d72b88188ae744175483 not found: ID does not exist" Feb 26 22:54:37 crc kubenswrapper[4910]: I0226 22:54:37.912436 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f" path="/var/lib/kubelet/pods/dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f/volumes" Feb 26 22:54:40 crc kubenswrapper[4910]: I0226 22:54:40.335883 4910 scope.go:117] "RemoveContainer" containerID="5b849f17460b7d7230c591a0d2993eac040d0792322cfe0223f5c2d280b6b888" Feb 26 22:55:06 crc kubenswrapper[4910]: I0226 22:55:06.861243 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kj5p4"] Feb 26 22:55:06 crc kubenswrapper[4910]: E0226 22:55:06.863603 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15c3b6d3-02af-4746-8a7e-5881a9ce595e" containerName="oc" Feb 26 22:55:06 crc kubenswrapper[4910]: I0226 22:55:06.863628 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="15c3b6d3-02af-4746-8a7e-5881a9ce595e" containerName="oc" Feb 26 22:55:06 crc kubenswrapper[4910]: E0226 22:55:06.863658 4910 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f" containerName="extract-content" Feb 26 22:55:06 crc kubenswrapper[4910]: I0226 22:55:06.863666 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f" containerName="extract-content" Feb 26 22:55:06 crc kubenswrapper[4910]: E0226 22:55:06.863698 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f" containerName="extract-utilities" Feb 26 22:55:06 crc kubenswrapper[4910]: I0226 22:55:06.863708 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f" containerName="extract-utilities" Feb 26 22:55:06 crc kubenswrapper[4910]: E0226 22:55:06.863720 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f" containerName="registry-server" Feb 26 22:55:06 crc kubenswrapper[4910]: I0226 22:55:06.863727 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f" containerName="registry-server" Feb 26 22:55:06 crc kubenswrapper[4910]: I0226 22:55:06.863961 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="15c3b6d3-02af-4746-8a7e-5881a9ce595e" containerName="oc" Feb 26 22:55:06 crc kubenswrapper[4910]: I0226 22:55:06.863990 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbb1d47f-ed4d-4b94-a00c-e9e5a09b6e9f" containerName="registry-server" Feb 26 22:55:06 crc kubenswrapper[4910]: I0226 22:55:06.865901 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kj5p4" Feb 26 22:55:06 crc kubenswrapper[4910]: I0226 22:55:06.873859 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kj5p4"] Feb 26 22:55:06 crc kubenswrapper[4910]: I0226 22:55:06.905004 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6b146d7-e267-4052-86bc-b1c8559976e3-utilities\") pod \"certified-operators-kj5p4\" (UID: \"e6b146d7-e267-4052-86bc-b1c8559976e3\") " pod="openshift-marketplace/certified-operators-kj5p4" Feb 26 22:55:06 crc kubenswrapper[4910]: I0226 22:55:06.905611 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f825t\" (UniqueName: \"kubernetes.io/projected/e6b146d7-e267-4052-86bc-b1c8559976e3-kube-api-access-f825t\") pod \"certified-operators-kj5p4\" (UID: \"e6b146d7-e267-4052-86bc-b1c8559976e3\") " pod="openshift-marketplace/certified-operators-kj5p4" Feb 26 22:55:06 crc kubenswrapper[4910]: I0226 22:55:06.905678 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6b146d7-e267-4052-86bc-b1c8559976e3-catalog-content\") pod \"certified-operators-kj5p4\" (UID: \"e6b146d7-e267-4052-86bc-b1c8559976e3\") " pod="openshift-marketplace/certified-operators-kj5p4" Feb 26 22:55:07 crc kubenswrapper[4910]: I0226 22:55:07.007462 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f825t\" (UniqueName: \"kubernetes.io/projected/e6b146d7-e267-4052-86bc-b1c8559976e3-kube-api-access-f825t\") pod \"certified-operators-kj5p4\" (UID: \"e6b146d7-e267-4052-86bc-b1c8559976e3\") " pod="openshift-marketplace/certified-operators-kj5p4" Feb 26 22:55:07 crc kubenswrapper[4910]: I0226 22:55:07.007521 4910 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6b146d7-e267-4052-86bc-b1c8559976e3-catalog-content\") pod \"certified-operators-kj5p4\" (UID: \"e6b146d7-e267-4052-86bc-b1c8559976e3\") " pod="openshift-marketplace/certified-operators-kj5p4" Feb 26 22:55:07 crc kubenswrapper[4910]: I0226 22:55:07.007556 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6b146d7-e267-4052-86bc-b1c8559976e3-utilities\") pod \"certified-operators-kj5p4\" (UID: \"e6b146d7-e267-4052-86bc-b1c8559976e3\") " pod="openshift-marketplace/certified-operators-kj5p4" Feb 26 22:55:07 crc kubenswrapper[4910]: I0226 22:55:07.008048 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6b146d7-e267-4052-86bc-b1c8559976e3-utilities\") pod \"certified-operators-kj5p4\" (UID: \"e6b146d7-e267-4052-86bc-b1c8559976e3\") " pod="openshift-marketplace/certified-operators-kj5p4" Feb 26 22:55:07 crc kubenswrapper[4910]: I0226 22:55:07.008143 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6b146d7-e267-4052-86bc-b1c8559976e3-catalog-content\") pod \"certified-operators-kj5p4\" (UID: \"e6b146d7-e267-4052-86bc-b1c8559976e3\") " pod="openshift-marketplace/certified-operators-kj5p4" Feb 26 22:55:07 crc kubenswrapper[4910]: I0226 22:55:07.037323 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f825t\" (UniqueName: \"kubernetes.io/projected/e6b146d7-e267-4052-86bc-b1c8559976e3-kube-api-access-f825t\") pod \"certified-operators-kj5p4\" (UID: \"e6b146d7-e267-4052-86bc-b1c8559976e3\") " pod="openshift-marketplace/certified-operators-kj5p4" Feb 26 22:55:07 crc kubenswrapper[4910]: I0226 22:55:07.194494 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kj5p4" Feb 26 22:55:07 crc kubenswrapper[4910]: I0226 22:55:07.456382 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-khfkg"] Feb 26 22:55:07 crc kubenswrapper[4910]: I0226 22:55:07.458668 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-khfkg" Feb 26 22:55:07 crc kubenswrapper[4910]: I0226 22:55:07.472435 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-khfkg"] Feb 26 22:55:07 crc kubenswrapper[4910]: I0226 22:55:07.629681 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f8bb196-182a-46f0-a885-7a5d86c9e40f-catalog-content\") pod \"redhat-marketplace-khfkg\" (UID: \"3f8bb196-182a-46f0-a885-7a5d86c9e40f\") " pod="openshift-marketplace/redhat-marketplace-khfkg" Feb 26 22:55:07 crc kubenswrapper[4910]: I0226 22:55:07.629800 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f8bb196-182a-46f0-a885-7a5d86c9e40f-utilities\") pod \"redhat-marketplace-khfkg\" (UID: \"3f8bb196-182a-46f0-a885-7a5d86c9e40f\") " pod="openshift-marketplace/redhat-marketplace-khfkg" Feb 26 22:55:07 crc kubenswrapper[4910]: I0226 22:55:07.629853 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8c7m\" (UniqueName: \"kubernetes.io/projected/3f8bb196-182a-46f0-a885-7a5d86c9e40f-kube-api-access-t8c7m\") pod \"redhat-marketplace-khfkg\" (UID: \"3f8bb196-182a-46f0-a885-7a5d86c9e40f\") " pod="openshift-marketplace/redhat-marketplace-khfkg" Feb 26 22:55:07 crc kubenswrapper[4910]: I0226 22:55:07.733034 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-t8c7m\" (UniqueName: \"kubernetes.io/projected/3f8bb196-182a-46f0-a885-7a5d86c9e40f-kube-api-access-t8c7m\") pod \"redhat-marketplace-khfkg\" (UID: \"3f8bb196-182a-46f0-a885-7a5d86c9e40f\") " pod="openshift-marketplace/redhat-marketplace-khfkg" Feb 26 22:55:07 crc kubenswrapper[4910]: I0226 22:55:07.733134 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f8bb196-182a-46f0-a885-7a5d86c9e40f-catalog-content\") pod \"redhat-marketplace-khfkg\" (UID: \"3f8bb196-182a-46f0-a885-7a5d86c9e40f\") " pod="openshift-marketplace/redhat-marketplace-khfkg" Feb 26 22:55:07 crc kubenswrapper[4910]: I0226 22:55:07.733234 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f8bb196-182a-46f0-a885-7a5d86c9e40f-utilities\") pod \"redhat-marketplace-khfkg\" (UID: \"3f8bb196-182a-46f0-a885-7a5d86c9e40f\") " pod="openshift-marketplace/redhat-marketplace-khfkg" Feb 26 22:55:07 crc kubenswrapper[4910]: I0226 22:55:07.733972 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f8bb196-182a-46f0-a885-7a5d86c9e40f-utilities\") pod \"redhat-marketplace-khfkg\" (UID: \"3f8bb196-182a-46f0-a885-7a5d86c9e40f\") " pod="openshift-marketplace/redhat-marketplace-khfkg" Feb 26 22:55:07 crc kubenswrapper[4910]: I0226 22:55:07.734000 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f8bb196-182a-46f0-a885-7a5d86c9e40f-catalog-content\") pod \"redhat-marketplace-khfkg\" (UID: \"3f8bb196-182a-46f0-a885-7a5d86c9e40f\") " pod="openshift-marketplace/redhat-marketplace-khfkg" Feb 26 22:55:07 crc kubenswrapper[4910]: I0226 22:55:07.736019 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kj5p4"] Feb 26 22:55:07 crc 
kubenswrapper[4910]: I0226 22:55:07.775248 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8c7m\" (UniqueName: \"kubernetes.io/projected/3f8bb196-182a-46f0-a885-7a5d86c9e40f-kube-api-access-t8c7m\") pod \"redhat-marketplace-khfkg\" (UID: \"3f8bb196-182a-46f0-a885-7a5d86c9e40f\") " pod="openshift-marketplace/redhat-marketplace-khfkg" Feb 26 22:55:07 crc kubenswrapper[4910]: I0226 22:55:07.833097 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-khfkg" Feb 26 22:55:08 crc kubenswrapper[4910]: I0226 22:55:08.340063 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-khfkg"] Feb 26 22:55:08 crc kubenswrapper[4910]: I0226 22:55:08.768371 4910 generic.go:334] "Generic (PLEG): container finished" podID="e6b146d7-e267-4052-86bc-b1c8559976e3" containerID="bfd23d2cec65ad9c074eb0cb00e650908a8694fb0fd217223bdbea4ad0b13e3f" exitCode=0 Feb 26 22:55:08 crc kubenswrapper[4910]: I0226 22:55:08.768439 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj5p4" event={"ID":"e6b146d7-e267-4052-86bc-b1c8559976e3","Type":"ContainerDied","Data":"bfd23d2cec65ad9c074eb0cb00e650908a8694fb0fd217223bdbea4ad0b13e3f"} Feb 26 22:55:08 crc kubenswrapper[4910]: I0226 22:55:08.768566 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj5p4" event={"ID":"e6b146d7-e267-4052-86bc-b1c8559976e3","Type":"ContainerStarted","Data":"d3b45e549580b4e8d6ebe2d89235047a4d76a49569569719c7465938f6acfc58"} Feb 26 22:55:08 crc kubenswrapper[4910]: I0226 22:55:08.770596 4910 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 22:55:08 crc kubenswrapper[4910]: I0226 22:55:08.771339 4910 generic.go:334] "Generic (PLEG): container finished" podID="3f8bb196-182a-46f0-a885-7a5d86c9e40f" 
containerID="601488c3dc5366d9f426f29094d4fd1fa82c0cc5f357b35e1602de110e741198" exitCode=0 Feb 26 22:55:08 crc kubenswrapper[4910]: I0226 22:55:08.771390 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khfkg" event={"ID":"3f8bb196-182a-46f0-a885-7a5d86c9e40f","Type":"ContainerDied","Data":"601488c3dc5366d9f426f29094d4fd1fa82c0cc5f357b35e1602de110e741198"} Feb 26 22:55:08 crc kubenswrapper[4910]: I0226 22:55:08.771419 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khfkg" event={"ID":"3f8bb196-182a-46f0-a885-7a5d86c9e40f","Type":"ContainerStarted","Data":"097e19c3e05461ec7e29ba7369e24ac3b4d2aadbce657093b3af075366a13342"} Feb 26 22:55:09 crc kubenswrapper[4910]: I0226 22:55:09.782445 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khfkg" event={"ID":"3f8bb196-182a-46f0-a885-7a5d86c9e40f","Type":"ContainerStarted","Data":"de4dd7b52511443eedc9bac1e60a96e0b5524760174e4b704c543542b365a8d2"} Feb 26 22:55:09 crc kubenswrapper[4910]: I0226 22:55:09.784344 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj5p4" event={"ID":"e6b146d7-e267-4052-86bc-b1c8559976e3","Type":"ContainerStarted","Data":"2eb8516f29c819c74fd56d356c24e3607127d69fe5482d8407e6f282a0785c53"} Feb 26 22:55:10 crc kubenswrapper[4910]: I0226 22:55:10.796035 4910 generic.go:334] "Generic (PLEG): container finished" podID="3f8bb196-182a-46f0-a885-7a5d86c9e40f" containerID="de4dd7b52511443eedc9bac1e60a96e0b5524760174e4b704c543542b365a8d2" exitCode=0 Feb 26 22:55:10 crc kubenswrapper[4910]: I0226 22:55:10.796118 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khfkg" event={"ID":"3f8bb196-182a-46f0-a885-7a5d86c9e40f","Type":"ContainerDied","Data":"de4dd7b52511443eedc9bac1e60a96e0b5524760174e4b704c543542b365a8d2"} Feb 26 22:55:11 crc kubenswrapper[4910]: 
I0226 22:55:11.806801 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khfkg" event={"ID":"3f8bb196-182a-46f0-a885-7a5d86c9e40f","Type":"ContainerStarted","Data":"685ce445b4776f9f8e62d86e9513f857a648dd18b3fe58506bcc3c17795d6f24"} Feb 26 22:55:11 crc kubenswrapper[4910]: I0226 22:55:11.830520 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-khfkg" podStartSLOduration=2.283148388 podStartE2EDuration="4.830498018s" podCreationTimestamp="2026-02-26 22:55:07 +0000 UTC" firstStartedPulling="2026-02-26 22:55:08.772820892 +0000 UTC m=+3593.852311433" lastFinishedPulling="2026-02-26 22:55:11.320170522 +0000 UTC m=+3596.399661063" observedRunningTime="2026-02-26 22:55:11.827778124 +0000 UTC m=+3596.907268665" watchObservedRunningTime="2026-02-26 22:55:11.830498018 +0000 UTC m=+3596.909988589" Feb 26 22:55:12 crc kubenswrapper[4910]: I0226 22:55:12.819790 4910 generic.go:334] "Generic (PLEG): container finished" podID="e6b146d7-e267-4052-86bc-b1c8559976e3" containerID="2eb8516f29c819c74fd56d356c24e3607127d69fe5482d8407e6f282a0785c53" exitCode=0 Feb 26 22:55:12 crc kubenswrapper[4910]: I0226 22:55:12.819887 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj5p4" event={"ID":"e6b146d7-e267-4052-86bc-b1c8559976e3","Type":"ContainerDied","Data":"2eb8516f29c819c74fd56d356c24e3607127d69fe5482d8407e6f282a0785c53"} Feb 26 22:55:13 crc kubenswrapper[4910]: I0226 22:55:13.834421 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj5p4" event={"ID":"e6b146d7-e267-4052-86bc-b1c8559976e3","Type":"ContainerStarted","Data":"b17f1fcd82ca4423e80d9d19de8869723ed680cec27324a2baed96cd05b2eae3"} Feb 26 22:55:13 crc kubenswrapper[4910]: I0226 22:55:13.870857 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-kj5p4" podStartSLOduration=3.414661045 podStartE2EDuration="7.870838553s" podCreationTimestamp="2026-02-26 22:55:06 +0000 UTC" firstStartedPulling="2026-02-26 22:55:08.770325684 +0000 UTC m=+3593.849816225" lastFinishedPulling="2026-02-26 22:55:13.226503182 +0000 UTC m=+3598.305993733" observedRunningTime="2026-02-26 22:55:13.858918029 +0000 UTC m=+3598.938408570" watchObservedRunningTime="2026-02-26 22:55:13.870838553 +0000 UTC m=+3598.950329104" Feb 26 22:55:17 crc kubenswrapper[4910]: I0226 22:55:17.195059 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kj5p4" Feb 26 22:55:17 crc kubenswrapper[4910]: I0226 22:55:17.195674 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kj5p4" Feb 26 22:55:17 crc kubenswrapper[4910]: I0226 22:55:17.293135 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kj5p4" Feb 26 22:55:17 crc kubenswrapper[4910]: I0226 22:55:17.833598 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-khfkg" Feb 26 22:55:17 crc kubenswrapper[4910]: I0226 22:55:17.835051 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-khfkg" Feb 26 22:55:17 crc kubenswrapper[4910]: I0226 22:55:17.882361 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-khfkg" Feb 26 22:55:18 crc kubenswrapper[4910]: I0226 22:55:18.944285 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-khfkg" Feb 26 22:55:19 crc kubenswrapper[4910]: I0226 22:55:19.440384 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-khfkg"] 
Feb 26 22:55:20 crc kubenswrapper[4910]: I0226 22:55:20.902324 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-khfkg" podUID="3f8bb196-182a-46f0-a885-7a5d86c9e40f" containerName="registry-server" containerID="cri-o://685ce445b4776f9f8e62d86e9513f857a648dd18b3fe58506bcc3c17795d6f24" gracePeriod=2 Feb 26 22:55:21 crc kubenswrapper[4910]: I0226 22:55:21.685618 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-khfkg" Feb 26 22:55:21 crc kubenswrapper[4910]: I0226 22:55:21.844537 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f8bb196-182a-46f0-a885-7a5d86c9e40f-utilities\") pod \"3f8bb196-182a-46f0-a885-7a5d86c9e40f\" (UID: \"3f8bb196-182a-46f0-a885-7a5d86c9e40f\") " Feb 26 22:55:21 crc kubenswrapper[4910]: I0226 22:55:21.845397 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8c7m\" (UniqueName: \"kubernetes.io/projected/3f8bb196-182a-46f0-a885-7a5d86c9e40f-kube-api-access-t8c7m\") pod \"3f8bb196-182a-46f0-a885-7a5d86c9e40f\" (UID: \"3f8bb196-182a-46f0-a885-7a5d86c9e40f\") " Feb 26 22:55:21 crc kubenswrapper[4910]: I0226 22:55:21.845605 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f8bb196-182a-46f0-a885-7a5d86c9e40f-utilities" (OuterVolumeSpecName: "utilities") pod "3f8bb196-182a-46f0-a885-7a5d86c9e40f" (UID: "3f8bb196-182a-46f0-a885-7a5d86c9e40f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:55:21 crc kubenswrapper[4910]: I0226 22:55:21.845778 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f8bb196-182a-46f0-a885-7a5d86c9e40f-catalog-content\") pod \"3f8bb196-182a-46f0-a885-7a5d86c9e40f\" (UID: \"3f8bb196-182a-46f0-a885-7a5d86c9e40f\") " Feb 26 22:55:21 crc kubenswrapper[4910]: I0226 22:55:21.846647 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f8bb196-182a-46f0-a885-7a5d86c9e40f-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 22:55:21 crc kubenswrapper[4910]: I0226 22:55:21.851880 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f8bb196-182a-46f0-a885-7a5d86c9e40f-kube-api-access-t8c7m" (OuterVolumeSpecName: "kube-api-access-t8c7m") pod "3f8bb196-182a-46f0-a885-7a5d86c9e40f" (UID: "3f8bb196-182a-46f0-a885-7a5d86c9e40f"). InnerVolumeSpecName "kube-api-access-t8c7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:55:21 crc kubenswrapper[4910]: I0226 22:55:21.880066 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f8bb196-182a-46f0-a885-7a5d86c9e40f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f8bb196-182a-46f0-a885-7a5d86c9e40f" (UID: "3f8bb196-182a-46f0-a885-7a5d86c9e40f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:55:21 crc kubenswrapper[4910]: I0226 22:55:21.917396 4910 generic.go:334] "Generic (PLEG): container finished" podID="3f8bb196-182a-46f0-a885-7a5d86c9e40f" containerID="685ce445b4776f9f8e62d86e9513f857a648dd18b3fe58506bcc3c17795d6f24" exitCode=0 Feb 26 22:55:21 crc kubenswrapper[4910]: I0226 22:55:21.918595 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-khfkg" Feb 26 22:55:21 crc kubenswrapper[4910]: I0226 22:55:21.918855 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khfkg" event={"ID":"3f8bb196-182a-46f0-a885-7a5d86c9e40f","Type":"ContainerDied","Data":"685ce445b4776f9f8e62d86e9513f857a648dd18b3fe58506bcc3c17795d6f24"} Feb 26 22:55:21 crc kubenswrapper[4910]: I0226 22:55:21.918888 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khfkg" event={"ID":"3f8bb196-182a-46f0-a885-7a5d86c9e40f","Type":"ContainerDied","Data":"097e19c3e05461ec7e29ba7369e24ac3b4d2aadbce657093b3af075366a13342"} Feb 26 22:55:21 crc kubenswrapper[4910]: I0226 22:55:21.918905 4910 scope.go:117] "RemoveContainer" containerID="685ce445b4776f9f8e62d86e9513f857a648dd18b3fe58506bcc3c17795d6f24" Feb 26 22:55:21 crc kubenswrapper[4910]: I0226 22:55:21.950494 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8c7m\" (UniqueName: \"kubernetes.io/projected/3f8bb196-182a-46f0-a885-7a5d86c9e40f-kube-api-access-t8c7m\") on node \"crc\" DevicePath \"\"" Feb 26 22:55:21 crc kubenswrapper[4910]: I0226 22:55:21.950543 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f8bb196-182a-46f0-a885-7a5d86c9e40f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 22:55:21 crc kubenswrapper[4910]: I0226 22:55:21.950986 4910 scope.go:117] "RemoveContainer" containerID="de4dd7b52511443eedc9bac1e60a96e0b5524760174e4b704c543542b365a8d2" Feb 26 22:55:21 crc kubenswrapper[4910]: I0226 22:55:21.969513 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-khfkg"] Feb 26 22:55:21 crc kubenswrapper[4910]: I0226 22:55:21.987414 4910 scope.go:117] "RemoveContainer" containerID="601488c3dc5366d9f426f29094d4fd1fa82c0cc5f357b35e1602de110e741198" Feb 26 22:55:22 crc 
kubenswrapper[4910]: I0226 22:55:22.008326 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-khfkg"] Feb 26 22:55:22 crc kubenswrapper[4910]: I0226 22:55:22.076039 4910 scope.go:117] "RemoveContainer" containerID="685ce445b4776f9f8e62d86e9513f857a648dd18b3fe58506bcc3c17795d6f24" Feb 26 22:55:22 crc kubenswrapper[4910]: E0226 22:55:22.076664 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"685ce445b4776f9f8e62d86e9513f857a648dd18b3fe58506bcc3c17795d6f24\": container with ID starting with 685ce445b4776f9f8e62d86e9513f857a648dd18b3fe58506bcc3c17795d6f24 not found: ID does not exist" containerID="685ce445b4776f9f8e62d86e9513f857a648dd18b3fe58506bcc3c17795d6f24" Feb 26 22:55:22 crc kubenswrapper[4910]: I0226 22:55:22.076753 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"685ce445b4776f9f8e62d86e9513f857a648dd18b3fe58506bcc3c17795d6f24"} err="failed to get container status \"685ce445b4776f9f8e62d86e9513f857a648dd18b3fe58506bcc3c17795d6f24\": rpc error: code = NotFound desc = could not find container \"685ce445b4776f9f8e62d86e9513f857a648dd18b3fe58506bcc3c17795d6f24\": container with ID starting with 685ce445b4776f9f8e62d86e9513f857a648dd18b3fe58506bcc3c17795d6f24 not found: ID does not exist" Feb 26 22:55:22 crc kubenswrapper[4910]: I0226 22:55:22.076831 4910 scope.go:117] "RemoveContainer" containerID="de4dd7b52511443eedc9bac1e60a96e0b5524760174e4b704c543542b365a8d2" Feb 26 22:55:22 crc kubenswrapper[4910]: E0226 22:55:22.077894 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de4dd7b52511443eedc9bac1e60a96e0b5524760174e4b704c543542b365a8d2\": container with ID starting with de4dd7b52511443eedc9bac1e60a96e0b5524760174e4b704c543542b365a8d2 not found: ID does not exist" 
containerID="de4dd7b52511443eedc9bac1e60a96e0b5524760174e4b704c543542b365a8d2" Feb 26 22:55:22 crc kubenswrapper[4910]: I0226 22:55:22.077951 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de4dd7b52511443eedc9bac1e60a96e0b5524760174e4b704c543542b365a8d2"} err="failed to get container status \"de4dd7b52511443eedc9bac1e60a96e0b5524760174e4b704c543542b365a8d2\": rpc error: code = NotFound desc = could not find container \"de4dd7b52511443eedc9bac1e60a96e0b5524760174e4b704c543542b365a8d2\": container with ID starting with de4dd7b52511443eedc9bac1e60a96e0b5524760174e4b704c543542b365a8d2 not found: ID does not exist" Feb 26 22:55:22 crc kubenswrapper[4910]: I0226 22:55:22.077977 4910 scope.go:117] "RemoveContainer" containerID="601488c3dc5366d9f426f29094d4fd1fa82c0cc5f357b35e1602de110e741198" Feb 26 22:55:22 crc kubenswrapper[4910]: E0226 22:55:22.079138 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"601488c3dc5366d9f426f29094d4fd1fa82c0cc5f357b35e1602de110e741198\": container with ID starting with 601488c3dc5366d9f426f29094d4fd1fa82c0cc5f357b35e1602de110e741198 not found: ID does not exist" containerID="601488c3dc5366d9f426f29094d4fd1fa82c0cc5f357b35e1602de110e741198" Feb 26 22:55:22 crc kubenswrapper[4910]: I0226 22:55:22.079241 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"601488c3dc5366d9f426f29094d4fd1fa82c0cc5f357b35e1602de110e741198"} err="failed to get container status \"601488c3dc5366d9f426f29094d4fd1fa82c0cc5f357b35e1602de110e741198\": rpc error: code = NotFound desc = could not find container \"601488c3dc5366d9f426f29094d4fd1fa82c0cc5f357b35e1602de110e741198\": container with ID starting with 601488c3dc5366d9f426f29094d4fd1fa82c0cc5f357b35e1602de110e741198 not found: ID does not exist" Feb 26 22:55:23 crc kubenswrapper[4910]: I0226 22:55:23.917175 4910 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f8bb196-182a-46f0-a885-7a5d86c9e40f" path="/var/lib/kubelet/pods/3f8bb196-182a-46f0-a885-7a5d86c9e40f/volumes" Feb 26 22:55:27 crc kubenswrapper[4910]: I0226 22:55:27.276152 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kj5p4" Feb 26 22:55:27 crc kubenswrapper[4910]: I0226 22:55:27.343739 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kj5p4"] Feb 26 22:55:27 crc kubenswrapper[4910]: I0226 22:55:27.984864 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kj5p4" podUID="e6b146d7-e267-4052-86bc-b1c8559976e3" containerName="registry-server" containerID="cri-o://b17f1fcd82ca4423e80d9d19de8869723ed680cec27324a2baed96cd05b2eae3" gracePeriod=2 Feb 26 22:55:28 crc kubenswrapper[4910]: I0226 22:55:28.765192 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kj5p4" Feb 26 22:55:28 crc kubenswrapper[4910]: I0226 22:55:28.811536 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f825t\" (UniqueName: \"kubernetes.io/projected/e6b146d7-e267-4052-86bc-b1c8559976e3-kube-api-access-f825t\") pod \"e6b146d7-e267-4052-86bc-b1c8559976e3\" (UID: \"e6b146d7-e267-4052-86bc-b1c8559976e3\") " Feb 26 22:55:28 crc kubenswrapper[4910]: I0226 22:55:28.811942 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6b146d7-e267-4052-86bc-b1c8559976e3-catalog-content\") pod \"e6b146d7-e267-4052-86bc-b1c8559976e3\" (UID: \"e6b146d7-e267-4052-86bc-b1c8559976e3\") " Feb 26 22:55:28 crc kubenswrapper[4910]: I0226 22:55:28.812210 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6b146d7-e267-4052-86bc-b1c8559976e3-utilities\") pod \"e6b146d7-e267-4052-86bc-b1c8559976e3\" (UID: \"e6b146d7-e267-4052-86bc-b1c8559976e3\") " Feb 26 22:55:28 crc kubenswrapper[4910]: I0226 22:55:28.814978 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6b146d7-e267-4052-86bc-b1c8559976e3-utilities" (OuterVolumeSpecName: "utilities") pod "e6b146d7-e267-4052-86bc-b1c8559976e3" (UID: "e6b146d7-e267-4052-86bc-b1c8559976e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:55:28 crc kubenswrapper[4910]: I0226 22:55:28.832078 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6b146d7-e267-4052-86bc-b1c8559976e3-kube-api-access-f825t" (OuterVolumeSpecName: "kube-api-access-f825t") pod "e6b146d7-e267-4052-86bc-b1c8559976e3" (UID: "e6b146d7-e267-4052-86bc-b1c8559976e3"). InnerVolumeSpecName "kube-api-access-f825t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:55:28 crc kubenswrapper[4910]: I0226 22:55:28.883605 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6b146d7-e267-4052-86bc-b1c8559976e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6b146d7-e267-4052-86bc-b1c8559976e3" (UID: "e6b146d7-e267-4052-86bc-b1c8559976e3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:55:28 crc kubenswrapper[4910]: I0226 22:55:28.915447 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f825t\" (UniqueName: \"kubernetes.io/projected/e6b146d7-e267-4052-86bc-b1c8559976e3-kube-api-access-f825t\") on node \"crc\" DevicePath \"\"" Feb 26 22:55:28 crc kubenswrapper[4910]: I0226 22:55:28.915477 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6b146d7-e267-4052-86bc-b1c8559976e3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 22:55:28 crc kubenswrapper[4910]: I0226 22:55:28.915486 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6b146d7-e267-4052-86bc-b1c8559976e3-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 22:55:28 crc kubenswrapper[4910]: I0226 22:55:28.995994 4910 generic.go:334] "Generic (PLEG): container finished" podID="e6b146d7-e267-4052-86bc-b1c8559976e3" containerID="b17f1fcd82ca4423e80d9d19de8869723ed680cec27324a2baed96cd05b2eae3" exitCode=0 Feb 26 22:55:28 crc kubenswrapper[4910]: I0226 22:55:28.996109 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kj5p4" Feb 26 22:55:28 crc kubenswrapper[4910]: I0226 22:55:28.996334 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj5p4" event={"ID":"e6b146d7-e267-4052-86bc-b1c8559976e3","Type":"ContainerDied","Data":"b17f1fcd82ca4423e80d9d19de8869723ed680cec27324a2baed96cd05b2eae3"} Feb 26 22:55:28 crc kubenswrapper[4910]: I0226 22:55:28.996422 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj5p4" event={"ID":"e6b146d7-e267-4052-86bc-b1c8559976e3","Type":"ContainerDied","Data":"d3b45e549580b4e8d6ebe2d89235047a4d76a49569569719c7465938f6acfc58"} Feb 26 22:55:28 crc kubenswrapper[4910]: I0226 22:55:28.996497 4910 scope.go:117] "RemoveContainer" containerID="b17f1fcd82ca4423e80d9d19de8869723ed680cec27324a2baed96cd05b2eae3" Feb 26 22:55:29 crc kubenswrapper[4910]: I0226 22:55:29.016907 4910 scope.go:117] "RemoveContainer" containerID="2eb8516f29c819c74fd56d356c24e3607127d69fe5482d8407e6f282a0785c53" Feb 26 22:55:29 crc kubenswrapper[4910]: I0226 22:55:29.044547 4910 scope.go:117] "RemoveContainer" containerID="bfd23d2cec65ad9c074eb0cb00e650908a8694fb0fd217223bdbea4ad0b13e3f" Feb 26 22:55:29 crc kubenswrapper[4910]: I0226 22:55:29.107125 4910 scope.go:117] "RemoveContainer" containerID="b17f1fcd82ca4423e80d9d19de8869723ed680cec27324a2baed96cd05b2eae3" Feb 26 22:55:29 crc kubenswrapper[4910]: E0226 22:55:29.110277 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b17f1fcd82ca4423e80d9d19de8869723ed680cec27324a2baed96cd05b2eae3\": container with ID starting with b17f1fcd82ca4423e80d9d19de8869723ed680cec27324a2baed96cd05b2eae3 not found: ID does not exist" containerID="b17f1fcd82ca4423e80d9d19de8869723ed680cec27324a2baed96cd05b2eae3" Feb 26 22:55:29 crc kubenswrapper[4910]: I0226 22:55:29.110332 4910 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b17f1fcd82ca4423e80d9d19de8869723ed680cec27324a2baed96cd05b2eae3"} err="failed to get container status \"b17f1fcd82ca4423e80d9d19de8869723ed680cec27324a2baed96cd05b2eae3\": rpc error: code = NotFound desc = could not find container \"b17f1fcd82ca4423e80d9d19de8869723ed680cec27324a2baed96cd05b2eae3\": container with ID starting with b17f1fcd82ca4423e80d9d19de8869723ed680cec27324a2baed96cd05b2eae3 not found: ID does not exist" Feb 26 22:55:29 crc kubenswrapper[4910]: I0226 22:55:29.110362 4910 scope.go:117] "RemoveContainer" containerID="2eb8516f29c819c74fd56d356c24e3607127d69fe5482d8407e6f282a0785c53" Feb 26 22:55:29 crc kubenswrapper[4910]: E0226 22:55:29.111603 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2eb8516f29c819c74fd56d356c24e3607127d69fe5482d8407e6f282a0785c53\": container with ID starting with 2eb8516f29c819c74fd56d356c24e3607127d69fe5482d8407e6f282a0785c53 not found: ID does not exist" containerID="2eb8516f29c819c74fd56d356c24e3607127d69fe5482d8407e6f282a0785c53" Feb 26 22:55:29 crc kubenswrapper[4910]: I0226 22:55:29.111779 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb8516f29c819c74fd56d356c24e3607127d69fe5482d8407e6f282a0785c53"} err="failed to get container status \"2eb8516f29c819c74fd56d356c24e3607127d69fe5482d8407e6f282a0785c53\": rpc error: code = NotFound desc = could not find container \"2eb8516f29c819c74fd56d356c24e3607127d69fe5482d8407e6f282a0785c53\": container with ID starting with 2eb8516f29c819c74fd56d356c24e3607127d69fe5482d8407e6f282a0785c53 not found: ID does not exist" Feb 26 22:55:29 crc kubenswrapper[4910]: I0226 22:55:29.111939 4910 scope.go:117] "RemoveContainer" containerID="bfd23d2cec65ad9c074eb0cb00e650908a8694fb0fd217223bdbea4ad0b13e3f" Feb 26 22:55:29 crc kubenswrapper[4910]: I0226 22:55:29.114586 4910 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kj5p4"] Feb 26 22:55:29 crc kubenswrapper[4910]: E0226 22:55:29.117255 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfd23d2cec65ad9c074eb0cb00e650908a8694fb0fd217223bdbea4ad0b13e3f\": container with ID starting with bfd23d2cec65ad9c074eb0cb00e650908a8694fb0fd217223bdbea4ad0b13e3f not found: ID does not exist" containerID="bfd23d2cec65ad9c074eb0cb00e650908a8694fb0fd217223bdbea4ad0b13e3f" Feb 26 22:55:29 crc kubenswrapper[4910]: I0226 22:55:29.117298 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfd23d2cec65ad9c074eb0cb00e650908a8694fb0fd217223bdbea4ad0b13e3f"} err="failed to get container status \"bfd23d2cec65ad9c074eb0cb00e650908a8694fb0fd217223bdbea4ad0b13e3f\": rpc error: code = NotFound desc = could not find container \"bfd23d2cec65ad9c074eb0cb00e650908a8694fb0fd217223bdbea4ad0b13e3f\": container with ID starting with bfd23d2cec65ad9c074eb0cb00e650908a8694fb0fd217223bdbea4ad0b13e3f not found: ID does not exist" Feb 26 22:55:29 crc kubenswrapper[4910]: I0226 22:55:29.130524 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kj5p4"] Feb 26 22:55:29 crc kubenswrapper[4910]: I0226 22:55:29.914320 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6b146d7-e267-4052-86bc-b1c8559976e3" path="/var/lib/kubelet/pods/e6b146d7-e267-4052-86bc-b1c8559976e3/volumes" Feb 26 22:56:00 crc kubenswrapper[4910]: I0226 22:56:00.173848 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535776-7872p"] Feb 26 22:56:00 crc kubenswrapper[4910]: E0226 22:56:00.175145 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b146d7-e267-4052-86bc-b1c8559976e3" containerName="registry-server" Feb 26 22:56:00 crc kubenswrapper[4910]: I0226 
22:56:00.175190 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b146d7-e267-4052-86bc-b1c8559976e3" containerName="registry-server" Feb 26 22:56:00 crc kubenswrapper[4910]: E0226 22:56:00.175208 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f8bb196-182a-46f0-a885-7a5d86c9e40f" containerName="extract-utilities" Feb 26 22:56:00 crc kubenswrapper[4910]: I0226 22:56:00.175224 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f8bb196-182a-46f0-a885-7a5d86c9e40f" containerName="extract-utilities" Feb 26 22:56:00 crc kubenswrapper[4910]: E0226 22:56:00.175279 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b146d7-e267-4052-86bc-b1c8559976e3" containerName="extract-utilities" Feb 26 22:56:00 crc kubenswrapper[4910]: I0226 22:56:00.175293 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b146d7-e267-4052-86bc-b1c8559976e3" containerName="extract-utilities" Feb 26 22:56:00 crc kubenswrapper[4910]: E0226 22:56:00.175314 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f8bb196-182a-46f0-a885-7a5d86c9e40f" containerName="registry-server" Feb 26 22:56:00 crc kubenswrapper[4910]: I0226 22:56:00.175326 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f8bb196-182a-46f0-a885-7a5d86c9e40f" containerName="registry-server" Feb 26 22:56:00 crc kubenswrapper[4910]: E0226 22:56:00.175350 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b146d7-e267-4052-86bc-b1c8559976e3" containerName="extract-content" Feb 26 22:56:00 crc kubenswrapper[4910]: I0226 22:56:00.175362 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b146d7-e267-4052-86bc-b1c8559976e3" containerName="extract-content" Feb 26 22:56:00 crc kubenswrapper[4910]: E0226 22:56:00.175388 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f8bb196-182a-46f0-a885-7a5d86c9e40f" containerName="extract-content" Feb 26 22:56:00 crc kubenswrapper[4910]: I0226 
22:56:00.175400 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f8bb196-182a-46f0-a885-7a5d86c9e40f" containerName="extract-content" Feb 26 22:56:00 crc kubenswrapper[4910]: I0226 22:56:00.175771 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f8bb196-182a-46f0-a885-7a5d86c9e40f" containerName="registry-server" Feb 26 22:56:00 crc kubenswrapper[4910]: I0226 22:56:00.175821 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6b146d7-e267-4052-86bc-b1c8559976e3" containerName="registry-server" Feb 26 22:56:00 crc kubenswrapper[4910]: I0226 22:56:00.177076 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535776-7872p" Feb 26 22:56:00 crc kubenswrapper[4910]: I0226 22:56:00.186639 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 22:56:00 crc kubenswrapper[4910]: I0226 22:56:00.186779 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 22:56:00 crc kubenswrapper[4910]: I0226 22:56:00.186946 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 22:56:00 crc kubenswrapper[4910]: I0226 22:56:00.217649 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535776-7872p"] Feb 26 22:56:00 crc kubenswrapper[4910]: I0226 22:56:00.239986 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs5rr\" (UniqueName: \"kubernetes.io/projected/6aea1841-f34c-4d2a-8bbf-b1439a9ac745-kube-api-access-vs5rr\") pod \"auto-csr-approver-29535776-7872p\" (UID: \"6aea1841-f34c-4d2a-8bbf-b1439a9ac745\") " pod="openshift-infra/auto-csr-approver-29535776-7872p" Feb 26 22:56:00 crc kubenswrapper[4910]: I0226 22:56:00.342235 4910 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vs5rr\" (UniqueName: \"kubernetes.io/projected/6aea1841-f34c-4d2a-8bbf-b1439a9ac745-kube-api-access-vs5rr\") pod \"auto-csr-approver-29535776-7872p\" (UID: \"6aea1841-f34c-4d2a-8bbf-b1439a9ac745\") " pod="openshift-infra/auto-csr-approver-29535776-7872p" Feb 26 22:56:00 crc kubenswrapper[4910]: I0226 22:56:00.360807 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs5rr\" (UniqueName: \"kubernetes.io/projected/6aea1841-f34c-4d2a-8bbf-b1439a9ac745-kube-api-access-vs5rr\") pod \"auto-csr-approver-29535776-7872p\" (UID: \"6aea1841-f34c-4d2a-8bbf-b1439a9ac745\") " pod="openshift-infra/auto-csr-approver-29535776-7872p" Feb 26 22:56:00 crc kubenswrapper[4910]: I0226 22:56:00.515525 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535776-7872p" Feb 26 22:56:01 crc kubenswrapper[4910]: I0226 22:56:01.002029 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535776-7872p"] Feb 26 22:56:01 crc kubenswrapper[4910]: I0226 22:56:01.370206 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535776-7872p" event={"ID":"6aea1841-f34c-4d2a-8bbf-b1439a9ac745","Type":"ContainerStarted","Data":"8484b03fcc1731786d9280a5407514fb2946779cd19f5276737b26b4c4f94286"} Feb 26 22:56:03 crc kubenswrapper[4910]: I0226 22:56:03.392807 4910 generic.go:334] "Generic (PLEG): container finished" podID="6aea1841-f34c-4d2a-8bbf-b1439a9ac745" containerID="eeae3d7f4e60d8b5981e05006e8a7218251db5212f4b19ee1d4dfc3fc2df0465" exitCode=0 Feb 26 22:56:03 crc kubenswrapper[4910]: I0226 22:56:03.394259 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535776-7872p" 
event={"ID":"6aea1841-f34c-4d2a-8bbf-b1439a9ac745","Type":"ContainerDied","Data":"eeae3d7f4e60d8b5981e05006e8a7218251db5212f4b19ee1d4dfc3fc2df0465"} Feb 26 22:56:04 crc kubenswrapper[4910]: I0226 22:56:04.941193 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535776-7872p" Feb 26 22:56:04 crc kubenswrapper[4910]: I0226 22:56:04.949379 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs5rr\" (UniqueName: \"kubernetes.io/projected/6aea1841-f34c-4d2a-8bbf-b1439a9ac745-kube-api-access-vs5rr\") pod \"6aea1841-f34c-4d2a-8bbf-b1439a9ac745\" (UID: \"6aea1841-f34c-4d2a-8bbf-b1439a9ac745\") " Feb 26 22:56:04 crc kubenswrapper[4910]: I0226 22:56:04.955874 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aea1841-f34c-4d2a-8bbf-b1439a9ac745-kube-api-access-vs5rr" (OuterVolumeSpecName: "kube-api-access-vs5rr") pod "6aea1841-f34c-4d2a-8bbf-b1439a9ac745" (UID: "6aea1841-f34c-4d2a-8bbf-b1439a9ac745"). InnerVolumeSpecName "kube-api-access-vs5rr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:56:05 crc kubenswrapper[4910]: I0226 22:56:05.051715 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vs5rr\" (UniqueName: \"kubernetes.io/projected/6aea1841-f34c-4d2a-8bbf-b1439a9ac745-kube-api-access-vs5rr\") on node \"crc\" DevicePath \"\"" Feb 26 22:56:05 crc kubenswrapper[4910]: I0226 22:56:05.417013 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535776-7872p" event={"ID":"6aea1841-f34c-4d2a-8bbf-b1439a9ac745","Type":"ContainerDied","Data":"8484b03fcc1731786d9280a5407514fb2946779cd19f5276737b26b4c4f94286"} Feb 26 22:56:05 crc kubenswrapper[4910]: I0226 22:56:05.417048 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8484b03fcc1731786d9280a5407514fb2946779cd19f5276737b26b4c4f94286" Feb 26 22:56:05 crc kubenswrapper[4910]: I0226 22:56:05.417092 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535776-7872p" Feb 26 22:56:06 crc kubenswrapper[4910]: I0226 22:56:06.013076 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535770-4bwth"] Feb 26 22:56:06 crc kubenswrapper[4910]: I0226 22:56:06.022254 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535770-4bwth"] Feb 26 22:56:07 crc kubenswrapper[4910]: I0226 22:56:07.915083 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b064433-c619-497a-b40b-fff570eb1331" path="/var/lib/kubelet/pods/7b064433-c619-497a-b40b-fff570eb1331/volumes" Feb 26 22:56:25 crc kubenswrapper[4910]: I0226 22:56:25.727470 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 26 22:56:25 crc kubenswrapper[4910]: I0226 22:56:25.728038 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:56:32 crc kubenswrapper[4910]: I0226 22:56:32.722590 4910 generic.go:334] "Generic (PLEG): container finished" podID="221c5dbb-3674-4268-b698-109e2b97d374" containerID="4ac6fe31e94da904114c1615b0efc20a87d0d1e5294c0d9959f06000b410fa86" exitCode=0 Feb 26 22:56:32 crc kubenswrapper[4910]: I0226 22:56:32.722691 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"221c5dbb-3674-4268-b698-109e2b97d374","Type":"ContainerDied","Data":"4ac6fe31e94da904114c1615b0efc20a87d0d1e5294c0d9959f06000b410fa86"} Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.339687 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.450493 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/221c5dbb-3674-4268-b698-109e2b97d374-config-data\") pod \"221c5dbb-3674-4268-b698-109e2b97d374\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.450768 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/221c5dbb-3674-4268-b698-109e2b97d374-test-operator-ephemeral-temporary\") pod \"221c5dbb-3674-4268-b698-109e2b97d374\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.450827 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/221c5dbb-3674-4268-b698-109e2b97d374-test-operator-ephemeral-workdir\") pod \"221c5dbb-3674-4268-b698-109e2b97d374\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.450860 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/221c5dbb-3674-4268-b698-109e2b97d374-ssh-key\") pod \"221c5dbb-3674-4268-b698-109e2b97d374\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.451204 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/221c5dbb-3674-4268-b698-109e2b97d374-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "221c5dbb-3674-4268-b698-109e2b97d374" (UID: "221c5dbb-3674-4268-b698-109e2b97d374"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.451355 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/221c5dbb-3674-4268-b698-109e2b97d374-config-data" (OuterVolumeSpecName: "config-data") pod "221c5dbb-3674-4268-b698-109e2b97d374" (UID: "221c5dbb-3674-4268-b698-109e2b97d374"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.456256 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"221c5dbb-3674-4268-b698-109e2b97d374\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.456384 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/221c5dbb-3674-4268-b698-109e2b97d374-openstack-config\") pod \"221c5dbb-3674-4268-b698-109e2b97d374\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.456558 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2v8p\" (UniqueName: \"kubernetes.io/projected/221c5dbb-3674-4268-b698-109e2b97d374-kube-api-access-k2v8p\") pod \"221c5dbb-3674-4268-b698-109e2b97d374\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.456594 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/221c5dbb-3674-4268-b698-109e2b97d374-openstack-config-secret\") pod \"221c5dbb-3674-4268-b698-109e2b97d374\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.456721 4910 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/221c5dbb-3674-4268-b698-109e2b97d374-ca-certs\") pod \"221c5dbb-3674-4268-b698-109e2b97d374\" (UID: \"221c5dbb-3674-4268-b698-109e2b97d374\") " Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.457677 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/221c5dbb-3674-4268-b698-109e2b97d374-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.457692 4910 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/221c5dbb-3674-4268-b698-109e2b97d374-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.468049 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "test-operator-logs") pod "221c5dbb-3674-4268-b698-109e2b97d374" (UID: "221c5dbb-3674-4268-b698-109e2b97d374"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.468541 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/221c5dbb-3674-4268-b698-109e2b97d374-kube-api-access-k2v8p" (OuterVolumeSpecName: "kube-api-access-k2v8p") pod "221c5dbb-3674-4268-b698-109e2b97d374" (UID: "221c5dbb-3674-4268-b698-109e2b97d374"). InnerVolumeSpecName "kube-api-access-k2v8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.482352 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/221c5dbb-3674-4268-b698-109e2b97d374-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "221c5dbb-3674-4268-b698-109e2b97d374" (UID: "221c5dbb-3674-4268-b698-109e2b97d374"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.482639 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/221c5dbb-3674-4268-b698-109e2b97d374-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "221c5dbb-3674-4268-b698-109e2b97d374" (UID: "221c5dbb-3674-4268-b698-109e2b97d374"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.495587 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/221c5dbb-3674-4268-b698-109e2b97d374-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "221c5dbb-3674-4268-b698-109e2b97d374" (UID: "221c5dbb-3674-4268-b698-109e2b97d374"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.524514 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/221c5dbb-3674-4268-b698-109e2b97d374-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "221c5dbb-3674-4268-b698-109e2b97d374" (UID: "221c5dbb-3674-4268-b698-109e2b97d374"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.559496 4910 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/221c5dbb-3674-4268-b698-109e2b97d374-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.559520 4910 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/221c5dbb-3674-4268-b698-109e2b97d374-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.559550 4910 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.559560 4910 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/221c5dbb-3674-4268-b698-109e2b97d374-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.559571 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2v8p\" (UniqueName: \"kubernetes.io/projected/221c5dbb-3674-4268-b698-109e2b97d374-kube-api-access-k2v8p\") on node \"crc\" DevicePath \"\"" Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.559580 4910 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/221c5dbb-3674-4268-b698-109e2b97d374-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.581543 4910 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.661813 4910 reconciler_common.go:293] "Volume detached for 
volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.755709 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"221c5dbb-3674-4268-b698-109e2b97d374","Type":"ContainerDied","Data":"a217e2a359b53330a74a43b2e5787986de8466319150172681d67b701bfba477"} Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.755764 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a217e2a359b53330a74a43b2e5787986de8466319150172681d67b701bfba477" Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.755812 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.884025 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/221c5dbb-3674-4268-b698-109e2b97d374-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "221c5dbb-3674-4268-b698-109e2b97d374" (UID: "221c5dbb-3674-4268-b698-109e2b97d374"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 22:56:34 crc kubenswrapper[4910]: I0226 22:56:34.968468 4910 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/221c5dbb-3674-4268-b698-109e2b97d374-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 26 22:56:40 crc kubenswrapper[4910]: I0226 22:56:40.448662 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 26 22:56:40 crc kubenswrapper[4910]: E0226 22:56:40.449493 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="221c5dbb-3674-4268-b698-109e2b97d374" containerName="tempest-tests-tempest-tests-runner" Feb 26 22:56:40 crc kubenswrapper[4910]: I0226 22:56:40.449505 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="221c5dbb-3674-4268-b698-109e2b97d374" containerName="tempest-tests-tempest-tests-runner" Feb 26 22:56:40 crc kubenswrapper[4910]: E0226 22:56:40.449525 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aea1841-f34c-4d2a-8bbf-b1439a9ac745" containerName="oc" Feb 26 22:56:40 crc kubenswrapper[4910]: I0226 22:56:40.449531 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aea1841-f34c-4d2a-8bbf-b1439a9ac745" containerName="oc" Feb 26 22:56:40 crc kubenswrapper[4910]: I0226 22:56:40.449721 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aea1841-f34c-4d2a-8bbf-b1439a9ac745" containerName="oc" Feb 26 22:56:40 crc kubenswrapper[4910]: I0226 22:56:40.449732 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="221c5dbb-3674-4268-b698-109e2b97d374" containerName="tempest-tests-tempest-tests-runner" Feb 26 22:56:40 crc kubenswrapper[4910]: I0226 22:56:40.450862 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 22:56:40 crc kubenswrapper[4910]: I0226 22:56:40.454023 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-q4fk8" Feb 26 22:56:40 crc kubenswrapper[4910]: I0226 22:56:40.463684 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 26 22:56:40 crc kubenswrapper[4910]: I0226 22:56:40.482860 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b70555d9-49c4-455f-8991-996047c0c0a2\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 22:56:40 crc kubenswrapper[4910]: I0226 22:56:40.483052 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hspkr\" (UniqueName: \"kubernetes.io/projected/b70555d9-49c4-455f-8991-996047c0c0a2-kube-api-access-hspkr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b70555d9-49c4-455f-8991-996047c0c0a2\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 22:56:40 crc kubenswrapper[4910]: I0226 22:56:40.568722 4910 scope.go:117] "RemoveContainer" containerID="eb19e770bad290d41cc0b95dec8a311e76eb8fab9d99fee311ed7dd68c84de04" Feb 26 22:56:40 crc kubenswrapper[4910]: I0226 22:56:40.584476 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b70555d9-49c4-455f-8991-996047c0c0a2\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 22:56:40 crc kubenswrapper[4910]: I0226 
22:56:40.584579 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hspkr\" (UniqueName: \"kubernetes.io/projected/b70555d9-49c4-455f-8991-996047c0c0a2-kube-api-access-hspkr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b70555d9-49c4-455f-8991-996047c0c0a2\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 22:56:40 crc kubenswrapper[4910]: I0226 22:56:40.585509 4910 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b70555d9-49c4-455f-8991-996047c0c0a2\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 22:56:40 crc kubenswrapper[4910]: I0226 22:56:40.602594 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hspkr\" (UniqueName: \"kubernetes.io/projected/b70555d9-49c4-455f-8991-996047c0c0a2-kube-api-access-hspkr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b70555d9-49c4-455f-8991-996047c0c0a2\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 22:56:40 crc kubenswrapper[4910]: I0226 22:56:40.619763 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b70555d9-49c4-455f-8991-996047c0c0a2\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 22:56:40 crc kubenswrapper[4910]: I0226 22:56:40.785939 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 22:56:41 crc kubenswrapper[4910]: I0226 22:56:41.356711 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 26 22:56:41 crc kubenswrapper[4910]: I0226 22:56:41.843718 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b70555d9-49c4-455f-8991-996047c0c0a2","Type":"ContainerStarted","Data":"2217d376e563bc6dc191288da792e92a0ff781def79ff3bac178bc136aae9f5b"} Feb 26 22:56:42 crc kubenswrapper[4910]: I0226 22:56:42.862723 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b70555d9-49c4-455f-8991-996047c0c0a2","Type":"ContainerStarted","Data":"0d91dec2bc608e9318439bb8470afe01f37ad373eb2cc61cf88fa14f68415f7e"} Feb 26 22:56:42 crc kubenswrapper[4910]: I0226 22:56:42.901636 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.054792027 podStartE2EDuration="2.901609259s" podCreationTimestamp="2026-02-26 22:56:40 +0000 UTC" firstStartedPulling="2026-02-26 22:56:41.365182052 +0000 UTC m=+3686.444672593" lastFinishedPulling="2026-02-26 22:56:42.211999274 +0000 UTC m=+3687.291489825" observedRunningTime="2026-02-26 22:56:42.88257173 +0000 UTC m=+3687.962062331" watchObservedRunningTime="2026-02-26 22:56:42.901609259 +0000 UTC m=+3687.981099840" Feb 26 22:56:55 crc kubenswrapper[4910]: I0226 22:56:55.727026 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:56:55 crc 
kubenswrapper[4910]: I0226 22:56:55.727732 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:57:25 crc kubenswrapper[4910]: I0226 22:57:25.727216 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:57:25 crc kubenswrapper[4910]: I0226 22:57:25.727971 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 22:57:25 crc kubenswrapper[4910]: I0226 22:57:25.728048 4910 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" Feb 26 22:57:25 crc kubenswrapper[4910]: I0226 22:57:25.729464 4910 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"71354e58e453bb98bc8e73f6f274dd8d77953aba228bbccd64f152573ebcdcb1"} pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 22:57:25 crc kubenswrapper[4910]: I0226 22:57:25.729851 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" 
podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" containerID="cri-o://71354e58e453bb98bc8e73f6f274dd8d77953aba228bbccd64f152573ebcdcb1" gracePeriod=600 Feb 26 22:57:25 crc kubenswrapper[4910]: I0226 22:57:25.949538 4910 generic.go:334] "Generic (PLEG): container finished" podID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerID="71354e58e453bb98bc8e73f6f274dd8d77953aba228bbccd64f152573ebcdcb1" exitCode=0 Feb 26 22:57:25 crc kubenswrapper[4910]: I0226 22:57:25.949852 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerDied","Data":"71354e58e453bb98bc8e73f6f274dd8d77953aba228bbccd64f152573ebcdcb1"} Feb 26 22:57:25 crc kubenswrapper[4910]: I0226 22:57:25.950070 4910 scope.go:117] "RemoveContainer" containerID="ac0e56afda7716139c18fd1ef79b83fd243b640f632b3d6de76c442c14fa4526" Feb 26 22:57:26 crc kubenswrapper[4910]: I0226 22:57:26.861389 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bjtlp/must-gather-5tjg4"] Feb 26 22:57:26 crc kubenswrapper[4910]: I0226 22:57:26.864548 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bjtlp/must-gather-5tjg4" Feb 26 22:57:26 crc kubenswrapper[4910]: I0226 22:57:26.866354 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-bjtlp"/"default-dockercfg-vhxmj" Feb 26 22:57:26 crc kubenswrapper[4910]: I0226 22:57:26.870802 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bjtlp"/"kube-root-ca.crt" Feb 26 22:57:26 crc kubenswrapper[4910]: I0226 22:57:26.871984 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bjtlp"/"openshift-service-ca.crt" Feb 26 22:57:26 crc kubenswrapper[4910]: I0226 22:57:26.877184 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bjtlp/must-gather-5tjg4"] Feb 26 22:57:26 crc kubenswrapper[4910]: I0226 22:57:26.960698 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerStarted","Data":"74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f"} Feb 26 22:57:27 crc kubenswrapper[4910]: I0226 22:57:27.033152 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/51be9c06-a324-4431-8114-2a8d2cc41902-must-gather-output\") pod \"must-gather-5tjg4\" (UID: \"51be9c06-a324-4431-8114-2a8d2cc41902\") " pod="openshift-must-gather-bjtlp/must-gather-5tjg4" Feb 26 22:57:27 crc kubenswrapper[4910]: I0226 22:57:27.033206 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds72n\" (UniqueName: \"kubernetes.io/projected/51be9c06-a324-4431-8114-2a8d2cc41902-kube-api-access-ds72n\") pod \"must-gather-5tjg4\" (UID: \"51be9c06-a324-4431-8114-2a8d2cc41902\") " pod="openshift-must-gather-bjtlp/must-gather-5tjg4" Feb 26 22:57:27 crc 
kubenswrapper[4910]: I0226 22:57:27.135653 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/51be9c06-a324-4431-8114-2a8d2cc41902-must-gather-output\") pod \"must-gather-5tjg4\" (UID: \"51be9c06-a324-4431-8114-2a8d2cc41902\") " pod="openshift-must-gather-bjtlp/must-gather-5tjg4" Feb 26 22:57:27 crc kubenswrapper[4910]: I0226 22:57:27.136254 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds72n\" (UniqueName: \"kubernetes.io/projected/51be9c06-a324-4431-8114-2a8d2cc41902-kube-api-access-ds72n\") pod \"must-gather-5tjg4\" (UID: \"51be9c06-a324-4431-8114-2a8d2cc41902\") " pod="openshift-must-gather-bjtlp/must-gather-5tjg4" Feb 26 22:57:27 crc kubenswrapper[4910]: I0226 22:57:27.136212 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/51be9c06-a324-4431-8114-2a8d2cc41902-must-gather-output\") pod \"must-gather-5tjg4\" (UID: \"51be9c06-a324-4431-8114-2a8d2cc41902\") " pod="openshift-must-gather-bjtlp/must-gather-5tjg4" Feb 26 22:57:27 crc kubenswrapper[4910]: I0226 22:57:27.153057 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds72n\" (UniqueName: \"kubernetes.io/projected/51be9c06-a324-4431-8114-2a8d2cc41902-kube-api-access-ds72n\") pod \"must-gather-5tjg4\" (UID: \"51be9c06-a324-4431-8114-2a8d2cc41902\") " pod="openshift-must-gather-bjtlp/must-gather-5tjg4" Feb 26 22:57:27 crc kubenswrapper[4910]: I0226 22:57:27.181937 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bjtlp/must-gather-5tjg4" Feb 26 22:57:27 crc kubenswrapper[4910]: I0226 22:57:27.742443 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bjtlp/must-gather-5tjg4"] Feb 26 22:57:27 crc kubenswrapper[4910]: I0226 22:57:27.969589 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bjtlp/must-gather-5tjg4" event={"ID":"51be9c06-a324-4431-8114-2a8d2cc41902","Type":"ContainerStarted","Data":"5f209d69920bef119550bfe977ceeae9f27e8a7abd06a17b96f33601e6d2d55a"} Feb 26 22:57:35 crc kubenswrapper[4910]: I0226 22:57:35.045388 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bjtlp/must-gather-5tjg4" event={"ID":"51be9c06-a324-4431-8114-2a8d2cc41902","Type":"ContainerStarted","Data":"5a5b6b12da77d2d986803da94b4b5e36b7ee7616d2d862210420009994894c2f"} Feb 26 22:57:36 crc kubenswrapper[4910]: I0226 22:57:36.058283 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bjtlp/must-gather-5tjg4" event={"ID":"51be9c06-a324-4431-8114-2a8d2cc41902","Type":"ContainerStarted","Data":"36439b590c190c594636bb59d49ea4324da2e4cc1fa467189b0555803ac81dfc"} Feb 26 22:57:36 crc kubenswrapper[4910]: I0226 22:57:36.084481 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bjtlp/must-gather-5tjg4" podStartSLOduration=3.180649008 podStartE2EDuration="10.084460963s" podCreationTimestamp="2026-02-26 22:57:26 +0000 UTC" firstStartedPulling="2026-02-26 22:57:27.760798195 +0000 UTC m=+3732.840288736" lastFinishedPulling="2026-02-26 22:57:34.66461014 +0000 UTC m=+3739.744100691" observedRunningTime="2026-02-26 22:57:36.080629938 +0000 UTC m=+3741.160120509" watchObservedRunningTime="2026-02-26 22:57:36.084460963 +0000 UTC m=+3741.163951494" Feb 26 22:57:38 crc kubenswrapper[4910]: I0226 22:57:38.660043 4910 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-bjtlp/crc-debug-nkmjd"] Feb 26 22:57:38 crc kubenswrapper[4910]: I0226 22:57:38.662020 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bjtlp/crc-debug-nkmjd" Feb 26 22:57:38 crc kubenswrapper[4910]: I0226 22:57:38.805273 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d66b31b7-1044-4795-b74f-840971a4ebca-host\") pod \"crc-debug-nkmjd\" (UID: \"d66b31b7-1044-4795-b74f-840971a4ebca\") " pod="openshift-must-gather-bjtlp/crc-debug-nkmjd" Feb 26 22:57:38 crc kubenswrapper[4910]: I0226 22:57:38.805672 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k876\" (UniqueName: \"kubernetes.io/projected/d66b31b7-1044-4795-b74f-840971a4ebca-kube-api-access-7k876\") pod \"crc-debug-nkmjd\" (UID: \"d66b31b7-1044-4795-b74f-840971a4ebca\") " pod="openshift-must-gather-bjtlp/crc-debug-nkmjd" Feb 26 22:57:38 crc kubenswrapper[4910]: I0226 22:57:38.907696 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k876\" (UniqueName: \"kubernetes.io/projected/d66b31b7-1044-4795-b74f-840971a4ebca-kube-api-access-7k876\") pod \"crc-debug-nkmjd\" (UID: \"d66b31b7-1044-4795-b74f-840971a4ebca\") " pod="openshift-must-gather-bjtlp/crc-debug-nkmjd" Feb 26 22:57:38 crc kubenswrapper[4910]: I0226 22:57:38.907793 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d66b31b7-1044-4795-b74f-840971a4ebca-host\") pod \"crc-debug-nkmjd\" (UID: \"d66b31b7-1044-4795-b74f-840971a4ebca\") " pod="openshift-must-gather-bjtlp/crc-debug-nkmjd" Feb 26 22:57:38 crc kubenswrapper[4910]: I0226 22:57:38.907945 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/d66b31b7-1044-4795-b74f-840971a4ebca-host\") pod \"crc-debug-nkmjd\" (UID: \"d66b31b7-1044-4795-b74f-840971a4ebca\") " pod="openshift-must-gather-bjtlp/crc-debug-nkmjd" Feb 26 22:57:38 crc kubenswrapper[4910]: I0226 22:57:38.929898 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k876\" (UniqueName: \"kubernetes.io/projected/d66b31b7-1044-4795-b74f-840971a4ebca-kube-api-access-7k876\") pod \"crc-debug-nkmjd\" (UID: \"d66b31b7-1044-4795-b74f-840971a4ebca\") " pod="openshift-must-gather-bjtlp/crc-debug-nkmjd" Feb 26 22:57:38 crc kubenswrapper[4910]: I0226 22:57:38.994032 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bjtlp/crc-debug-nkmjd" Feb 26 22:57:39 crc kubenswrapper[4910]: W0226 22:57:39.039909 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd66b31b7_1044_4795_b74f_840971a4ebca.slice/crio-1ce63297743c2f603bfda91dfd75361219ba78a4626fdc1a4866c95d509c3ff6 WatchSource:0}: Error finding container 1ce63297743c2f603bfda91dfd75361219ba78a4626fdc1a4866c95d509c3ff6: Status 404 returned error can't find the container with id 1ce63297743c2f603bfda91dfd75361219ba78a4626fdc1a4866c95d509c3ff6 Feb 26 22:57:39 crc kubenswrapper[4910]: I0226 22:57:39.095802 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bjtlp/crc-debug-nkmjd" event={"ID":"d66b31b7-1044-4795-b74f-840971a4ebca","Type":"ContainerStarted","Data":"1ce63297743c2f603bfda91dfd75361219ba78a4626fdc1a4866c95d509c3ff6"} Feb 26 22:57:51 crc kubenswrapper[4910]: I0226 22:57:51.232669 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bjtlp/crc-debug-nkmjd" event={"ID":"d66b31b7-1044-4795-b74f-840971a4ebca","Type":"ContainerStarted","Data":"438285643c2b73f5069f72242324d5aa1501d24a52d98cd0e4ef6f09e2e09f7f"} Feb 26 22:57:51 crc kubenswrapper[4910]: I0226 
22:57:51.261722 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bjtlp/crc-debug-nkmjd" podStartSLOduration=1.5929883249999999 podStartE2EDuration="13.261705337s" podCreationTimestamp="2026-02-26 22:57:38 +0000 UTC" firstStartedPulling="2026-02-26 22:57:39.0437271 +0000 UTC m=+3744.123217641" lastFinishedPulling="2026-02-26 22:57:50.712444112 +0000 UTC m=+3755.791934653" observedRunningTime="2026-02-26 22:57:51.250531883 +0000 UTC m=+3756.330022444" watchObservedRunningTime="2026-02-26 22:57:51.261705337 +0000 UTC m=+3756.341195868" Feb 26 22:58:00 crc kubenswrapper[4910]: I0226 22:58:00.142682 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535778-nshc2"] Feb 26 22:58:00 crc kubenswrapper[4910]: I0226 22:58:00.145698 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535778-nshc2" Feb 26 22:58:00 crc kubenswrapper[4910]: I0226 22:58:00.148854 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 22:58:00 crc kubenswrapper[4910]: I0226 22:58:00.149644 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 22:58:00 crc kubenswrapper[4910]: I0226 22:58:00.149782 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 22:58:00 crc kubenswrapper[4910]: I0226 22:58:00.155985 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535778-nshc2"] Feb 26 22:58:00 crc kubenswrapper[4910]: I0226 22:58:00.248181 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clxqt\" (UniqueName: \"kubernetes.io/projected/133d6b21-57a2-41dd-a617-548380fb2754-kube-api-access-clxqt\") pod \"auto-csr-approver-29535778-nshc2\" (UID: 
\"133d6b21-57a2-41dd-a617-548380fb2754\") " pod="openshift-infra/auto-csr-approver-29535778-nshc2" Feb 26 22:58:00 crc kubenswrapper[4910]: I0226 22:58:00.350256 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clxqt\" (UniqueName: \"kubernetes.io/projected/133d6b21-57a2-41dd-a617-548380fb2754-kube-api-access-clxqt\") pod \"auto-csr-approver-29535778-nshc2\" (UID: \"133d6b21-57a2-41dd-a617-548380fb2754\") " pod="openshift-infra/auto-csr-approver-29535778-nshc2" Feb 26 22:58:00 crc kubenswrapper[4910]: I0226 22:58:00.379458 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clxqt\" (UniqueName: \"kubernetes.io/projected/133d6b21-57a2-41dd-a617-548380fb2754-kube-api-access-clxqt\") pod \"auto-csr-approver-29535778-nshc2\" (UID: \"133d6b21-57a2-41dd-a617-548380fb2754\") " pod="openshift-infra/auto-csr-approver-29535778-nshc2" Feb 26 22:58:00 crc kubenswrapper[4910]: I0226 22:58:00.464851 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535778-nshc2" Feb 26 22:58:01 crc kubenswrapper[4910]: I0226 22:58:01.040975 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535778-nshc2"] Feb 26 22:58:01 crc kubenswrapper[4910]: W0226 22:58:01.085795 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod133d6b21_57a2_41dd_a617_548380fb2754.slice/crio-b222f1fa3a04286f66e276dc8a17084519ed64e99359ad405433ca724fc67ba6 WatchSource:0}: Error finding container b222f1fa3a04286f66e276dc8a17084519ed64e99359ad405433ca724fc67ba6: Status 404 returned error can't find the container with id b222f1fa3a04286f66e276dc8a17084519ed64e99359ad405433ca724fc67ba6 Feb 26 22:58:01 crc kubenswrapper[4910]: I0226 22:58:01.332052 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535778-nshc2" event={"ID":"133d6b21-57a2-41dd-a617-548380fb2754","Type":"ContainerStarted","Data":"b222f1fa3a04286f66e276dc8a17084519ed64e99359ad405433ca724fc67ba6"} Feb 26 22:58:04 crc kubenswrapper[4910]: I0226 22:58:04.366706 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535778-nshc2" event={"ID":"133d6b21-57a2-41dd-a617-548380fb2754","Type":"ContainerStarted","Data":"e21a5fd2bc7b1b9abb56239d04bd4a61e207cdc6ff594f89cd3d50871c5871ff"} Feb 26 22:58:04 crc kubenswrapper[4910]: I0226 22:58:04.382923 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535778-nshc2" podStartSLOduration=3.205324352 podStartE2EDuration="4.382896432s" podCreationTimestamp="2026-02-26 22:58:00 +0000 UTC" firstStartedPulling="2026-02-26 22:58:01.094352815 +0000 UTC m=+3766.173843366" lastFinishedPulling="2026-02-26 22:58:02.271924905 +0000 UTC m=+3767.351415446" observedRunningTime="2026-02-26 22:58:04.382200983 +0000 UTC m=+3769.461691534" 
watchObservedRunningTime="2026-02-26 22:58:04.382896432 +0000 UTC m=+3769.462387013" Feb 26 22:58:05 crc kubenswrapper[4910]: I0226 22:58:05.375584 4910 generic.go:334] "Generic (PLEG): container finished" podID="133d6b21-57a2-41dd-a617-548380fb2754" containerID="e21a5fd2bc7b1b9abb56239d04bd4a61e207cdc6ff594f89cd3d50871c5871ff" exitCode=0 Feb 26 22:58:05 crc kubenswrapper[4910]: I0226 22:58:05.375663 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535778-nshc2" event={"ID":"133d6b21-57a2-41dd-a617-548380fb2754","Type":"ContainerDied","Data":"e21a5fd2bc7b1b9abb56239d04bd4a61e207cdc6ff594f89cd3d50871c5871ff"} Feb 26 22:58:06 crc kubenswrapper[4910]: I0226 22:58:06.783405 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535778-nshc2" Feb 26 22:58:06 crc kubenswrapper[4910]: I0226 22:58:06.815073 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clxqt\" (UniqueName: \"kubernetes.io/projected/133d6b21-57a2-41dd-a617-548380fb2754-kube-api-access-clxqt\") pod \"133d6b21-57a2-41dd-a617-548380fb2754\" (UID: \"133d6b21-57a2-41dd-a617-548380fb2754\") " Feb 26 22:58:06 crc kubenswrapper[4910]: I0226 22:58:06.820654 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/133d6b21-57a2-41dd-a617-548380fb2754-kube-api-access-clxqt" (OuterVolumeSpecName: "kube-api-access-clxqt") pod "133d6b21-57a2-41dd-a617-548380fb2754" (UID: "133d6b21-57a2-41dd-a617-548380fb2754"). InnerVolumeSpecName "kube-api-access-clxqt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:58:06 crc kubenswrapper[4910]: I0226 22:58:06.919199 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clxqt\" (UniqueName: \"kubernetes.io/projected/133d6b21-57a2-41dd-a617-548380fb2754-kube-api-access-clxqt\") on node \"crc\" DevicePath \"\"" Feb 26 22:58:07 crc kubenswrapper[4910]: I0226 22:58:07.393716 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535778-nshc2" event={"ID":"133d6b21-57a2-41dd-a617-548380fb2754","Type":"ContainerDied","Data":"b222f1fa3a04286f66e276dc8a17084519ed64e99359ad405433ca724fc67ba6"} Feb 26 22:58:07 crc kubenswrapper[4910]: I0226 22:58:07.393961 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b222f1fa3a04286f66e276dc8a17084519ed64e99359ad405433ca724fc67ba6" Feb 26 22:58:07 crc kubenswrapper[4910]: I0226 22:58:07.393774 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535778-nshc2" Feb 26 22:58:07 crc kubenswrapper[4910]: I0226 22:58:07.462432 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535772-lsgj5"] Feb 26 22:58:07 crc kubenswrapper[4910]: I0226 22:58:07.471764 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535772-lsgj5"] Feb 26 22:58:07 crc kubenswrapper[4910]: I0226 22:58:07.913082 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="426dfd59-d94d-4347-8117-569760596419" path="/var/lib/kubelet/pods/426dfd59-d94d-4347-8117-569760596419/volumes" Feb 26 22:58:40 crc kubenswrapper[4910]: I0226 22:58:40.699238 4910 scope.go:117] "RemoveContainer" containerID="4065d65d1868bd02a96cda3dc7bbc93ada1f79405c769ed38fbadd220db43b5f" Feb 26 22:58:42 crc kubenswrapper[4910]: I0226 22:58:42.736802 4910 generic.go:334] "Generic (PLEG): container finished" 
podID="d66b31b7-1044-4795-b74f-840971a4ebca" containerID="438285643c2b73f5069f72242324d5aa1501d24a52d98cd0e4ef6f09e2e09f7f" exitCode=0 Feb 26 22:58:42 crc kubenswrapper[4910]: I0226 22:58:42.736947 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bjtlp/crc-debug-nkmjd" event={"ID":"d66b31b7-1044-4795-b74f-840971a4ebca","Type":"ContainerDied","Data":"438285643c2b73f5069f72242324d5aa1501d24a52d98cd0e4ef6f09e2e09f7f"} Feb 26 22:58:43 crc kubenswrapper[4910]: I0226 22:58:43.868467 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bjtlp/crc-debug-nkmjd" Feb 26 22:58:43 crc kubenswrapper[4910]: I0226 22:58:43.900193 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bjtlp/crc-debug-nkmjd"] Feb 26 22:58:43 crc kubenswrapper[4910]: I0226 22:58:43.918110 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bjtlp/crc-debug-nkmjd"] Feb 26 22:58:44 crc kubenswrapper[4910]: I0226 22:58:44.009078 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k876\" (UniqueName: \"kubernetes.io/projected/d66b31b7-1044-4795-b74f-840971a4ebca-kube-api-access-7k876\") pod \"d66b31b7-1044-4795-b74f-840971a4ebca\" (UID: \"d66b31b7-1044-4795-b74f-840971a4ebca\") " Feb 26 22:58:44 crc kubenswrapper[4910]: I0226 22:58:44.009232 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d66b31b7-1044-4795-b74f-840971a4ebca-host\") pod \"d66b31b7-1044-4795-b74f-840971a4ebca\" (UID: \"d66b31b7-1044-4795-b74f-840971a4ebca\") " Feb 26 22:58:44 crc kubenswrapper[4910]: I0226 22:58:44.009349 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d66b31b7-1044-4795-b74f-840971a4ebca-host" (OuterVolumeSpecName: "host") pod "d66b31b7-1044-4795-b74f-840971a4ebca" (UID: 
"d66b31b7-1044-4795-b74f-840971a4ebca"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 22:58:44 crc kubenswrapper[4910]: I0226 22:58:44.009810 4910 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d66b31b7-1044-4795-b74f-840971a4ebca-host\") on node \"crc\" DevicePath \"\"" Feb 26 22:58:44 crc kubenswrapper[4910]: I0226 22:58:44.016421 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d66b31b7-1044-4795-b74f-840971a4ebca-kube-api-access-7k876" (OuterVolumeSpecName: "kube-api-access-7k876") pod "d66b31b7-1044-4795-b74f-840971a4ebca" (UID: "d66b31b7-1044-4795-b74f-840971a4ebca"). InnerVolumeSpecName "kube-api-access-7k876". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:58:44 crc kubenswrapper[4910]: I0226 22:58:44.111989 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k876\" (UniqueName: \"kubernetes.io/projected/d66b31b7-1044-4795-b74f-840971a4ebca-kube-api-access-7k876\") on node \"crc\" DevicePath \"\"" Feb 26 22:58:44 crc kubenswrapper[4910]: I0226 22:58:44.761443 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ce63297743c2f603bfda91dfd75361219ba78a4626fdc1a4866c95d509c3ff6" Feb 26 22:58:44 crc kubenswrapper[4910]: I0226 22:58:44.761503 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bjtlp/crc-debug-nkmjd" Feb 26 22:58:45 crc kubenswrapper[4910]: I0226 22:58:45.174305 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bjtlp/crc-debug-kmwtf"] Feb 26 22:58:45 crc kubenswrapper[4910]: E0226 22:58:45.175905 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d66b31b7-1044-4795-b74f-840971a4ebca" containerName="container-00" Feb 26 22:58:45 crc kubenswrapper[4910]: I0226 22:58:45.175952 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="d66b31b7-1044-4795-b74f-840971a4ebca" containerName="container-00" Feb 26 22:58:45 crc kubenswrapper[4910]: E0226 22:58:45.175991 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="133d6b21-57a2-41dd-a617-548380fb2754" containerName="oc" Feb 26 22:58:45 crc kubenswrapper[4910]: I0226 22:58:45.176000 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="133d6b21-57a2-41dd-a617-548380fb2754" containerName="oc" Feb 26 22:58:45 crc kubenswrapper[4910]: I0226 22:58:45.176276 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="d66b31b7-1044-4795-b74f-840971a4ebca" containerName="container-00" Feb 26 22:58:45 crc kubenswrapper[4910]: I0226 22:58:45.176331 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="133d6b21-57a2-41dd-a617-548380fb2754" containerName="oc" Feb 26 22:58:45 crc kubenswrapper[4910]: I0226 22:58:45.177280 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bjtlp/crc-debug-kmwtf" Feb 26 22:58:45 crc kubenswrapper[4910]: I0226 22:58:45.235851 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kz2v\" (UniqueName: \"kubernetes.io/projected/c522e9a6-63c6-4272-b64e-78ac1ed985b7-kube-api-access-4kz2v\") pod \"crc-debug-kmwtf\" (UID: \"c522e9a6-63c6-4272-b64e-78ac1ed985b7\") " pod="openshift-must-gather-bjtlp/crc-debug-kmwtf" Feb 26 22:58:45 crc kubenswrapper[4910]: I0226 22:58:45.235924 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c522e9a6-63c6-4272-b64e-78ac1ed985b7-host\") pod \"crc-debug-kmwtf\" (UID: \"c522e9a6-63c6-4272-b64e-78ac1ed985b7\") " pod="openshift-must-gather-bjtlp/crc-debug-kmwtf" Feb 26 22:58:45 crc kubenswrapper[4910]: I0226 22:58:45.338386 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kz2v\" (UniqueName: \"kubernetes.io/projected/c522e9a6-63c6-4272-b64e-78ac1ed985b7-kube-api-access-4kz2v\") pod \"crc-debug-kmwtf\" (UID: \"c522e9a6-63c6-4272-b64e-78ac1ed985b7\") " pod="openshift-must-gather-bjtlp/crc-debug-kmwtf" Feb 26 22:58:45 crc kubenswrapper[4910]: I0226 22:58:45.338685 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c522e9a6-63c6-4272-b64e-78ac1ed985b7-host\") pod \"crc-debug-kmwtf\" (UID: \"c522e9a6-63c6-4272-b64e-78ac1ed985b7\") " pod="openshift-must-gather-bjtlp/crc-debug-kmwtf" Feb 26 22:58:45 crc kubenswrapper[4910]: I0226 22:58:45.338774 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c522e9a6-63c6-4272-b64e-78ac1ed985b7-host\") pod \"crc-debug-kmwtf\" (UID: \"c522e9a6-63c6-4272-b64e-78ac1ed985b7\") " pod="openshift-must-gather-bjtlp/crc-debug-kmwtf" Feb 26 22:58:45 crc 
kubenswrapper[4910]: I0226 22:58:45.358921 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kz2v\" (UniqueName: \"kubernetes.io/projected/c522e9a6-63c6-4272-b64e-78ac1ed985b7-kube-api-access-4kz2v\") pod \"crc-debug-kmwtf\" (UID: \"c522e9a6-63c6-4272-b64e-78ac1ed985b7\") " pod="openshift-must-gather-bjtlp/crc-debug-kmwtf" Feb 26 22:58:45 crc kubenswrapper[4910]: I0226 22:58:45.498805 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bjtlp/crc-debug-kmwtf" Feb 26 22:58:45 crc kubenswrapper[4910]: I0226 22:58:45.773595 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bjtlp/crc-debug-kmwtf" event={"ID":"c522e9a6-63c6-4272-b64e-78ac1ed985b7","Type":"ContainerStarted","Data":"6ab042a3a95e0ec48ac5c9e62f238434043a6950d56b61f1a0a9f7e76559a632"} Feb 26 22:58:45 crc kubenswrapper[4910]: I0226 22:58:45.789212 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bjtlp/crc-debug-kmwtf" podStartSLOduration=0.789188549 podStartE2EDuration="789.188549ms" podCreationTimestamp="2026-02-26 22:58:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 22:58:45.787371199 +0000 UTC m=+3810.866861760" watchObservedRunningTime="2026-02-26 22:58:45.789188549 +0000 UTC m=+3810.868679100" Feb 26 22:58:45 crc kubenswrapper[4910]: I0226 22:58:45.924257 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d66b31b7-1044-4795-b74f-840971a4ebca" path="/var/lib/kubelet/pods/d66b31b7-1044-4795-b74f-840971a4ebca/volumes" Feb 26 22:58:46 crc kubenswrapper[4910]: I0226 22:58:46.800062 4910 generic.go:334] "Generic (PLEG): container finished" podID="c522e9a6-63c6-4272-b64e-78ac1ed985b7" containerID="7733aeff5528337c757d2863bd995e45bfe286abba754700f6627e8fd0c8cb9e" exitCode=0 Feb 26 22:58:46 crc kubenswrapper[4910]: 
I0226 22:58:46.800117 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bjtlp/crc-debug-kmwtf" event={"ID":"c522e9a6-63c6-4272-b64e-78ac1ed985b7","Type":"ContainerDied","Data":"7733aeff5528337c757d2863bd995e45bfe286abba754700f6627e8fd0c8cb9e"} Feb 26 22:58:47 crc kubenswrapper[4910]: I0226 22:58:47.933925 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bjtlp/crc-debug-kmwtf" Feb 26 22:58:47 crc kubenswrapper[4910]: I0226 22:58:47.980694 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bjtlp/crc-debug-kmwtf"] Feb 26 22:58:47 crc kubenswrapper[4910]: I0226 22:58:47.990324 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c522e9a6-63c6-4272-b64e-78ac1ed985b7-host\") pod \"c522e9a6-63c6-4272-b64e-78ac1ed985b7\" (UID: \"c522e9a6-63c6-4272-b64e-78ac1ed985b7\") " Feb 26 22:58:47 crc kubenswrapper[4910]: I0226 22:58:47.990434 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c522e9a6-63c6-4272-b64e-78ac1ed985b7-host" (OuterVolumeSpecName: "host") pod "c522e9a6-63c6-4272-b64e-78ac1ed985b7" (UID: "c522e9a6-63c6-4272-b64e-78ac1ed985b7"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 22:58:47 crc kubenswrapper[4910]: I0226 22:58:47.990592 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kz2v\" (UniqueName: \"kubernetes.io/projected/c522e9a6-63c6-4272-b64e-78ac1ed985b7-kube-api-access-4kz2v\") pod \"c522e9a6-63c6-4272-b64e-78ac1ed985b7\" (UID: \"c522e9a6-63c6-4272-b64e-78ac1ed985b7\") " Feb 26 22:58:47 crc kubenswrapper[4910]: I0226 22:58:47.991318 4910 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c522e9a6-63c6-4272-b64e-78ac1ed985b7-host\") on node \"crc\" DevicePath \"\"" Feb 26 22:58:47 crc kubenswrapper[4910]: I0226 22:58:47.991653 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bjtlp/crc-debug-kmwtf"] Feb 26 22:58:47 crc kubenswrapper[4910]: I0226 22:58:47.997450 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c522e9a6-63c6-4272-b64e-78ac1ed985b7-kube-api-access-4kz2v" (OuterVolumeSpecName: "kube-api-access-4kz2v") pod "c522e9a6-63c6-4272-b64e-78ac1ed985b7" (UID: "c522e9a6-63c6-4272-b64e-78ac1ed985b7"). InnerVolumeSpecName "kube-api-access-4kz2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:58:48 crc kubenswrapper[4910]: I0226 22:58:48.092734 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kz2v\" (UniqueName: \"kubernetes.io/projected/c522e9a6-63c6-4272-b64e-78ac1ed985b7-kube-api-access-4kz2v\") on node \"crc\" DevicePath \"\"" Feb 26 22:58:48 crc kubenswrapper[4910]: I0226 22:58:48.827614 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ab042a3a95e0ec48ac5c9e62f238434043a6950d56b61f1a0a9f7e76559a632" Feb 26 22:58:48 crc kubenswrapper[4910]: I0226 22:58:48.827683 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bjtlp/crc-debug-kmwtf" Feb 26 22:58:49 crc kubenswrapper[4910]: I0226 22:58:49.183395 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bjtlp/crc-debug-vw9pp"] Feb 26 22:58:49 crc kubenswrapper[4910]: E0226 22:58:49.183892 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c522e9a6-63c6-4272-b64e-78ac1ed985b7" containerName="container-00" Feb 26 22:58:49 crc kubenswrapper[4910]: I0226 22:58:49.183907 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="c522e9a6-63c6-4272-b64e-78ac1ed985b7" containerName="container-00" Feb 26 22:58:49 crc kubenswrapper[4910]: I0226 22:58:49.184500 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="c522e9a6-63c6-4272-b64e-78ac1ed985b7" containerName="container-00" Feb 26 22:58:49 crc kubenswrapper[4910]: I0226 22:58:49.185424 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bjtlp/crc-debug-vw9pp" Feb 26 22:58:49 crc kubenswrapper[4910]: I0226 22:58:49.320201 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b261f943-e1c6-4e89-ab75-0d7cfc984a84-host\") pod \"crc-debug-vw9pp\" (UID: \"b261f943-e1c6-4e89-ab75-0d7cfc984a84\") " pod="openshift-must-gather-bjtlp/crc-debug-vw9pp" Feb 26 22:58:49 crc kubenswrapper[4910]: I0226 22:58:49.320384 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r686c\" (UniqueName: \"kubernetes.io/projected/b261f943-e1c6-4e89-ab75-0d7cfc984a84-kube-api-access-r686c\") pod \"crc-debug-vw9pp\" (UID: \"b261f943-e1c6-4e89-ab75-0d7cfc984a84\") " pod="openshift-must-gather-bjtlp/crc-debug-vw9pp" Feb 26 22:58:49 crc kubenswrapper[4910]: I0226 22:58:49.422562 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/b261f943-e1c6-4e89-ab75-0d7cfc984a84-host\") pod \"crc-debug-vw9pp\" (UID: \"b261f943-e1c6-4e89-ab75-0d7cfc984a84\") " pod="openshift-must-gather-bjtlp/crc-debug-vw9pp" Feb 26 22:58:49 crc kubenswrapper[4910]: I0226 22:58:49.422638 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r686c\" (UniqueName: \"kubernetes.io/projected/b261f943-e1c6-4e89-ab75-0d7cfc984a84-kube-api-access-r686c\") pod \"crc-debug-vw9pp\" (UID: \"b261f943-e1c6-4e89-ab75-0d7cfc984a84\") " pod="openshift-must-gather-bjtlp/crc-debug-vw9pp" Feb 26 22:58:49 crc kubenswrapper[4910]: I0226 22:58:49.422675 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b261f943-e1c6-4e89-ab75-0d7cfc984a84-host\") pod \"crc-debug-vw9pp\" (UID: \"b261f943-e1c6-4e89-ab75-0d7cfc984a84\") " pod="openshift-must-gather-bjtlp/crc-debug-vw9pp" Feb 26 22:58:49 crc kubenswrapper[4910]: I0226 22:58:49.439293 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r686c\" (UniqueName: \"kubernetes.io/projected/b261f943-e1c6-4e89-ab75-0d7cfc984a84-kube-api-access-r686c\") pod \"crc-debug-vw9pp\" (UID: \"b261f943-e1c6-4e89-ab75-0d7cfc984a84\") " pod="openshift-must-gather-bjtlp/crc-debug-vw9pp" Feb 26 22:58:49 crc kubenswrapper[4910]: I0226 22:58:49.517213 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bjtlp/crc-debug-vw9pp" Feb 26 22:58:49 crc kubenswrapper[4910]: W0226 22:58:49.546298 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb261f943_e1c6_4e89_ab75_0d7cfc984a84.slice/crio-a1615ac86e4f935c3ccae8a048a8e7efcd9faea645f0252ea31325f7828030f6 WatchSource:0}: Error finding container a1615ac86e4f935c3ccae8a048a8e7efcd9faea645f0252ea31325f7828030f6: Status 404 returned error can't find the container with id a1615ac86e4f935c3ccae8a048a8e7efcd9faea645f0252ea31325f7828030f6 Feb 26 22:58:49 crc kubenswrapper[4910]: I0226 22:58:49.838135 4910 generic.go:334] "Generic (PLEG): container finished" podID="b261f943-e1c6-4e89-ab75-0d7cfc984a84" containerID="d7645a2279bc84d117c3e88692e3c8342aa24b054c1eddef49987f15a09a076d" exitCode=0 Feb 26 22:58:49 crc kubenswrapper[4910]: I0226 22:58:49.838222 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bjtlp/crc-debug-vw9pp" event={"ID":"b261f943-e1c6-4e89-ab75-0d7cfc984a84","Type":"ContainerDied","Data":"d7645a2279bc84d117c3e88692e3c8342aa24b054c1eddef49987f15a09a076d"} Feb 26 22:58:49 crc kubenswrapper[4910]: I0226 22:58:49.838281 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bjtlp/crc-debug-vw9pp" event={"ID":"b261f943-e1c6-4e89-ab75-0d7cfc984a84","Type":"ContainerStarted","Data":"a1615ac86e4f935c3ccae8a048a8e7efcd9faea645f0252ea31325f7828030f6"} Feb 26 22:58:49 crc kubenswrapper[4910]: I0226 22:58:49.898476 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bjtlp/crc-debug-vw9pp"] Feb 26 22:58:49 crc kubenswrapper[4910]: I0226 22:58:49.920566 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c522e9a6-63c6-4272-b64e-78ac1ed985b7" path="/var/lib/kubelet/pods/c522e9a6-63c6-4272-b64e-78ac1ed985b7/volumes" Feb 26 22:58:49 crc kubenswrapper[4910]: I0226 22:58:49.921662 4910 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bjtlp/crc-debug-vw9pp"] Feb 26 22:58:50 crc kubenswrapper[4910]: I0226 22:58:50.960099 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bjtlp/crc-debug-vw9pp" Feb 26 22:58:51 crc kubenswrapper[4910]: I0226 22:58:51.095811 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b261f943-e1c6-4e89-ab75-0d7cfc984a84-host\") pod \"b261f943-e1c6-4e89-ab75-0d7cfc984a84\" (UID: \"b261f943-e1c6-4e89-ab75-0d7cfc984a84\") " Feb 26 22:58:51 crc kubenswrapper[4910]: I0226 22:58:51.095962 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b261f943-e1c6-4e89-ab75-0d7cfc984a84-host" (OuterVolumeSpecName: "host") pod "b261f943-e1c6-4e89-ab75-0d7cfc984a84" (UID: "b261f943-e1c6-4e89-ab75-0d7cfc984a84"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 22:58:51 crc kubenswrapper[4910]: I0226 22:58:51.096900 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r686c\" (UniqueName: \"kubernetes.io/projected/b261f943-e1c6-4e89-ab75-0d7cfc984a84-kube-api-access-r686c\") pod \"b261f943-e1c6-4e89-ab75-0d7cfc984a84\" (UID: \"b261f943-e1c6-4e89-ab75-0d7cfc984a84\") " Feb 26 22:58:51 crc kubenswrapper[4910]: I0226 22:58:51.097921 4910 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b261f943-e1c6-4e89-ab75-0d7cfc984a84-host\") on node \"crc\" DevicePath \"\"" Feb 26 22:58:51 crc kubenswrapper[4910]: I0226 22:58:51.104560 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b261f943-e1c6-4e89-ab75-0d7cfc984a84-kube-api-access-r686c" (OuterVolumeSpecName: "kube-api-access-r686c") pod "b261f943-e1c6-4e89-ab75-0d7cfc984a84" (UID: 
"b261f943-e1c6-4e89-ab75-0d7cfc984a84"). InnerVolumeSpecName "kube-api-access-r686c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 22:58:51 crc kubenswrapper[4910]: I0226 22:58:51.202197 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r686c\" (UniqueName: \"kubernetes.io/projected/b261f943-e1c6-4e89-ab75-0d7cfc984a84-kube-api-access-r686c\") on node \"crc\" DevicePath \"\"" Feb 26 22:58:51 crc kubenswrapper[4910]: I0226 22:58:51.860580 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1615ac86e4f935c3ccae8a048a8e7efcd9faea645f0252ea31325f7828030f6" Feb 26 22:58:51 crc kubenswrapper[4910]: I0226 22:58:51.860648 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bjtlp/crc-debug-vw9pp" Feb 26 22:58:51 crc kubenswrapper[4910]: I0226 22:58:51.911432 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b261f943-e1c6-4e89-ab75-0d7cfc984a84" path="/var/lib/kubelet/pods/b261f943-e1c6-4e89-ab75-0d7cfc984a84/volumes" Feb 26 22:59:17 crc kubenswrapper[4910]: I0226 22:59:17.050312 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_60fb0251-1bd0-4e06-a368-5aceb0afaa87/init-config-reloader/0.log" Feb 26 22:59:17 crc kubenswrapper[4910]: I0226 22:59:17.159092 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_60fb0251-1bd0-4e06-a368-5aceb0afaa87/init-config-reloader/0.log" Feb 26 22:59:17 crc kubenswrapper[4910]: I0226 22:59:17.164010 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_60fb0251-1bd0-4e06-a368-5aceb0afaa87/alertmanager/0.log" Feb 26 22:59:17 crc kubenswrapper[4910]: I0226 22:59:17.250465 4910 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_alertmanager-metric-storage-0_60fb0251-1bd0-4e06-a368-5aceb0afaa87/config-reloader/0.log" Feb 26 22:59:17 crc kubenswrapper[4910]: I0226 22:59:17.364388 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-68496f8578-p9hfm_265c7bd0-7cd6-46dc-a186-b3039ae95224/barbican-api-log/0.log" Feb 26 22:59:17 crc kubenswrapper[4910]: I0226 22:59:17.365935 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-68496f8578-p9hfm_265c7bd0-7cd6-46dc-a186-b3039ae95224/barbican-api/0.log" Feb 26 22:59:17 crc kubenswrapper[4910]: I0226 22:59:17.505648 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7696b9558b-vr9cd_f7238de5-2d97-4467-b06f-937763173cac/barbican-keystone-listener/0.log" Feb 26 22:59:17 crc kubenswrapper[4910]: I0226 22:59:17.818683 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7696b9558b-vr9cd_f7238de5-2d97-4467-b06f-937763173cac/barbican-keystone-listener-log/0.log" Feb 26 22:59:17 crc kubenswrapper[4910]: I0226 22:59:17.824621 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-65889bccf5-h979r_752b8add-84e6-43cd-9907-6e190e335fe7/barbican-worker/0.log" Feb 26 22:59:17 crc kubenswrapper[4910]: I0226 22:59:17.856803 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-65889bccf5-h979r_752b8add-84e6-43cd-9907-6e190e335fe7/barbican-worker-log/0.log" Feb 26 22:59:18 crc kubenswrapper[4910]: I0226 22:59:18.097708 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-vdd8m_dbad4d26-7d58-4969-a25f-6b67dc18b9e9/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 22:59:18 crc kubenswrapper[4910]: I0226 22:59:18.146500 4910 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_62dbf00f-5cf4-4400-8eb3-f861fadda173/ceilometer-central-agent/0.log" Feb 26 22:59:18 crc kubenswrapper[4910]: I0226 22:59:18.264303 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_62dbf00f-5cf4-4400-8eb3-f861fadda173/ceilometer-notification-agent/0.log" Feb 26 22:59:18 crc kubenswrapper[4910]: I0226 22:59:18.285959 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_62dbf00f-5cf4-4400-8eb3-f861fadda173/proxy-httpd/0.log" Feb 26 22:59:18 crc kubenswrapper[4910]: I0226 22:59:18.332584 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_62dbf00f-5cf4-4400-8eb3-f861fadda173/sg-core/0.log" Feb 26 22:59:18 crc kubenswrapper[4910]: I0226 22:59:18.460996 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3f15d8ce-24ee-4d25-a0d8-b9c659220644/cinder-api/0.log" Feb 26 22:59:18 crc kubenswrapper[4910]: I0226 22:59:18.507036 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3f15d8ce-24ee-4d25-a0d8-b9c659220644/cinder-api-log/0.log" Feb 26 22:59:18 crc kubenswrapper[4910]: I0226 22:59:18.628249 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_9d19a84d-6bcc-4eac-b319-cfcad44d541b/cinder-scheduler/0.log" Feb 26 22:59:18 crc kubenswrapper[4910]: I0226 22:59:18.661776 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_9d19a84d-6bcc-4eac-b319-cfcad44d541b/probe/0.log" Feb 26 22:59:18 crc kubenswrapper[4910]: I0226 22:59:18.858295 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_91580ce8-a4ad-44bd-99b3-0d7b077d0401/cloudkitty-api-log/0.log" Feb 26 22:59:18 crc kubenswrapper[4910]: I0226 22:59:18.860457 4910 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cloudkitty-api-0_91580ce8-a4ad-44bd-99b3-0d7b077d0401/cloudkitty-api/0.log" Feb 26 22:59:19 crc kubenswrapper[4910]: I0226 22:59:19.036511 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-compactor-0_b6be9fb7-b7f0-4dc9-9470-b9675918d1d1/loki-compactor/0.log" Feb 26 22:59:19 crc kubenswrapper[4910]: I0226 22:59:19.145115 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-distributor-585d9bcbc-zps74_993d51de-20a2-4cee-856c-f0cbb1b0307d/loki-distributor/0.log" Feb 26 22:59:19 crc kubenswrapper[4910]: I0226 22:59:19.206406 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-ljjvh_cb63b582-88ea-454f-96bc-c676e35dd7f7/gateway/0.log" Feb 26 22:59:19 crc kubenswrapper[4910]: I0226 22:59:19.333742 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-mpzzf_79645824-b55e-43e0-acc2-d0b64e9c7326/gateway/0.log" Feb 26 22:59:19 crc kubenswrapper[4910]: I0226 22:59:19.613357 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-index-gateway-0_a65c3337-4a1d-4ae8-abf6-862e34f280cb/loki-index-gateway/0.log" Feb 26 22:59:19 crc kubenswrapper[4910]: I0226 22:59:19.630797 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-ingester-0_3e40b05d-8071-4f6b-b2ab-160931200e8a/loki-ingester/0.log" Feb 26 22:59:19 crc kubenswrapper[4910]: I0226 22:59:19.899515 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-query-frontend-67bb4dfcd8-5q6jc_50fe0fae-cfef-4a5c-9b5c-0c09065c72ed/loki-query-frontend/0.log" Feb 26 22:59:20 crc kubenswrapper[4910]: I0226 22:59:20.231415 4910 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cloudkitty-lokistack-querier-58c84b5844-5lzsg_961c5cc4-3d52-49fe-be85-9bcdf6e6c4e7/loki-querier/0.log" Feb 26 22:59:20 crc kubenswrapper[4910]: I0226 22:59:20.286117 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-8q9vx_f521e05c-2c07-434a-8e61-40ba33038794/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 22:59:20 crc kubenswrapper[4910]: I0226 22:59:20.625896 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-4hr8q_b6563cac-bed3-4ce6-a7d3-d6ed8e53a195/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 22:59:20 crc kubenswrapper[4910]: I0226 22:59:20.807419 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c4b758ff5-hlnkc_401df5b4-c175-4e2d-8174-81ece52943ba/init/0.log" Feb 26 22:59:21 crc kubenswrapper[4910]: I0226 22:59:21.144925 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c4b758ff5-hlnkc_401df5b4-c175-4e2d-8174-81ece52943ba/init/0.log" Feb 26 22:59:21 crc kubenswrapper[4910]: I0226 22:59:21.217414 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c4b758ff5-hlnkc_401df5b4-c175-4e2d-8174-81ece52943ba/dnsmasq-dns/0.log" Feb 26 22:59:21 crc kubenswrapper[4910]: I0226 22:59:21.351708 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-m5bpt_90647bce-161d-4a56-86bc-662a69916664/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 22:59:21 crc kubenswrapper[4910]: I0226 22:59:21.439507 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_cd55a983-41e8-4575-a9a4-8a57e93b8816/glance-httpd/0.log" Feb 26 22:59:21 crc kubenswrapper[4910]: I0226 22:59:21.439764 4910 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_cd55a983-41e8-4575-a9a4-8a57e93b8816/glance-log/0.log" Feb 26 22:59:21 crc kubenswrapper[4910]: I0226 22:59:21.666449 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_a453142a-b867-453b-9ea3-6ad60d61e47c/glance-log/0.log" Feb 26 22:59:21 crc kubenswrapper[4910]: I0226 22:59:21.684919 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_a453142a-b867-453b-9ea3-6ad60d61e47c/glance-httpd/0.log" Feb 26 22:59:21 crc kubenswrapper[4910]: I0226 22:59:21.831369 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-dvrjq_76c54bfb-1dab-4654-b0d2-dca09154f2b0/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 22:59:21 crc kubenswrapper[4910]: I0226 22:59:21.911041 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-q7fhp_f308d423-5b01-4866-9ea0-b48746b243b5/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 22:59:22 crc kubenswrapper[4910]: I0226 22:59:22.142273 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_a97f20f6-39c0-49bb-9f07-559e1d2b5c7f/kube-state-metrics/0.log" Feb 26 22:59:22 crc kubenswrapper[4910]: I0226 22:59:22.445151 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-79f8b87c99-7mvnv_c9e24dbd-bccd-4c17-b640-b183d1f296e7/keystone-api/0.log" Feb 26 22:59:22 crc kubenswrapper[4910]: I0226 22:59:22.486095 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-ckp4n_7408a6c2-235e-4149-9219-c5f71e983e62/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 22:59:22 crc kubenswrapper[4910]: I0226 22:59:22.852731 4910 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-74c6855597-x8m7j_b3a1ade6-feae-41ab-97e4-120b9d55cfdf/neutron-httpd/0.log" Feb 26 22:59:22 crc kubenswrapper[4910]: I0226 22:59:22.895577 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-74c6855597-x8m7j_b3a1ade6-feae-41ab-97e4-120b9d55cfdf/neutron-api/0.log" Feb 26 22:59:23 crc kubenswrapper[4910]: I0226 22:59:23.114095 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-6n89l_af2ace9c-60af-47a0-992d-a7961e07f840/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 22:59:23 crc kubenswrapper[4910]: I0226 22:59:23.627394 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b0b80f69-d40e-460e-b205-ce0125d3b89b/nova-api-log/0.log" Feb 26 22:59:23 crc kubenswrapper[4910]: I0226 22:59:23.898365 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_ab1e5b7b-4ac0-4e1d-9a7a-d8312a973e0b/nova-cell0-conductor-conductor/0.log" Feb 26 22:59:23 crc kubenswrapper[4910]: I0226 22:59:23.948408 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b0b80f69-d40e-460e-b205-ce0125d3b89b/nova-api-api/0.log" Feb 26 22:59:24 crc kubenswrapper[4910]: I0226 22:59:24.289270 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_05c4c120-7406-46d0-a5e8-157f624d4d13/nova-cell1-conductor-conductor/0.log" Feb 26 22:59:24 crc kubenswrapper[4910]: I0226 22:59:24.316700 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_26e818e1-e33c-4bfe-b133-36ca523fd741/nova-cell1-novncproxy-novncproxy/0.log" Feb 26 22:59:24 crc kubenswrapper[4910]: I0226 22:59:24.668264 4910 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-xzwzp_42c909bc-0493-4de3-882f-c6ebf8967f27/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 22:59:24 crc kubenswrapper[4910]: I0226 22:59:24.765169 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3dc3d3d0-abef-4259-9c7b-42726f571be3/nova-metadata-log/0.log" Feb 26 22:59:25 crc kubenswrapper[4910]: I0226 22:59:25.162215 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_b08468f8-6dfe-4514-9737-87db33cb927c/nova-scheduler-scheduler/0.log" Feb 26 22:59:25 crc kubenswrapper[4910]: I0226 22:59:25.342233 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_25f6d388-f925-4c92-9298-a454dd536aa6/mysql-bootstrap/0.log" Feb 26 22:59:25 crc kubenswrapper[4910]: I0226 22:59:25.474530 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_25f6d388-f925-4c92-9298-a454dd536aa6/mysql-bootstrap/0.log" Feb 26 22:59:25 crc kubenswrapper[4910]: I0226 22:59:25.598449 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_25f6d388-f925-4c92-9298-a454dd536aa6/galera/0.log" Feb 26 22:59:25 crc kubenswrapper[4910]: I0226 22:59:25.866858 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f3994e44-ac9f-4f93-97cf-9ad02cdc61e6/mysql-bootstrap/0.log" Feb 26 22:59:26 crc kubenswrapper[4910]: I0226 22:59:26.083852 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f3994e44-ac9f-4f93-97cf-9ad02cdc61e6/galera/0.log" Feb 26 22:59:26 crc kubenswrapper[4910]: I0226 22:59:26.086274 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f3994e44-ac9f-4f93-97cf-9ad02cdc61e6/mysql-bootstrap/0.log" Feb 26 22:59:26 crc kubenswrapper[4910]: I0226 22:59:26.296943 4910 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_nova-metadata-0_3dc3d3d0-abef-4259-9c7b-42726f571be3/nova-metadata-metadata/0.log" Feb 26 22:59:26 crc kubenswrapper[4910]: I0226 22:59:26.325106 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_71b02cae-fc90-4a97-967d-9a539d5ab671/openstackclient/0.log" Feb 26 22:59:26 crc kubenswrapper[4910]: I0226 22:59:26.578437 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ddsmc_334061f5-f54a-41b2-8c49-66695cb3639a/ovn-controller/0.log" Feb 26 22:59:26 crc kubenswrapper[4910]: I0226 22:59:26.761483 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-hr48l_1fb71366-0580-4adf-91c1-d0177014808b/openstack-network-exporter/0.log" Feb 26 22:59:26 crc kubenswrapper[4910]: I0226 22:59:26.925031 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6tz7l_3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed/ovsdb-server-init/0.log" Feb 26 22:59:27 crc kubenswrapper[4910]: I0226 22:59:27.306453 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6tz7l_3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed/ovsdb-server-init/0.log" Feb 26 22:59:27 crc kubenswrapper[4910]: I0226 22:59:27.346812 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6tz7l_3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed/ovs-vswitchd/0.log" Feb 26 22:59:27 crc kubenswrapper[4910]: I0226 22:59:27.371559 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6tz7l_3dfe364f-bc7d-42a9-a2a6-19cecdbd93ed/ovsdb-server/0.log" Feb 26 22:59:27 crc kubenswrapper[4910]: I0226 22:59:27.619612 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-x8xs6_4c52b5b0-2d14-4120-bfc4-1b2d73bcb4b3/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 22:59:27 crc kubenswrapper[4910]: I0226 
22:59:27.815204 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ac50df3e-b0e1-432e-9749-95c00d5a6281/ovn-northd/0.log" Feb 26 22:59:27 crc kubenswrapper[4910]: I0226 22:59:27.824777 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ac50df3e-b0e1-432e-9749-95c00d5a6281/openstack-network-exporter/0.log" Feb 26 22:59:27 crc kubenswrapper[4910]: I0226 22:59:27.848729 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-proc-0_16b7305a-e063-4e97-b224-61fe0116227f/cloudkitty-proc/0.log" Feb 26 22:59:28 crc kubenswrapper[4910]: I0226 22:59:28.112826 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_17efae9f-593e-4c9f-8803-9090fff6e616/openstack-network-exporter/0.log" Feb 26 22:59:28 crc kubenswrapper[4910]: I0226 22:59:28.121011 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_17efae9f-593e-4c9f-8803-9090fff6e616/ovsdbserver-nb/0.log" Feb 26 22:59:28 crc kubenswrapper[4910]: I0226 22:59:28.224902 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_cb62085e-02e8-4670-8ff4-dc1a7b242eb8/openstack-network-exporter/0.log" Feb 26 22:59:28 crc kubenswrapper[4910]: I0226 22:59:28.333443 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_cb62085e-02e8-4670-8ff4-dc1a7b242eb8/ovsdbserver-sb/0.log" Feb 26 22:59:28 crc kubenswrapper[4910]: I0226 22:59:28.452783 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-c5fdc687b-67zqb_adaa2732-718e-416c-abbb-d344cd1fbbf2/placement-api/0.log" Feb 26 22:59:28 crc kubenswrapper[4910]: I0226 22:59:28.522905 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-c5fdc687b-67zqb_adaa2732-718e-416c-abbb-d344cd1fbbf2/placement-log/0.log" Feb 26 22:59:28 crc kubenswrapper[4910]: I0226 22:59:28.615670 4910 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_5ac37f44-e173-4927-b8ea-44741aa983c0/init-config-reloader/0.log" Feb 26 22:59:28 crc kubenswrapper[4910]: I0226 22:59:28.789942 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_5ac37f44-e173-4927-b8ea-44741aa983c0/config-reloader/0.log" Feb 26 22:59:28 crc kubenswrapper[4910]: I0226 22:59:28.854513 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_5ac37f44-e173-4927-b8ea-44741aa983c0/init-config-reloader/0.log" Feb 26 22:59:28 crc kubenswrapper[4910]: I0226 22:59:28.879102 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_5ac37f44-e173-4927-b8ea-44741aa983c0/thanos-sidecar/0.log" Feb 26 22:59:28 crc kubenswrapper[4910]: I0226 22:59:28.882028 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_5ac37f44-e173-4927-b8ea-44741aa983c0/prometheus/0.log" Feb 26 22:59:29 crc kubenswrapper[4910]: I0226 22:59:29.158924 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1267de00-e6b5-4340-b2e4-5614288011dc/setup-container/0.log" Feb 26 22:59:29 crc kubenswrapper[4910]: I0226 22:59:29.385905 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1267de00-e6b5-4340-b2e4-5614288011dc/setup-container/0.log" Feb 26 22:59:29 crc kubenswrapper[4910]: I0226 22:59:29.403991 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d43ea280-40c0-430e-8d12-41a3522f4f29/setup-container/0.log" Feb 26 22:59:29 crc kubenswrapper[4910]: I0226 22:59:29.421216 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1267de00-e6b5-4340-b2e4-5614288011dc/rabbitmq/0.log" Feb 26 22:59:29 crc kubenswrapper[4910]: I0226 22:59:29.569733 4910 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d43ea280-40c0-430e-8d12-41a3522f4f29/setup-container/0.log" Feb 26 22:59:29 crc kubenswrapper[4910]: I0226 22:59:29.637480 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d43ea280-40c0-430e-8d12-41a3522f4f29/rabbitmq/0.log" Feb 26 22:59:29 crc kubenswrapper[4910]: I0226 22:59:29.676671 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-zw8j2_70ef3159-37b2-41f3-bfa6-90a6dcbfd17c/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 22:59:29 crc kubenswrapper[4910]: I0226 22:59:29.850037 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-2cgvf_0cf7ce56-e96f-47ae-ae77-8e32396de8e4/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 22:59:29 crc kubenswrapper[4910]: I0226 22:59:29.940943 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-d8jfl_34a46ff4-b5ae-4012-bbec-7601cd6a6a5a/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 22:59:30 crc kubenswrapper[4910]: I0226 22:59:30.049062 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-5klzw_29e090ee-08ec-4056-9ae1-6be8b692a15f/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 22:59:30 crc kubenswrapper[4910]: I0226 22:59:30.264124 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-r4srj_209a88c5-ba73-4b03-a77e-404c240f33ac/ssh-known-hosts-edpm-deployment/0.log" Feb 26 22:59:30 crc kubenswrapper[4910]: I0226 22:59:30.611587 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6fc88b699f-nwbd7_1277dd6e-0e1b-4693-9923-a5915b981d6d/proxy-server/0.log" Feb 26 22:59:30 crc kubenswrapper[4910]: I0226 22:59:30.716282 4910 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-cgwcw_1d4bb0af-a11e-4f9f-a420-fc07f0220b10/swift-ring-rebalance/0.log" Feb 26 22:59:30 crc kubenswrapper[4910]: I0226 22:59:30.727724 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6fc88b699f-nwbd7_1277dd6e-0e1b-4693-9923-a5915b981d6d/proxy-httpd/0.log" Feb 26 22:59:30 crc kubenswrapper[4910]: I0226 22:59:30.850030 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_30b027eb-e942-4121-aebc-776d616b902e/account-auditor/0.log" Feb 26 22:59:30 crc kubenswrapper[4910]: I0226 22:59:30.946193 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_30b027eb-e942-4121-aebc-776d616b902e/account-reaper/0.log" Feb 26 22:59:30 crc kubenswrapper[4910]: I0226 22:59:30.988872 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_30b027eb-e942-4121-aebc-776d616b902e/account-replicator/0.log" Feb 26 22:59:31 crc kubenswrapper[4910]: I0226 22:59:31.052323 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_30b027eb-e942-4121-aebc-776d616b902e/account-server/0.log" Feb 26 22:59:31 crc kubenswrapper[4910]: I0226 22:59:31.148272 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_30b027eb-e942-4121-aebc-776d616b902e/container-auditor/0.log" Feb 26 22:59:31 crc kubenswrapper[4910]: I0226 22:59:31.218549 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_30b027eb-e942-4121-aebc-776d616b902e/container-replicator/0.log" Feb 26 22:59:31 crc kubenswrapper[4910]: I0226 22:59:31.226021 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_30b027eb-e942-4121-aebc-776d616b902e/container-server/0.log" Feb 26 22:59:31 crc kubenswrapper[4910]: I0226 22:59:31.319846 4910 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_30b027eb-e942-4121-aebc-776d616b902e/container-updater/0.log" Feb 26 22:59:31 crc kubenswrapper[4910]: I0226 22:59:31.380660 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_30b027eb-e942-4121-aebc-776d616b902e/object-auditor/0.log" Feb 26 22:59:31 crc kubenswrapper[4910]: I0226 22:59:31.466685 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_30b027eb-e942-4121-aebc-776d616b902e/object-replicator/0.log" Feb 26 22:59:31 crc kubenswrapper[4910]: I0226 22:59:31.475655 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_30b027eb-e942-4121-aebc-776d616b902e/object-expirer/0.log" Feb 26 22:59:31 crc kubenswrapper[4910]: I0226 22:59:31.522676 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_30b027eb-e942-4121-aebc-776d616b902e/object-server/0.log" Feb 26 22:59:31 crc kubenswrapper[4910]: I0226 22:59:31.682044 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_30b027eb-e942-4121-aebc-776d616b902e/object-updater/0.log" Feb 26 22:59:31 crc kubenswrapper[4910]: I0226 22:59:31.702213 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_30b027eb-e942-4121-aebc-776d616b902e/rsync/0.log" Feb 26 22:59:31 crc kubenswrapper[4910]: I0226 22:59:31.743355 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_30b027eb-e942-4121-aebc-776d616b902e/swift-recon-cron/0.log" Feb 26 22:59:31 crc kubenswrapper[4910]: I0226 22:59:31.930902 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-sj95z_975c1c11-dac1-4a07-bd11-3ef32ccf0449/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 22:59:31 crc kubenswrapper[4910]: I0226 22:59:31.979420 4910 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_tempest-tests-tempest_221c5dbb-3674-4268-b698-109e2b97d374/tempest-tests-tempest-tests-runner/0.log" Feb 26 22:59:32 crc kubenswrapper[4910]: I0226 22:59:32.183617 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_b70555d9-49c4-455f-8991-996047c0c0a2/test-operator-logs-container/0.log" Feb 26 22:59:32 crc kubenswrapper[4910]: I0226 22:59:32.434878 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-zs82n_0b294c44-bbde-4d8f-bedc-992f4df703e8/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 22:59:35 crc kubenswrapper[4910]: I0226 22:59:35.411230 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_9e2f6ce8-eda1-4196-954a-3367ccc66e33/memcached/0.log" Feb 26 22:59:55 crc kubenswrapper[4910]: I0226 22:59:55.727340 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 22:59:55 crc kubenswrapper[4910]: I0226 22:59:55.727958 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 23:00:00 crc kubenswrapper[4910]: I0226 23:00:00.138241 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535780-t5swx"] Feb 26 23:00:00 crc kubenswrapper[4910]: E0226 23:00:00.139274 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b261f943-e1c6-4e89-ab75-0d7cfc984a84" 
containerName="container-00" Feb 26 23:00:00 crc kubenswrapper[4910]: I0226 23:00:00.139289 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="b261f943-e1c6-4e89-ab75-0d7cfc984a84" containerName="container-00" Feb 26 23:00:00 crc kubenswrapper[4910]: I0226 23:00:00.139511 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="b261f943-e1c6-4e89-ab75-0d7cfc984a84" containerName="container-00" Feb 26 23:00:00 crc kubenswrapper[4910]: I0226 23:00:00.140375 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535780-t5swx" Feb 26 23:00:00 crc kubenswrapper[4910]: I0226 23:00:00.143095 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 23:00:00 crc kubenswrapper[4910]: I0226 23:00:00.143467 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 23:00:00 crc kubenswrapper[4910]: I0226 23:00:00.143654 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 23:00:00 crc kubenswrapper[4910]: I0226 23:00:00.147478 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535780-8pjs5"] Feb 26 23:00:00 crc kubenswrapper[4910]: I0226 23:00:00.149282 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535780-8pjs5" Feb 26 23:00:00 crc kubenswrapper[4910]: I0226 23:00:00.150942 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 23:00:00 crc kubenswrapper[4910]: I0226 23:00:00.151357 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 23:00:00 crc kubenswrapper[4910]: I0226 23:00:00.161395 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535780-t5swx"] Feb 26 23:00:00 crc kubenswrapper[4910]: I0226 23:00:00.182597 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535780-8pjs5"] Feb 26 23:00:00 crc kubenswrapper[4910]: I0226 23:00:00.207887 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snvdf\" (UniqueName: \"kubernetes.io/projected/ceaedc11-b346-4997-83ad-7b8f5e99c1d2-kube-api-access-snvdf\") pod \"auto-csr-approver-29535780-t5swx\" (UID: \"ceaedc11-b346-4997-83ad-7b8f5e99c1d2\") " pod="openshift-infra/auto-csr-approver-29535780-t5swx" Feb 26 23:00:00 crc kubenswrapper[4910]: I0226 23:00:00.207951 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25f81134-d19c-4ec0-b157-baeee8992b68-config-volume\") pod \"collect-profiles-29535780-8pjs5\" (UID: \"25f81134-d19c-4ec0-b157-baeee8992b68\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535780-8pjs5" Feb 26 23:00:00 crc kubenswrapper[4910]: I0226 23:00:00.208011 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pcbw\" (UniqueName: 
\"kubernetes.io/projected/25f81134-d19c-4ec0-b157-baeee8992b68-kube-api-access-6pcbw\") pod \"collect-profiles-29535780-8pjs5\" (UID: \"25f81134-d19c-4ec0-b157-baeee8992b68\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535780-8pjs5" Feb 26 23:00:00 crc kubenswrapper[4910]: I0226 23:00:00.208124 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25f81134-d19c-4ec0-b157-baeee8992b68-secret-volume\") pod \"collect-profiles-29535780-8pjs5\" (UID: \"25f81134-d19c-4ec0-b157-baeee8992b68\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535780-8pjs5" Feb 26 23:00:00 crc kubenswrapper[4910]: I0226 23:00:00.310537 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25f81134-d19c-4ec0-b157-baeee8992b68-secret-volume\") pod \"collect-profiles-29535780-8pjs5\" (UID: \"25f81134-d19c-4ec0-b157-baeee8992b68\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535780-8pjs5" Feb 26 23:00:00 crc kubenswrapper[4910]: I0226 23:00:00.310657 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snvdf\" (UniqueName: \"kubernetes.io/projected/ceaedc11-b346-4997-83ad-7b8f5e99c1d2-kube-api-access-snvdf\") pod \"auto-csr-approver-29535780-t5swx\" (UID: \"ceaedc11-b346-4997-83ad-7b8f5e99c1d2\") " pod="openshift-infra/auto-csr-approver-29535780-t5swx" Feb 26 23:00:00 crc kubenswrapper[4910]: I0226 23:00:00.310694 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25f81134-d19c-4ec0-b157-baeee8992b68-config-volume\") pod \"collect-profiles-29535780-8pjs5\" (UID: \"25f81134-d19c-4ec0-b157-baeee8992b68\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535780-8pjs5" Feb 26 23:00:00 crc kubenswrapper[4910]: I0226 
23:00:00.310752 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pcbw\" (UniqueName: \"kubernetes.io/projected/25f81134-d19c-4ec0-b157-baeee8992b68-kube-api-access-6pcbw\") pod \"collect-profiles-29535780-8pjs5\" (UID: \"25f81134-d19c-4ec0-b157-baeee8992b68\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535780-8pjs5" Feb 26 23:00:00 crc kubenswrapper[4910]: I0226 23:00:00.312950 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25f81134-d19c-4ec0-b157-baeee8992b68-config-volume\") pod \"collect-profiles-29535780-8pjs5\" (UID: \"25f81134-d19c-4ec0-b157-baeee8992b68\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535780-8pjs5" Feb 26 23:00:00 crc kubenswrapper[4910]: I0226 23:00:00.317605 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25f81134-d19c-4ec0-b157-baeee8992b68-secret-volume\") pod \"collect-profiles-29535780-8pjs5\" (UID: \"25f81134-d19c-4ec0-b157-baeee8992b68\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535780-8pjs5" Feb 26 23:00:00 crc kubenswrapper[4910]: I0226 23:00:00.328795 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snvdf\" (UniqueName: \"kubernetes.io/projected/ceaedc11-b346-4997-83ad-7b8f5e99c1d2-kube-api-access-snvdf\") pod \"auto-csr-approver-29535780-t5swx\" (UID: \"ceaedc11-b346-4997-83ad-7b8f5e99c1d2\") " pod="openshift-infra/auto-csr-approver-29535780-t5swx" Feb 26 23:00:00 crc kubenswrapper[4910]: I0226 23:00:00.344102 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pcbw\" (UniqueName: \"kubernetes.io/projected/25f81134-d19c-4ec0-b157-baeee8992b68-kube-api-access-6pcbw\") pod \"collect-profiles-29535780-8pjs5\" (UID: \"25f81134-d19c-4ec0-b157-baeee8992b68\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29535780-8pjs5" Feb 26 23:00:00 crc kubenswrapper[4910]: I0226 23:00:00.461802 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535780-t5swx" Feb 26 23:00:00 crc kubenswrapper[4910]: I0226 23:00:00.474415 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535780-8pjs5" Feb 26 23:00:01 crc kubenswrapper[4910]: I0226 23:00:01.037657 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535780-t5swx"] Feb 26 23:00:01 crc kubenswrapper[4910]: I0226 23:00:01.165819 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535780-8pjs5"] Feb 26 23:00:01 crc kubenswrapper[4910]: W0226 23:00:01.167295 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25f81134_d19c_4ec0_b157_baeee8992b68.slice/crio-1d22fcc656cda42c559720db6599106fc7a5cf98a7e5b069d5923f5789770dfd WatchSource:0}: Error finding container 1d22fcc656cda42c559720db6599106fc7a5cf98a7e5b069d5923f5789770dfd: Status 404 returned error can't find the container with id 1d22fcc656cda42c559720db6599106fc7a5cf98a7e5b069d5923f5789770dfd Feb 26 23:00:01 crc kubenswrapper[4910]: I0226 23:00:01.299021 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf_e9b2f54c-fb64-4558-8e94-a42502a023f4/util/0.log" Feb 26 23:00:01 crc kubenswrapper[4910]: I0226 23:00:01.474473 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf_e9b2f54c-fb64-4558-8e94-a42502a023f4/util/0.log" Feb 26 23:00:01 crc kubenswrapper[4910]: I0226 23:00:01.556646 4910 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf_e9b2f54c-fb64-4558-8e94-a42502a023f4/pull/0.log" Feb 26 23:00:01 crc kubenswrapper[4910]: I0226 23:00:01.562952 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf_e9b2f54c-fb64-4558-8e94-a42502a023f4/pull/0.log" Feb 26 23:00:01 crc kubenswrapper[4910]: I0226 23:00:01.751178 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf_e9b2f54c-fb64-4558-8e94-a42502a023f4/util/0.log" Feb 26 23:00:01 crc kubenswrapper[4910]: I0226 23:00:01.780956 4910 generic.go:334] "Generic (PLEG): container finished" podID="25f81134-d19c-4ec0-b157-baeee8992b68" containerID="364327c44cc393d9d46bb57be6e7dbc63037574bfa87720c7621e569f4ebcce3" exitCode=0 Feb 26 23:00:01 crc kubenswrapper[4910]: I0226 23:00:01.781029 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535780-8pjs5" event={"ID":"25f81134-d19c-4ec0-b157-baeee8992b68","Type":"ContainerDied","Data":"364327c44cc393d9d46bb57be6e7dbc63037574bfa87720c7621e569f4ebcce3"} Feb 26 23:00:01 crc kubenswrapper[4910]: I0226 23:00:01.781056 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535780-8pjs5" event={"ID":"25f81134-d19c-4ec0-b157-baeee8992b68","Type":"ContainerStarted","Data":"1d22fcc656cda42c559720db6599106fc7a5cf98a7e5b069d5923f5789770dfd"} Feb 26 23:00:01 crc kubenswrapper[4910]: I0226 23:00:01.782640 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535780-t5swx" event={"ID":"ceaedc11-b346-4997-83ad-7b8f5e99c1d2","Type":"ContainerStarted","Data":"e4d08563903c3a131b9e6342eeb336d344d79f1bdc615cff0470da8e5069aea4"} Feb 26 23:00:01 crc kubenswrapper[4910]: 
I0226 23:00:01.785328 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf_e9b2f54c-fb64-4558-8e94-a42502a023f4/extract/0.log" Feb 26 23:00:01 crc kubenswrapper[4910]: I0226 23:00:01.841181 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2045d9c2d149b742e48f276cd1b608759cea9b21f2c21d1c5a056f96ddzqqzf_e9b2f54c-fb64-4558-8e94-a42502a023f4/pull/0.log" Feb 26 23:00:02 crc kubenswrapper[4910]: I0226 23:00:02.204010 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-t4bqx_97af21d2-e5ce-468b-bbf9-0e663577a30b/manager/0.log" Feb 26 23:00:02 crc kubenswrapper[4910]: I0226 23:00:02.571922 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-784b5bb6c5-l7m5c_71e40b76-d83e-41f5-a184-5d062a8291e4/manager/0.log" Feb 26 23:00:02 crc kubenswrapper[4910]: I0226 23:00:02.825878 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-mqd2x_1f356802-8833-4a65-8e65-c9bab59c1080/manager/0.log" Feb 26 23:00:03 crc kubenswrapper[4910]: I0226 23:00:03.047710 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-lkjvb_19d23d2a-dce7-45c4-9cbd-ae14e8205aa7/manager/0.log" Feb 26 23:00:03 crc kubenswrapper[4910]: I0226 23:00:03.383269 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535780-8pjs5" Feb 26 23:00:03 crc kubenswrapper[4910]: I0226 23:00:03.477918 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25f81134-d19c-4ec0-b157-baeee8992b68-config-volume\") pod \"25f81134-d19c-4ec0-b157-baeee8992b68\" (UID: \"25f81134-d19c-4ec0-b157-baeee8992b68\") " Feb 26 23:00:03 crc kubenswrapper[4910]: I0226 23:00:03.477974 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pcbw\" (UniqueName: \"kubernetes.io/projected/25f81134-d19c-4ec0-b157-baeee8992b68-kube-api-access-6pcbw\") pod \"25f81134-d19c-4ec0-b157-baeee8992b68\" (UID: \"25f81134-d19c-4ec0-b157-baeee8992b68\") " Feb 26 23:00:03 crc kubenswrapper[4910]: I0226 23:00:03.478242 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25f81134-d19c-4ec0-b157-baeee8992b68-secret-volume\") pod \"25f81134-d19c-4ec0-b157-baeee8992b68\" (UID: \"25f81134-d19c-4ec0-b157-baeee8992b68\") " Feb 26 23:00:03 crc kubenswrapper[4910]: I0226 23:00:03.478557 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25f81134-d19c-4ec0-b157-baeee8992b68-config-volume" (OuterVolumeSpecName: "config-volume") pod "25f81134-d19c-4ec0-b157-baeee8992b68" (UID: "25f81134-d19c-4ec0-b157-baeee8992b68"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 23:00:03 crc kubenswrapper[4910]: I0226 23:00:03.478720 4910 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25f81134-d19c-4ec0-b157-baeee8992b68-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 23:00:03 crc kubenswrapper[4910]: I0226 23:00:03.485458 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25f81134-d19c-4ec0-b157-baeee8992b68-kube-api-access-6pcbw" (OuterVolumeSpecName: "kube-api-access-6pcbw") pod "25f81134-d19c-4ec0-b157-baeee8992b68" (UID: "25f81134-d19c-4ec0-b157-baeee8992b68"). InnerVolumeSpecName "kube-api-access-6pcbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 23:00:03 crc kubenswrapper[4910]: I0226 23:00:03.486022 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25f81134-d19c-4ec0-b157-baeee8992b68-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "25f81134-d19c-4ec0-b157-baeee8992b68" (UID: "25f81134-d19c-4ec0-b157-baeee8992b68"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 23:00:03 crc kubenswrapper[4910]: I0226 23:00:03.580240 4910 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25f81134-d19c-4ec0-b157-baeee8992b68-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 23:00:03 crc kubenswrapper[4910]: I0226 23:00:03.580275 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pcbw\" (UniqueName: \"kubernetes.io/projected/25f81134-d19c-4ec0-b157-baeee8992b68-kube-api-access-6pcbw\") on node \"crc\" DevicePath \"\"" Feb 26 23:00:03 crc kubenswrapper[4910]: I0226 23:00:03.773113 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-dt9gd_1d3bc056-7a65-4188-b408-66892dbc6c86/manager/0.log" Feb 26 23:00:03 crc kubenswrapper[4910]: I0226 23:00:03.810893 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535780-8pjs5" event={"ID":"25f81134-d19c-4ec0-b157-baeee8992b68","Type":"ContainerDied","Data":"1d22fcc656cda42c559720db6599106fc7a5cf98a7e5b069d5923f5789770dfd"} Feb 26 23:00:03 crc kubenswrapper[4910]: I0226 23:00:03.810948 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d22fcc656cda42c559720db6599106fc7a5cf98a7e5b069d5923f5789770dfd" Feb 26 23:00:03 crc kubenswrapper[4910]: I0226 23:00:03.811051 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535780-8pjs5"
Feb 26 23:00:04 crc kubenswrapper[4910]: I0226 23:00:03.944672 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-7n5lg_1326400f-df88-407f-807c-05182d879101/manager/0.log"
Feb 26 23:00:04 crc kubenswrapper[4910]: I0226 23:00:04.000926 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-j9hll_baa339f1-af25-4789-899c-b6ffed7c4ac0/manager/0.log"
Feb 26 23:00:04 crc kubenswrapper[4910]: I0226 23:00:04.242037 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-dkwl5_1e643756-1a6a-4654-af77-5b9d0f1433f2/manager/0.log"
Feb 26 23:00:04 crc kubenswrapper[4910]: I0226 23:00:04.278539 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-v9ll5_ee918cc9-f4d9-49b1-9d9e-1d37c4aa7946/manager/0.log"
Feb 26 23:00:04 crc kubenswrapper[4910]: I0226 23:00:04.462219 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535735-6z7zw"]
Feb 26 23:00:04 crc kubenswrapper[4910]: I0226 23:00:04.475881 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535735-6z7zw"]
Feb 26 23:00:04 crc kubenswrapper[4910]: I0226 23:00:04.577635 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-smv6m_aff3f8d4-51e9-4557-bc9d-497d587f667a/manager/0.log"
Feb 26 23:00:04 crc kubenswrapper[4910]: I0226 23:00:04.778894 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6bd4687957-csc7d_77b8f8f5-e1e3-4d68-80a1-fff99d000d3a/manager/0.log"
Feb 26 23:00:04 crc kubenswrapper[4910]: I0226 23:00:04.827021 4910 generic.go:334] "Generic (PLEG): container finished" podID="ceaedc11-b346-4997-83ad-7b8f5e99c1d2" containerID="68118571a07131556745b8126880025677360a99bce4143ed48416fc56811982" exitCode=0
Feb 26 23:00:04 crc kubenswrapper[4910]: I0226 23:00:04.827062 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535780-t5swx" event={"ID":"ceaedc11-b346-4997-83ad-7b8f5e99c1d2","Type":"ContainerDied","Data":"68118571a07131556745b8126880025677360a99bce4143ed48416fc56811982"}
Feb 26 23:00:04 crc kubenswrapper[4910]: I0226 23:00:04.977120 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-24d89_05b1662b-98cb-4867-9cf1-4272c173cf1f/manager/0.log"
Feb 26 23:00:05 crc kubenswrapper[4910]: I0226 23:00:05.146012 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-659dc6bbfc-r6zct_d2d30287-5f0f-45fd-ae7f-23614ffab2fc/manager/0.log"
Feb 26 23:00:05 crc kubenswrapper[4910]: I0226 23:00:05.280916 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cn7hn2_12f2404a-45bb-416e-b4d4-da70f869fbbf/manager/0.log"
Feb 26 23:00:05 crc kubenswrapper[4910]: I0226 23:00:05.596713 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6644c86975-7wcgz_4d146c5d-99f9-4731-a825-620f150b91e5/operator/0.log"
Feb 26 23:00:05 crc kubenswrapper[4910]: I0226 23:00:05.922137 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e84cab4-c7ab-4e27-92f7-d908fca3c538" path="/var/lib/kubelet/pods/8e84cab4-c7ab-4e27-92f7-d908fca3c538/volumes"
Feb 26 23:00:06 crc kubenswrapper[4910]: I0226 23:00:06.027865 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-2wxf5_a9343243-68ce-4316-bac0-563847d32436/registry-server/0.log"
Feb 26 23:00:06 crc kubenswrapper[4910]: I0226 23:00:06.348671 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535780-t5swx"
Feb 26 23:00:06 crc kubenswrapper[4910]: I0226 23:00:06.367572 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5955d8c787-vsr8s_9a4baffc-2491-42c4-838b-1ef90d643817/manager/0.log"
Feb 26 23:00:06 crc kubenswrapper[4910]: I0226 23:00:06.368896 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-czqwc_f93e1099-5db6-45f0-a344-5d05183572d1/manager/0.log"
Feb 26 23:00:06 crc kubenswrapper[4910]: I0226 23:00:06.473356 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snvdf\" (UniqueName: \"kubernetes.io/projected/ceaedc11-b346-4997-83ad-7b8f5e99c1d2-kube-api-access-snvdf\") pod \"ceaedc11-b346-4997-83ad-7b8f5e99c1d2\" (UID: \"ceaedc11-b346-4997-83ad-7b8f5e99c1d2\") "
Feb 26 23:00:06 crc kubenswrapper[4910]: I0226 23:00:06.479477 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceaedc11-b346-4997-83ad-7b8f5e99c1d2-kube-api-access-snvdf" (OuterVolumeSpecName: "kube-api-access-snvdf") pod "ceaedc11-b346-4997-83ad-7b8f5e99c1d2" (UID: "ceaedc11-b346-4997-83ad-7b8f5e99c1d2"). InnerVolumeSpecName "kube-api-access-snvdf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 23:00:06 crc kubenswrapper[4910]: I0226 23:00:06.575415 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snvdf\" (UniqueName: \"kubernetes.io/projected/ceaedc11-b346-4997-83ad-7b8f5e99c1d2-kube-api-access-snvdf\") on node \"crc\" DevicePath \"\""
Feb 26 23:00:06 crc kubenswrapper[4910]: I0226 23:00:06.593593 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-b7vkl_f3c3500f-cc15-4b62-b13f-b99aeb97a413/operator/0.log"
Feb 26 23:00:06 crc kubenswrapper[4910]: I0226 23:00:06.782732 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-6pf2w_3d9f959c-b8a6-415a-adf5-a20b0fbc3511/manager/0.log"
Feb 26 23:00:06 crc kubenswrapper[4910]: I0226 23:00:06.846918 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535780-t5swx" event={"ID":"ceaedc11-b346-4997-83ad-7b8f5e99c1d2","Type":"ContainerDied","Data":"e4d08563903c3a131b9e6342eeb336d344d79f1bdc615cff0470da8e5069aea4"}
Feb 26 23:00:06 crc kubenswrapper[4910]: I0226 23:00:06.846955 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4d08563903c3a131b9e6342eeb336d344d79f1bdc615cff0470da8e5069aea4"
Feb 26 23:00:06 crc kubenswrapper[4910]: I0226 23:00:06.847028 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535780-t5swx"
Feb 26 23:00:07 crc kubenswrapper[4910]: I0226 23:00:07.269249 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-69ffc89494-n87vn_bfeb8151-7f06-4d68-a3a2-d4c267563b43/manager/0.log"
Feb 26 23:00:07 crc kubenswrapper[4910]: I0226 23:00:07.413494 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535774-585cs"]
Feb 26 23:00:07 crc kubenswrapper[4910]: I0226 23:00:07.430050 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535774-585cs"]
Feb 26 23:00:07 crc kubenswrapper[4910]: I0226 23:00:07.457756 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85bcd67d77-jrqfq_4581d31c-adac-40f4-80ec-53142bc04c02/manager/0.log"
Feb 26 23:00:07 crc kubenswrapper[4910]: I0226 23:00:07.886948 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5dc6794d5b-4q6xr_9fad2a9b-74b5-4bbf-a031-949aef704413/manager/0.log"
Feb 26 23:00:07 crc kubenswrapper[4910]: I0226 23:00:07.921241 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15c3b6d3-02af-4746-8a7e-5881a9ce595e" path="/var/lib/kubelet/pods/15c3b6d3-02af-4746-8a7e-5881a9ce595e/volumes"
Feb 26 23:00:07 crc kubenswrapper[4910]: I0226 23:00:07.979720 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-jjxzk_3b1b5fc5-da86-41ae-996e-9273627c5e62/manager/0.log"
Feb 26 23:00:09 crc kubenswrapper[4910]: I0226 23:00:09.467973 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-nrbns_6a6c3a70-66b0-4b20-b4cd-fa1d8fbc228e/manager/0.log"
Feb 26 23:00:25 crc kubenswrapper[4910]: I0226 23:00:25.727673 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 23:00:25 crc kubenswrapper[4910]: I0226 23:00:25.728641 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 23:00:30 crc kubenswrapper[4910]: I0226 23:00:30.125553 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-hl7fn_9321ff73-5107-4139-ad6f-622b13de5cd1/control-plane-machine-set-operator/0.log"
Feb 26 23:00:30 crc kubenswrapper[4910]: I0226 23:00:30.293761 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-nxzt6_1bb89394-7073-4408-a891-f4a6eb44eaa7/kube-rbac-proxy/0.log"
Feb 26 23:00:30 crc kubenswrapper[4910]: I0226 23:00:30.327404 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-nxzt6_1bb89394-7073-4408-a891-f4a6eb44eaa7/machine-api-operator/0.log"
Feb 26 23:00:40 crc kubenswrapper[4910]: I0226 23:00:40.804755 4910 scope.go:117] "RemoveContainer" containerID="e934e2987550b4a3a8eb155b5161985be24f9607a4b3c5f3a3c67e56f8da9634"
Feb 26 23:00:40 crc kubenswrapper[4910]: I0226 23:00:40.833134 4910 scope.go:117] "RemoveContainer" containerID="c8d3b791e760d99a13b1782e5ef512ce7700ad8eaa2cfa81808f32472d346d2f"
Feb 26 23:00:44 crc kubenswrapper[4910]: I0226 23:00:44.441938 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-p8rdf_be6cb3a9-88a5-49cc-9843-782d9641b5fe/cert-manager-controller/0.log"
Feb 26 23:00:44 crc kubenswrapper[4910]: I0226 23:00:44.562461 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-xnwqc_d9ce7039-0688-4b29-9484-5066034d4d02/cert-manager-cainjector/0.log"
Feb 26 23:00:44 crc kubenswrapper[4910]: I0226 23:00:44.676919 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-s5lhf_cc433bc4-06d0-4126-8d67-f8caaf596d86/cert-manager-webhook/0.log"
Feb 26 23:00:55 crc kubenswrapper[4910]: I0226 23:00:55.727533 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 23:00:55 crc kubenswrapper[4910]: I0226 23:00:55.728025 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 23:00:55 crc kubenswrapper[4910]: I0226 23:00:55.728067 4910 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4"
Feb 26 23:00:55 crc kubenswrapper[4910]: I0226 23:00:55.728727 4910 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f"} pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 26 23:00:55 crc kubenswrapper[4910]: I0226 23:00:55.728773 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" containerID="cri-o://74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f" gracePeriod=600
Feb 26 23:00:55 crc kubenswrapper[4910]: E0226 23:00:55.853499 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419"
Feb 26 23:00:56 crc kubenswrapper[4910]: I0226 23:00:56.404209 4910 generic.go:334] "Generic (PLEG): container finished" podID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerID="74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f" exitCode=0
Feb 26 23:00:56 crc kubenswrapper[4910]: I0226 23:00:56.404258 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerDied","Data":"74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f"}
Feb 26 23:00:56 crc kubenswrapper[4910]: I0226 23:00:56.404292 4910 scope.go:117] "RemoveContainer" containerID="71354e58e453bb98bc8e73f6f274dd8d77953aba228bbccd64f152573ebcdcb1"
Feb 26 23:00:56 crc kubenswrapper[4910]: I0226 23:00:56.405047 4910 scope.go:117] "RemoveContainer" containerID="74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f"
Feb 26 23:00:56 crc kubenswrapper[4910]: E0226 23:00:56.405472 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419"
Feb 26 23:00:58 crc kubenswrapper[4910]: I0226 23:00:58.214263 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-5z45x_4ebece5c-93ed-4dc8-bbaf-53682b6f95a5/nmstate-console-plugin/0.log"
Feb 26 23:00:58 crc kubenswrapper[4910]: I0226 23:00:58.469673 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-7c5lt_da33d221-fd03-423a-a7ac-6f74130cb62a/nmstate-handler/0.log"
Feb 26 23:00:58 crc kubenswrapper[4910]: I0226 23:00:58.594101 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-hjdg5_990d9775-20ab-4a75-b035-14be139e1c86/nmstate-metrics/0.log"
Feb 26 23:00:58 crc kubenswrapper[4910]: I0226 23:00:58.594788 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-hjdg5_990d9775-20ab-4a75-b035-14be139e1c86/kube-rbac-proxy/0.log"
Feb 26 23:00:58 crc kubenswrapper[4910]: I0226 23:00:58.698627 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-rxpwx_bb1bab0a-7f00-4ce2-9c96-5a5581cf3b89/nmstate-operator/0.log"
Feb 26 23:00:58 crc kubenswrapper[4910]: I0226 23:00:58.759718 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-g57p5_29f7e558-2f35-4fbe-b29a-c1f04082478c/nmstate-webhook/0.log"
Feb 26 23:01:00 crc kubenswrapper[4910]: I0226 23:01:00.164760 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29535781-mjn2f"]
Feb 26 23:01:00 crc kubenswrapper[4910]: E0226 23:01:00.165778 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceaedc11-b346-4997-83ad-7b8f5e99c1d2" containerName="oc"
Feb 26 23:01:00 crc kubenswrapper[4910]: I0226 23:01:00.165810 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceaedc11-b346-4997-83ad-7b8f5e99c1d2" containerName="oc"
Feb 26 23:01:00 crc kubenswrapper[4910]: E0226 23:01:00.165843 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25f81134-d19c-4ec0-b157-baeee8992b68" containerName="collect-profiles"
Feb 26 23:01:00 crc kubenswrapper[4910]: I0226 23:01:00.165857 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="25f81134-d19c-4ec0-b157-baeee8992b68" containerName="collect-profiles"
Feb 26 23:01:00 crc kubenswrapper[4910]: I0226 23:01:00.166213 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="25f81134-d19c-4ec0-b157-baeee8992b68" containerName="collect-profiles"
Feb 26 23:01:00 crc kubenswrapper[4910]: I0226 23:01:00.166256 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceaedc11-b346-4997-83ad-7b8f5e99c1d2" containerName="oc"
Feb 26 23:01:00 crc kubenswrapper[4910]: I0226 23:01:00.167323 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29535781-mjn2f"
Feb 26 23:01:00 crc kubenswrapper[4910]: I0226 23:01:00.199868 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29535781-mjn2f"]
Feb 26 23:01:00 crc kubenswrapper[4910]: I0226 23:01:00.311856 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e54264e1-642a-4cf4-bb34-a141fb8d12d9-config-data\") pod \"keystone-cron-29535781-mjn2f\" (UID: \"e54264e1-642a-4cf4-bb34-a141fb8d12d9\") " pod="openstack/keystone-cron-29535781-mjn2f"
Feb 26 23:01:00 crc kubenswrapper[4910]: I0226 23:01:00.311973 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e54264e1-642a-4cf4-bb34-a141fb8d12d9-combined-ca-bundle\") pod \"keystone-cron-29535781-mjn2f\" (UID: \"e54264e1-642a-4cf4-bb34-a141fb8d12d9\") " pod="openstack/keystone-cron-29535781-mjn2f"
Feb 26 23:01:00 crc kubenswrapper[4910]: I0226 23:01:00.312059 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tmt8\" (UniqueName: \"kubernetes.io/projected/e54264e1-642a-4cf4-bb34-a141fb8d12d9-kube-api-access-9tmt8\") pod \"keystone-cron-29535781-mjn2f\" (UID: \"e54264e1-642a-4cf4-bb34-a141fb8d12d9\") " pod="openstack/keystone-cron-29535781-mjn2f"
Feb 26 23:01:00 crc kubenswrapper[4910]: I0226 23:01:00.312085 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e54264e1-642a-4cf4-bb34-a141fb8d12d9-fernet-keys\") pod \"keystone-cron-29535781-mjn2f\" (UID: \"e54264e1-642a-4cf4-bb34-a141fb8d12d9\") " pod="openstack/keystone-cron-29535781-mjn2f"
Feb 26 23:01:00 crc kubenswrapper[4910]: I0226 23:01:00.413728 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tmt8\" (UniqueName: \"kubernetes.io/projected/e54264e1-642a-4cf4-bb34-a141fb8d12d9-kube-api-access-9tmt8\") pod \"keystone-cron-29535781-mjn2f\" (UID: \"e54264e1-642a-4cf4-bb34-a141fb8d12d9\") " pod="openstack/keystone-cron-29535781-mjn2f"
Feb 26 23:01:00 crc kubenswrapper[4910]: I0226 23:01:00.413786 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e54264e1-642a-4cf4-bb34-a141fb8d12d9-fernet-keys\") pod \"keystone-cron-29535781-mjn2f\" (UID: \"e54264e1-642a-4cf4-bb34-a141fb8d12d9\") " pod="openstack/keystone-cron-29535781-mjn2f"
Feb 26 23:01:00 crc kubenswrapper[4910]: I0226 23:01:00.413870 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e54264e1-642a-4cf4-bb34-a141fb8d12d9-config-data\") pod \"keystone-cron-29535781-mjn2f\" (UID: \"e54264e1-642a-4cf4-bb34-a141fb8d12d9\") " pod="openstack/keystone-cron-29535781-mjn2f"
Feb 26 23:01:00 crc kubenswrapper[4910]: I0226 23:01:00.413924 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e54264e1-642a-4cf4-bb34-a141fb8d12d9-combined-ca-bundle\") pod \"keystone-cron-29535781-mjn2f\" (UID: \"e54264e1-642a-4cf4-bb34-a141fb8d12d9\") " pod="openstack/keystone-cron-29535781-mjn2f"
Feb 26 23:01:00 crc kubenswrapper[4910]: I0226 23:01:00.419799 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e54264e1-642a-4cf4-bb34-a141fb8d12d9-combined-ca-bundle\") pod \"keystone-cron-29535781-mjn2f\" (UID: \"e54264e1-642a-4cf4-bb34-a141fb8d12d9\") " pod="openstack/keystone-cron-29535781-mjn2f"
Feb 26 23:01:00 crc kubenswrapper[4910]: I0226 23:01:00.419994 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e54264e1-642a-4cf4-bb34-a141fb8d12d9-fernet-keys\") pod \"keystone-cron-29535781-mjn2f\" (UID: \"e54264e1-642a-4cf4-bb34-a141fb8d12d9\") " pod="openstack/keystone-cron-29535781-mjn2f"
Feb 26 23:01:00 crc kubenswrapper[4910]: I0226 23:01:00.426374 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e54264e1-642a-4cf4-bb34-a141fb8d12d9-config-data\") pod \"keystone-cron-29535781-mjn2f\" (UID: \"e54264e1-642a-4cf4-bb34-a141fb8d12d9\") " pod="openstack/keystone-cron-29535781-mjn2f"
Feb 26 23:01:00 crc kubenswrapper[4910]: I0226 23:01:00.432154 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tmt8\" (UniqueName: \"kubernetes.io/projected/e54264e1-642a-4cf4-bb34-a141fb8d12d9-kube-api-access-9tmt8\") pod \"keystone-cron-29535781-mjn2f\" (UID: \"e54264e1-642a-4cf4-bb34-a141fb8d12d9\") " pod="openstack/keystone-cron-29535781-mjn2f"
Feb 26 23:01:00 crc kubenswrapper[4910]: I0226 23:01:00.494882 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29535781-mjn2f"
Feb 26 23:01:00 crc kubenswrapper[4910]: I0226 23:01:00.977861 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29535781-mjn2f"]
Feb 26 23:01:01 crc kubenswrapper[4910]: I0226 23:01:01.462700 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535781-mjn2f" event={"ID":"e54264e1-642a-4cf4-bb34-a141fb8d12d9","Type":"ContainerStarted","Data":"27020e5314e4de90b81fbdd8a9521bdf7500d9370198f6b7b9e9a48f8927210b"}
Feb 26 23:01:01 crc kubenswrapper[4910]: I0226 23:01:01.463091 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535781-mjn2f" event={"ID":"e54264e1-642a-4cf4-bb34-a141fb8d12d9","Type":"ContainerStarted","Data":"cdb67236833971d399bec134be18d3d47379d4d56a133b56c6df9aafe0493465"}
Feb 26 23:01:01 crc kubenswrapper[4910]: I0226 23:01:01.493317 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29535781-mjn2f" podStartSLOduration=1.493301786 podStartE2EDuration="1.493301786s" podCreationTimestamp="2026-02-26 23:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 23:01:01.489939256 +0000 UTC m=+3946.569429807" watchObservedRunningTime="2026-02-26 23:01:01.493301786 +0000 UTC m=+3946.572792327"
Feb 26 23:01:05 crc kubenswrapper[4910]: I0226 23:01:05.513862 4910 generic.go:334] "Generic (PLEG): container finished" podID="e54264e1-642a-4cf4-bb34-a141fb8d12d9" containerID="27020e5314e4de90b81fbdd8a9521bdf7500d9370198f6b7b9e9a48f8927210b" exitCode=0
Feb 26 23:01:05 crc kubenswrapper[4910]: I0226 23:01:05.513986 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535781-mjn2f" event={"ID":"e54264e1-642a-4cf4-bb34-a141fb8d12d9","Type":"ContainerDied","Data":"27020e5314e4de90b81fbdd8a9521bdf7500d9370198f6b7b9e9a48f8927210b"}
Feb 26 23:01:07 crc kubenswrapper[4910]: I0226 23:01:07.005588 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29535781-mjn2f"
Feb 26 23:01:07 crc kubenswrapper[4910]: I0226 23:01:07.064140 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tmt8\" (UniqueName: \"kubernetes.io/projected/e54264e1-642a-4cf4-bb34-a141fb8d12d9-kube-api-access-9tmt8\") pod \"e54264e1-642a-4cf4-bb34-a141fb8d12d9\" (UID: \"e54264e1-642a-4cf4-bb34-a141fb8d12d9\") "
Feb 26 23:01:07 crc kubenswrapper[4910]: I0226 23:01:07.064247 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e54264e1-642a-4cf4-bb34-a141fb8d12d9-config-data\") pod \"e54264e1-642a-4cf4-bb34-a141fb8d12d9\" (UID: \"e54264e1-642a-4cf4-bb34-a141fb8d12d9\") "
Feb 26 23:01:07 crc kubenswrapper[4910]: I0226 23:01:07.064441 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e54264e1-642a-4cf4-bb34-a141fb8d12d9-combined-ca-bundle\") pod \"e54264e1-642a-4cf4-bb34-a141fb8d12d9\" (UID: \"e54264e1-642a-4cf4-bb34-a141fb8d12d9\") "
Feb 26 23:01:07 crc kubenswrapper[4910]: I0226 23:01:07.064514 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e54264e1-642a-4cf4-bb34-a141fb8d12d9-fernet-keys\") pod \"e54264e1-642a-4cf4-bb34-a141fb8d12d9\" (UID: \"e54264e1-642a-4cf4-bb34-a141fb8d12d9\") "
Feb 26 23:01:07 crc kubenswrapper[4910]: I0226 23:01:07.070335 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e54264e1-642a-4cf4-bb34-a141fb8d12d9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e54264e1-642a-4cf4-bb34-a141fb8d12d9" (UID: "e54264e1-642a-4cf4-bb34-a141fb8d12d9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 23:01:07 crc kubenswrapper[4910]: I0226 23:01:07.070627 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e54264e1-642a-4cf4-bb34-a141fb8d12d9-kube-api-access-9tmt8" (OuterVolumeSpecName: "kube-api-access-9tmt8") pod "e54264e1-642a-4cf4-bb34-a141fb8d12d9" (UID: "e54264e1-642a-4cf4-bb34-a141fb8d12d9"). InnerVolumeSpecName "kube-api-access-9tmt8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 23:01:07 crc kubenswrapper[4910]: I0226 23:01:07.106290 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e54264e1-642a-4cf4-bb34-a141fb8d12d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e54264e1-642a-4cf4-bb34-a141fb8d12d9" (UID: "e54264e1-642a-4cf4-bb34-a141fb8d12d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 23:01:07 crc kubenswrapper[4910]: I0226 23:01:07.128284 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e54264e1-642a-4cf4-bb34-a141fb8d12d9-config-data" (OuterVolumeSpecName: "config-data") pod "e54264e1-642a-4cf4-bb34-a141fb8d12d9" (UID: "e54264e1-642a-4cf4-bb34-a141fb8d12d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 23:01:07 crc kubenswrapper[4910]: I0226 23:01:07.167185 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tmt8\" (UniqueName: \"kubernetes.io/projected/e54264e1-642a-4cf4-bb34-a141fb8d12d9-kube-api-access-9tmt8\") on node \"crc\" DevicePath \"\""
Feb 26 23:01:07 crc kubenswrapper[4910]: I0226 23:01:07.167228 4910 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e54264e1-642a-4cf4-bb34-a141fb8d12d9-config-data\") on node \"crc\" DevicePath \"\""
Feb 26 23:01:07 crc kubenswrapper[4910]: I0226 23:01:07.167239 4910 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e54264e1-642a-4cf4-bb34-a141fb8d12d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 23:01:07 crc kubenswrapper[4910]: I0226 23:01:07.167247 4910 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e54264e1-642a-4cf4-bb34-a141fb8d12d9-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 26 23:01:07 crc kubenswrapper[4910]: I0226 23:01:07.535455 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535781-mjn2f" event={"ID":"e54264e1-642a-4cf4-bb34-a141fb8d12d9","Type":"ContainerDied","Data":"cdb67236833971d399bec134be18d3d47379d4d56a133b56c6df9aafe0493465"}
Feb 26 23:01:07 crc kubenswrapper[4910]: I0226 23:01:07.535513 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdb67236833971d399bec134be18d3d47379d4d56a133b56c6df9aafe0493465"
Feb 26 23:01:07 crc kubenswrapper[4910]: I0226 23:01:07.535518 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29535781-mjn2f"
Feb 26 23:01:07 crc kubenswrapper[4910]: E0226 23:01:07.780980 4910 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode54264e1_642a_4cf4_bb34_a141fb8d12d9.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode54264e1_642a_4cf4_bb34_a141fb8d12d9.slice/crio-cdb67236833971d399bec134be18d3d47379d4d56a133b56c6df9aafe0493465\": RecentStats: unable to find data in memory cache]"
Feb 26 23:01:07 crc kubenswrapper[4910]: I0226 23:01:07.906819 4910 scope.go:117] "RemoveContainer" containerID="74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f"
Feb 26 23:01:07 crc kubenswrapper[4910]: E0226 23:01:07.907431 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419"
Feb 26 23:01:08 crc kubenswrapper[4910]: I0226 23:01:08.599676 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2m4vx"]
Feb 26 23:01:08 crc kubenswrapper[4910]: E0226 23:01:08.600351 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e54264e1-642a-4cf4-bb34-a141fb8d12d9" containerName="keystone-cron"
Feb 26 23:01:08 crc kubenswrapper[4910]: I0226 23:01:08.600371 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="e54264e1-642a-4cf4-bb34-a141fb8d12d9" containerName="keystone-cron"
Feb 26 23:01:08 crc kubenswrapper[4910]: I0226 23:01:08.600615 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="e54264e1-642a-4cf4-bb34-a141fb8d12d9" containerName="keystone-cron"
Feb 26 23:01:08 crc kubenswrapper[4910]: I0226 23:01:08.602578 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2m4vx"
Feb 26 23:01:08 crc kubenswrapper[4910]: I0226 23:01:08.629313 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2m4vx"]
Feb 26 23:01:08 crc kubenswrapper[4910]: I0226 23:01:08.697446 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/591ab073-2dac-428c-9c24-6cdf01e5d52d-catalog-content\") pod \"community-operators-2m4vx\" (UID: \"591ab073-2dac-428c-9c24-6cdf01e5d52d\") " pod="openshift-marketplace/community-operators-2m4vx"
Feb 26 23:01:08 crc kubenswrapper[4910]: I0226 23:01:08.697881 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/591ab073-2dac-428c-9c24-6cdf01e5d52d-utilities\") pod \"community-operators-2m4vx\" (UID: \"591ab073-2dac-428c-9c24-6cdf01e5d52d\") " pod="openshift-marketplace/community-operators-2m4vx"
Feb 26 23:01:08 crc kubenswrapper[4910]: I0226 23:01:08.697979 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldnqz\" (UniqueName: \"kubernetes.io/projected/591ab073-2dac-428c-9c24-6cdf01e5d52d-kube-api-access-ldnqz\") pod \"community-operators-2m4vx\" (UID: \"591ab073-2dac-428c-9c24-6cdf01e5d52d\") " pod="openshift-marketplace/community-operators-2m4vx"
Feb 26 23:01:08 crc kubenswrapper[4910]: I0226 23:01:08.800561 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/591ab073-2dac-428c-9c24-6cdf01e5d52d-utilities\") pod \"community-operators-2m4vx\" (UID: \"591ab073-2dac-428c-9c24-6cdf01e5d52d\") " pod="openshift-marketplace/community-operators-2m4vx"
Feb 26 23:01:08 crc kubenswrapper[4910]: I0226 23:01:08.800638 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldnqz\" (UniqueName: \"kubernetes.io/projected/591ab073-2dac-428c-9c24-6cdf01e5d52d-kube-api-access-ldnqz\") pod \"community-operators-2m4vx\" (UID: \"591ab073-2dac-428c-9c24-6cdf01e5d52d\") " pod="openshift-marketplace/community-operators-2m4vx"
Feb 26 23:01:08 crc kubenswrapper[4910]: I0226 23:01:08.800748 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/591ab073-2dac-428c-9c24-6cdf01e5d52d-catalog-content\") pod \"community-operators-2m4vx\" (UID: \"591ab073-2dac-428c-9c24-6cdf01e5d52d\") " pod="openshift-marketplace/community-operators-2m4vx"
Feb 26 23:01:08 crc kubenswrapper[4910]: I0226 23:01:08.801089 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/591ab073-2dac-428c-9c24-6cdf01e5d52d-utilities\") pod \"community-operators-2m4vx\" (UID: \"591ab073-2dac-428c-9c24-6cdf01e5d52d\") " pod="openshift-marketplace/community-operators-2m4vx"
Feb 26 23:01:08 crc kubenswrapper[4910]: I0226 23:01:08.801516 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/591ab073-2dac-428c-9c24-6cdf01e5d52d-catalog-content\") pod \"community-operators-2m4vx\" (UID: \"591ab073-2dac-428c-9c24-6cdf01e5d52d\") " pod="openshift-marketplace/community-operators-2m4vx"
Feb 26 23:01:08 crc kubenswrapper[4910]: I0226 23:01:08.817630 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldnqz\" (UniqueName: \"kubernetes.io/projected/591ab073-2dac-428c-9c24-6cdf01e5d52d-kube-api-access-ldnqz\") pod \"community-operators-2m4vx\" (UID: \"591ab073-2dac-428c-9c24-6cdf01e5d52d\") " pod="openshift-marketplace/community-operators-2m4vx"
Feb 26 23:01:08 crc kubenswrapper[4910]: I0226 23:01:08.932545 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2m4vx"
Feb 26 23:01:09 crc kubenswrapper[4910]: I0226 23:01:09.400263 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2m4vx"]
Feb 26 23:01:09 crc kubenswrapper[4910]: I0226 23:01:09.556560 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2m4vx" event={"ID":"591ab073-2dac-428c-9c24-6cdf01e5d52d","Type":"ContainerStarted","Data":"01b269c8390644b2f8dcd19c72e221c6ccac022b62e26c168515d9121a5995d7"}
Feb 26 23:01:10 crc kubenswrapper[4910]: I0226 23:01:10.569630 4910 generic.go:334] "Generic (PLEG): container finished" podID="591ab073-2dac-428c-9c24-6cdf01e5d52d" containerID="632a98efec4683be41a4038d1aeb40ee2d7da42dc3acc69857f4a7099f19aebc" exitCode=0
Feb 26 23:01:10 crc kubenswrapper[4910]: I0226 23:01:10.569686 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2m4vx" event={"ID":"591ab073-2dac-428c-9c24-6cdf01e5d52d","Type":"ContainerDied","Data":"632a98efec4683be41a4038d1aeb40ee2d7da42dc3acc69857f4a7099f19aebc"}
Feb 26 23:01:10 crc kubenswrapper[4910]: I0226 23:01:10.573493 4910 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 26 23:01:12 crc kubenswrapper[4910]: I0226 23:01:12.589498 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2m4vx" event={"ID":"591ab073-2dac-428c-9c24-6cdf01e5d52d","Type":"ContainerStarted","Data":"931d26f96358e651a4e34e10b9efcff6eaf462b3369a8544c4b82a9db229f7da"}
Feb 26 23:01:12 crc kubenswrapper[4910]: I0226 23:01:12.776805 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6f8f9c794b-f7dsr_31762fdd-32c2-4dd0-b121-814205d69874/kube-rbac-proxy/0.log"
Feb 26 23:01:12 crc kubenswrapper[4910]: I0226 23:01:12.818566 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6f8f9c794b-f7dsr_31762fdd-32c2-4dd0-b121-814205d69874/manager/0.log"
Feb 26 23:01:13 crc kubenswrapper[4910]: I0226 23:01:13.603209 4910 generic.go:334] "Generic (PLEG): container finished" podID="591ab073-2dac-428c-9c24-6cdf01e5d52d" containerID="931d26f96358e651a4e34e10b9efcff6eaf462b3369a8544c4b82a9db229f7da" exitCode=0
Feb 26 23:01:13 crc kubenswrapper[4910]: I0226 23:01:13.603379 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2m4vx" event={"ID":"591ab073-2dac-428c-9c24-6cdf01e5d52d","Type":"ContainerDied","Data":"931d26f96358e651a4e34e10b9efcff6eaf462b3369a8544c4b82a9db229f7da"}
Feb 26 23:01:14 crc kubenswrapper[4910]: I0226 23:01:14.618405 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2m4vx" event={"ID":"591ab073-2dac-428c-9c24-6cdf01e5d52d","Type":"ContainerStarted","Data":"d162641e31702fe6a91fe81797debf6ae9c976db7e352a9f11a24d02da36d2f6"}
Feb 26 23:01:14 crc kubenswrapper[4910]: I0226 23:01:14.637969 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2m4vx" podStartSLOduration=3.207005891 podStartE2EDuration="6.637947368s" podCreationTimestamp="2026-02-26 23:01:08 +0000 UTC" firstStartedPulling="2026-02-26 23:01:10.573295069 +0000 UTC m=+3955.652785610" lastFinishedPulling="2026-02-26 23:01:14.004236526 +0000 UTC m=+3959.083727087" observedRunningTime="2026-02-26 23:01:14.63615721 +0000 UTC m=+3959.715647791" watchObservedRunningTime="2026-02-26 23:01:14.637947368 +0000 UTC m=+3959.717437929"
Feb 26 23:01:18 crc kubenswrapper[4910]: I0226 
23:01:18.901842 4910 scope.go:117] "RemoveContainer" containerID="74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f" Feb 26 23:01:18 crc kubenswrapper[4910]: E0226 23:01:18.902654 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 23:01:18 crc kubenswrapper[4910]: I0226 23:01:18.933039 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2m4vx" Feb 26 23:01:18 crc kubenswrapper[4910]: I0226 23:01:18.933242 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2m4vx" Feb 26 23:01:19 crc kubenswrapper[4910]: I0226 23:01:19.000597 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2m4vx" Feb 26 23:01:19 crc kubenswrapper[4910]: I0226 23:01:19.718618 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2m4vx" Feb 26 23:01:19 crc kubenswrapper[4910]: I0226 23:01:19.779408 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2m4vx"] Feb 26 23:01:21 crc kubenswrapper[4910]: I0226 23:01:21.688653 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2m4vx" podUID="591ab073-2dac-428c-9c24-6cdf01e5d52d" containerName="registry-server" containerID="cri-o://d162641e31702fe6a91fe81797debf6ae9c976db7e352a9f11a24d02da36d2f6" gracePeriod=2 Feb 26 23:01:22 crc kubenswrapper[4910]: I0226 23:01:22.351627 4910 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2m4vx" Feb 26 23:01:22 crc kubenswrapper[4910]: I0226 23:01:22.408076 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/591ab073-2dac-428c-9c24-6cdf01e5d52d-catalog-content\") pod \"591ab073-2dac-428c-9c24-6cdf01e5d52d\" (UID: \"591ab073-2dac-428c-9c24-6cdf01e5d52d\") " Feb 26 23:01:22 crc kubenswrapper[4910]: I0226 23:01:22.408347 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldnqz\" (UniqueName: \"kubernetes.io/projected/591ab073-2dac-428c-9c24-6cdf01e5d52d-kube-api-access-ldnqz\") pod \"591ab073-2dac-428c-9c24-6cdf01e5d52d\" (UID: \"591ab073-2dac-428c-9c24-6cdf01e5d52d\") " Feb 26 23:01:22 crc kubenswrapper[4910]: I0226 23:01:22.408382 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/591ab073-2dac-428c-9c24-6cdf01e5d52d-utilities\") pod \"591ab073-2dac-428c-9c24-6cdf01e5d52d\" (UID: \"591ab073-2dac-428c-9c24-6cdf01e5d52d\") " Feb 26 23:01:22 crc kubenswrapper[4910]: I0226 23:01:22.417055 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/591ab073-2dac-428c-9c24-6cdf01e5d52d-utilities" (OuterVolumeSpecName: "utilities") pod "591ab073-2dac-428c-9c24-6cdf01e5d52d" (UID: "591ab073-2dac-428c-9c24-6cdf01e5d52d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 23:01:22 crc kubenswrapper[4910]: I0226 23:01:22.417790 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/591ab073-2dac-428c-9c24-6cdf01e5d52d-kube-api-access-ldnqz" (OuterVolumeSpecName: "kube-api-access-ldnqz") pod "591ab073-2dac-428c-9c24-6cdf01e5d52d" (UID: "591ab073-2dac-428c-9c24-6cdf01e5d52d"). 
InnerVolumeSpecName "kube-api-access-ldnqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 23:01:22 crc kubenswrapper[4910]: I0226 23:01:22.466884 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/591ab073-2dac-428c-9c24-6cdf01e5d52d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "591ab073-2dac-428c-9c24-6cdf01e5d52d" (UID: "591ab073-2dac-428c-9c24-6cdf01e5d52d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 23:01:22 crc kubenswrapper[4910]: I0226 23:01:22.511004 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/591ab073-2dac-428c-9c24-6cdf01e5d52d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 23:01:22 crc kubenswrapper[4910]: I0226 23:01:22.511047 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldnqz\" (UniqueName: \"kubernetes.io/projected/591ab073-2dac-428c-9c24-6cdf01e5d52d-kube-api-access-ldnqz\") on node \"crc\" DevicePath \"\"" Feb 26 23:01:22 crc kubenswrapper[4910]: I0226 23:01:22.511060 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/591ab073-2dac-428c-9c24-6cdf01e5d52d-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 23:01:22 crc kubenswrapper[4910]: I0226 23:01:22.705381 4910 generic.go:334] "Generic (PLEG): container finished" podID="591ab073-2dac-428c-9c24-6cdf01e5d52d" containerID="d162641e31702fe6a91fe81797debf6ae9c976db7e352a9f11a24d02da36d2f6" exitCode=0 Feb 26 23:01:22 crc kubenswrapper[4910]: I0226 23:01:22.705480 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2m4vx" Feb 26 23:01:22 crc kubenswrapper[4910]: I0226 23:01:22.705485 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2m4vx" event={"ID":"591ab073-2dac-428c-9c24-6cdf01e5d52d","Type":"ContainerDied","Data":"d162641e31702fe6a91fe81797debf6ae9c976db7e352a9f11a24d02da36d2f6"} Feb 26 23:01:22 crc kubenswrapper[4910]: I0226 23:01:22.705934 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2m4vx" event={"ID":"591ab073-2dac-428c-9c24-6cdf01e5d52d","Type":"ContainerDied","Data":"01b269c8390644b2f8dcd19c72e221c6ccac022b62e26c168515d9121a5995d7"} Feb 26 23:01:22 crc kubenswrapper[4910]: I0226 23:01:22.705970 4910 scope.go:117] "RemoveContainer" containerID="d162641e31702fe6a91fe81797debf6ae9c976db7e352a9f11a24d02da36d2f6" Feb 26 23:01:22 crc kubenswrapper[4910]: I0226 23:01:22.737881 4910 scope.go:117] "RemoveContainer" containerID="931d26f96358e651a4e34e10b9efcff6eaf462b3369a8544c4b82a9db229f7da" Feb 26 23:01:22 crc kubenswrapper[4910]: I0226 23:01:22.764656 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2m4vx"] Feb 26 23:01:22 crc kubenswrapper[4910]: I0226 23:01:22.777280 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2m4vx"] Feb 26 23:01:22 crc kubenswrapper[4910]: I0226 23:01:22.777492 4910 scope.go:117] "RemoveContainer" containerID="632a98efec4683be41a4038d1aeb40ee2d7da42dc3acc69857f4a7099f19aebc" Feb 26 23:01:22 crc kubenswrapper[4910]: I0226 23:01:22.829872 4910 scope.go:117] "RemoveContainer" containerID="d162641e31702fe6a91fe81797debf6ae9c976db7e352a9f11a24d02da36d2f6" Feb 26 23:01:22 crc kubenswrapper[4910]: E0226 23:01:22.830352 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d162641e31702fe6a91fe81797debf6ae9c976db7e352a9f11a24d02da36d2f6\": container with ID starting with d162641e31702fe6a91fe81797debf6ae9c976db7e352a9f11a24d02da36d2f6 not found: ID does not exist" containerID="d162641e31702fe6a91fe81797debf6ae9c976db7e352a9f11a24d02da36d2f6" Feb 26 23:01:22 crc kubenswrapper[4910]: I0226 23:01:22.830392 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d162641e31702fe6a91fe81797debf6ae9c976db7e352a9f11a24d02da36d2f6"} err="failed to get container status \"d162641e31702fe6a91fe81797debf6ae9c976db7e352a9f11a24d02da36d2f6\": rpc error: code = NotFound desc = could not find container \"d162641e31702fe6a91fe81797debf6ae9c976db7e352a9f11a24d02da36d2f6\": container with ID starting with d162641e31702fe6a91fe81797debf6ae9c976db7e352a9f11a24d02da36d2f6 not found: ID does not exist" Feb 26 23:01:22 crc kubenswrapper[4910]: I0226 23:01:22.830418 4910 scope.go:117] "RemoveContainer" containerID="931d26f96358e651a4e34e10b9efcff6eaf462b3369a8544c4b82a9db229f7da" Feb 26 23:01:22 crc kubenswrapper[4910]: E0226 23:01:22.830881 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"931d26f96358e651a4e34e10b9efcff6eaf462b3369a8544c4b82a9db229f7da\": container with ID starting with 931d26f96358e651a4e34e10b9efcff6eaf462b3369a8544c4b82a9db229f7da not found: ID does not exist" containerID="931d26f96358e651a4e34e10b9efcff6eaf462b3369a8544c4b82a9db229f7da" Feb 26 23:01:22 crc kubenswrapper[4910]: I0226 23:01:22.830936 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"931d26f96358e651a4e34e10b9efcff6eaf462b3369a8544c4b82a9db229f7da"} err="failed to get container status \"931d26f96358e651a4e34e10b9efcff6eaf462b3369a8544c4b82a9db229f7da\": rpc error: code = NotFound desc = could not find container \"931d26f96358e651a4e34e10b9efcff6eaf462b3369a8544c4b82a9db229f7da\": container with ID 
starting with 931d26f96358e651a4e34e10b9efcff6eaf462b3369a8544c4b82a9db229f7da not found: ID does not exist" Feb 26 23:01:22 crc kubenswrapper[4910]: I0226 23:01:22.830984 4910 scope.go:117] "RemoveContainer" containerID="632a98efec4683be41a4038d1aeb40ee2d7da42dc3acc69857f4a7099f19aebc" Feb 26 23:01:22 crc kubenswrapper[4910]: E0226 23:01:22.831334 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"632a98efec4683be41a4038d1aeb40ee2d7da42dc3acc69857f4a7099f19aebc\": container with ID starting with 632a98efec4683be41a4038d1aeb40ee2d7da42dc3acc69857f4a7099f19aebc not found: ID does not exist" containerID="632a98efec4683be41a4038d1aeb40ee2d7da42dc3acc69857f4a7099f19aebc" Feb 26 23:01:22 crc kubenswrapper[4910]: I0226 23:01:22.831365 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"632a98efec4683be41a4038d1aeb40ee2d7da42dc3acc69857f4a7099f19aebc"} err="failed to get container status \"632a98efec4683be41a4038d1aeb40ee2d7da42dc3acc69857f4a7099f19aebc\": rpc error: code = NotFound desc = could not find container \"632a98efec4683be41a4038d1aeb40ee2d7da42dc3acc69857f4a7099f19aebc\": container with ID starting with 632a98efec4683be41a4038d1aeb40ee2d7da42dc3acc69857f4a7099f19aebc not found: ID does not exist" Feb 26 23:01:23 crc kubenswrapper[4910]: I0226 23:01:23.932097 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="591ab073-2dac-428c-9c24-6cdf01e5d52d" path="/var/lib/kubelet/pods/591ab073-2dac-428c-9c24-6cdf01e5d52d/volumes" Feb 26 23:01:27 crc kubenswrapper[4910]: I0226 23:01:27.206737 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-qxq9v_0bbb4449-bb9e-4d59-9b01-10b3180055c0/prometheus-operator/0.log" Feb 26 23:01:27 crc kubenswrapper[4910]: I0226 23:01:27.411576 4910 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf_e29777de-7955-4e02-88fd-51c42f732421/prometheus-operator-admission-webhook/0.log" Feb 26 23:01:27 crc kubenswrapper[4910]: I0226 23:01:27.450007 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx_451b4f8d-8570-479f-bb0e-ddbb695bf345/prometheus-operator-admission-webhook/0.log" Feb 26 23:01:27 crc kubenswrapper[4910]: I0226 23:01:27.604058 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-z5vhp_99ab363a-bae9-4e7d-9b11-668cbde4a8d3/operator/0.log" Feb 26 23:01:27 crc kubenswrapper[4910]: I0226 23:01:27.630727 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-gv5qh_c4e1f736-965a-4540-b006-4138cb8f08ad/perses-operator/0.log" Feb 26 23:01:33 crc kubenswrapper[4910]: I0226 23:01:33.902839 4910 scope.go:117] "RemoveContainer" containerID="74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f" Feb 26 23:01:33 crc kubenswrapper[4910]: E0226 23:01:33.903827 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 23:01:44 crc kubenswrapper[4910]: I0226 23:01:44.076962 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-bvlbc_8b4cb88b-545d-40a3-98e2-1ea2a46a7dc1/controller/0.log" Feb 26 23:01:44 crc kubenswrapper[4910]: I0226 23:01:44.236672 4910 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-86ddb6bd46-bvlbc_8b4cb88b-545d-40a3-98e2-1ea2a46a7dc1/kube-rbac-proxy/0.log" Feb 26 23:01:44 crc kubenswrapper[4910]: I0226 23:01:44.481507 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l59qt_a63789b4-9f3a-4ee0-ab34-8f79337060e2/cp-frr-files/0.log" Feb 26 23:01:44 crc kubenswrapper[4910]: I0226 23:01:44.560124 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l59qt_a63789b4-9f3a-4ee0-ab34-8f79337060e2/cp-frr-files/0.log" Feb 26 23:01:44 crc kubenswrapper[4910]: I0226 23:01:44.611019 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l59qt_a63789b4-9f3a-4ee0-ab34-8f79337060e2/cp-reloader/0.log" Feb 26 23:01:44 crc kubenswrapper[4910]: I0226 23:01:44.655033 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l59qt_a63789b4-9f3a-4ee0-ab34-8f79337060e2/cp-reloader/0.log" Feb 26 23:01:44 crc kubenswrapper[4910]: I0226 23:01:44.656950 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l59qt_a63789b4-9f3a-4ee0-ab34-8f79337060e2/cp-metrics/0.log" Feb 26 23:01:44 crc kubenswrapper[4910]: I0226 23:01:44.844784 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l59qt_a63789b4-9f3a-4ee0-ab34-8f79337060e2/cp-metrics/0.log" Feb 26 23:01:44 crc kubenswrapper[4910]: I0226 23:01:44.878057 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l59qt_a63789b4-9f3a-4ee0-ab34-8f79337060e2/cp-reloader/0.log" Feb 26 23:01:44 crc kubenswrapper[4910]: I0226 23:01:44.882799 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l59qt_a63789b4-9f3a-4ee0-ab34-8f79337060e2/cp-frr-files/0.log" Feb 26 23:01:44 crc kubenswrapper[4910]: I0226 23:01:44.900864 4910 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-l59qt_a63789b4-9f3a-4ee0-ab34-8f79337060e2/cp-metrics/0.log" Feb 26 23:01:45 crc kubenswrapper[4910]: I0226 23:01:45.069673 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l59qt_a63789b4-9f3a-4ee0-ab34-8f79337060e2/cp-reloader/0.log" Feb 26 23:01:45 crc kubenswrapper[4910]: I0226 23:01:45.070784 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l59qt_a63789b4-9f3a-4ee0-ab34-8f79337060e2/cp-metrics/0.log" Feb 26 23:01:45 crc kubenswrapper[4910]: I0226 23:01:45.097467 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l59qt_a63789b4-9f3a-4ee0-ab34-8f79337060e2/cp-frr-files/0.log" Feb 26 23:01:45 crc kubenswrapper[4910]: I0226 23:01:45.119569 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l59qt_a63789b4-9f3a-4ee0-ab34-8f79337060e2/controller/0.log" Feb 26 23:01:45 crc kubenswrapper[4910]: I0226 23:01:45.313580 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l59qt_a63789b4-9f3a-4ee0-ab34-8f79337060e2/kube-rbac-proxy-frr/0.log" Feb 26 23:01:45 crc kubenswrapper[4910]: I0226 23:01:45.319888 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l59qt_a63789b4-9f3a-4ee0-ab34-8f79337060e2/frr-metrics/0.log" Feb 26 23:01:45 crc kubenswrapper[4910]: I0226 23:01:45.324449 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l59qt_a63789b4-9f3a-4ee0-ab34-8f79337060e2/kube-rbac-proxy/0.log" Feb 26 23:01:45 crc kubenswrapper[4910]: I0226 23:01:45.544563 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l59qt_a63789b4-9f3a-4ee0-ab34-8f79337060e2/reloader/0.log" Feb 26 23:01:45 crc kubenswrapper[4910]: I0226 23:01:45.618584 4910 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-f57cb_75f0b1e1-c3be-4785-8c4f-fc063d622444/frr-k8s-webhook-server/0.log" Feb 26 23:01:45 crc kubenswrapper[4910]: I0226 23:01:45.876851 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-67dbfc649f-tk2gp_825755dc-b49b-4b20-b77e-02b0262bf8a6/manager/0.log" Feb 26 23:01:45 crc kubenswrapper[4910]: I0226 23:01:45.971124 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-574d8f9b84-4k7bn_e6fdcf51-21de-4c93-9730-a2eadb1dee56/webhook-server/0.log" Feb 26 23:01:46 crc kubenswrapper[4910]: I0226 23:01:46.107580 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-24r95_957ecb93-f3f6-4860-bea4-5977bf0ff619/kube-rbac-proxy/0.log" Feb 26 23:01:46 crc kubenswrapper[4910]: I0226 23:01:46.819322 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l59qt_a63789b4-9f3a-4ee0-ab34-8f79337060e2/frr/0.log" Feb 26 23:01:46 crc kubenswrapper[4910]: I0226 23:01:46.851384 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-24r95_957ecb93-f3f6-4860-bea4-5977bf0ff619/speaker/0.log" Feb 26 23:01:47 crc kubenswrapper[4910]: I0226 23:01:47.901858 4910 scope.go:117] "RemoveContainer" containerID="74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f" Feb 26 23:01:47 crc kubenswrapper[4910]: E0226 23:01:47.902148 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 23:02:00 crc kubenswrapper[4910]: I0226 
23:02:00.138594 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535782-f82dq"] Feb 26 23:02:00 crc kubenswrapper[4910]: E0226 23:02:00.139616 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="591ab073-2dac-428c-9c24-6cdf01e5d52d" containerName="registry-server" Feb 26 23:02:00 crc kubenswrapper[4910]: I0226 23:02:00.139632 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="591ab073-2dac-428c-9c24-6cdf01e5d52d" containerName="registry-server" Feb 26 23:02:00 crc kubenswrapper[4910]: E0226 23:02:00.139645 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="591ab073-2dac-428c-9c24-6cdf01e5d52d" containerName="extract-content" Feb 26 23:02:00 crc kubenswrapper[4910]: I0226 23:02:00.139653 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="591ab073-2dac-428c-9c24-6cdf01e5d52d" containerName="extract-content" Feb 26 23:02:00 crc kubenswrapper[4910]: E0226 23:02:00.139669 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="591ab073-2dac-428c-9c24-6cdf01e5d52d" containerName="extract-utilities" Feb 26 23:02:00 crc kubenswrapper[4910]: I0226 23:02:00.139677 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="591ab073-2dac-428c-9c24-6cdf01e5d52d" containerName="extract-utilities" Feb 26 23:02:00 crc kubenswrapper[4910]: I0226 23:02:00.139940 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="591ab073-2dac-428c-9c24-6cdf01e5d52d" containerName="registry-server" Feb 26 23:02:00 crc kubenswrapper[4910]: I0226 23:02:00.140826 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535782-f82dq" Feb 26 23:02:00 crc kubenswrapper[4910]: I0226 23:02:00.142816 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 23:02:00 crc kubenswrapper[4910]: I0226 23:02:00.142976 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 23:02:00 crc kubenswrapper[4910]: I0226 23:02:00.143358 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 23:02:00 crc kubenswrapper[4910]: I0226 23:02:00.146947 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535782-f82dq"] Feb 26 23:02:00 crc kubenswrapper[4910]: I0226 23:02:00.235099 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvj7z\" (UniqueName: \"kubernetes.io/projected/d36706f6-732f-4956-8366-2e047b37f79b-kube-api-access-hvj7z\") pod \"auto-csr-approver-29535782-f82dq\" (UID: \"d36706f6-732f-4956-8366-2e047b37f79b\") " pod="openshift-infra/auto-csr-approver-29535782-f82dq" Feb 26 23:02:00 crc kubenswrapper[4910]: I0226 23:02:00.336982 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvj7z\" (UniqueName: \"kubernetes.io/projected/d36706f6-732f-4956-8366-2e047b37f79b-kube-api-access-hvj7z\") pod \"auto-csr-approver-29535782-f82dq\" (UID: \"d36706f6-732f-4956-8366-2e047b37f79b\") " pod="openshift-infra/auto-csr-approver-29535782-f82dq" Feb 26 23:02:00 crc kubenswrapper[4910]: I0226 23:02:00.358883 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvj7z\" (UniqueName: \"kubernetes.io/projected/d36706f6-732f-4956-8366-2e047b37f79b-kube-api-access-hvj7z\") pod \"auto-csr-approver-29535782-f82dq\" (UID: \"d36706f6-732f-4956-8366-2e047b37f79b\") " 
pod="openshift-infra/auto-csr-approver-29535782-f82dq" Feb 26 23:02:00 crc kubenswrapper[4910]: I0226 23:02:00.458879 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535782-f82dq" Feb 26 23:02:01 crc kubenswrapper[4910]: I0226 23:02:01.003455 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535782-f82dq"] Feb 26 23:02:01 crc kubenswrapper[4910]: I0226 23:02:01.073317 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535782-f82dq" event={"ID":"d36706f6-732f-4956-8366-2e047b37f79b","Type":"ContainerStarted","Data":"6ff59928fb15256a1e49a0836bd740b86483b3b6109aae91d9b066f357cc91ff"} Feb 26 23:02:02 crc kubenswrapper[4910]: I0226 23:02:02.901724 4910 scope.go:117] "RemoveContainer" containerID="74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f" Feb 26 23:02:02 crc kubenswrapper[4910]: E0226 23:02:02.902565 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 23:02:03 crc kubenswrapper[4910]: I0226 23:02:03.094357 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535782-f82dq" event={"ID":"d36706f6-732f-4956-8366-2e047b37f79b","Type":"ContainerStarted","Data":"4d7025693ee43626144bf44ec4e5d6ce656e94fac4fe9ec67c0accc0dfd15304"} Feb 26 23:02:03 crc kubenswrapper[4910]: I0226 23:02:03.558373 4910 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d_bebc2749-d3eb-4894-8c1b-14271b6c1f9c/util/0.log" Feb 26 23:02:03 crc kubenswrapper[4910]: I0226 23:02:03.769949 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d_bebc2749-d3eb-4894-8c1b-14271b6c1f9c/util/0.log" Feb 26 23:02:03 crc kubenswrapper[4910]: I0226 23:02:03.914890 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d_bebc2749-d3eb-4894-8c1b-14271b6c1f9c/pull/0.log" Feb 26 23:02:03 crc kubenswrapper[4910]: I0226 23:02:03.915376 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d_bebc2749-d3eb-4894-8c1b-14271b6c1f9c/pull/0.log" Feb 26 23:02:04 crc kubenswrapper[4910]: I0226 23:02:04.104133 4910 generic.go:334] "Generic (PLEG): container finished" podID="d36706f6-732f-4956-8366-2e047b37f79b" containerID="4d7025693ee43626144bf44ec4e5d6ce656e94fac4fe9ec67c0accc0dfd15304" exitCode=0 Feb 26 23:02:04 crc kubenswrapper[4910]: I0226 23:02:04.104192 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535782-f82dq" event={"ID":"d36706f6-732f-4956-8366-2e047b37f79b","Type":"ContainerDied","Data":"4d7025693ee43626144bf44ec4e5d6ce656e94fac4fe9ec67c0accc0dfd15304"} Feb 26 23:02:04 crc kubenswrapper[4910]: I0226 23:02:04.123437 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d_bebc2749-d3eb-4894-8c1b-14271b6c1f9c/extract/0.log" Feb 26 23:02:04 crc kubenswrapper[4910]: I0226 23:02:04.137719 4910 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d_bebc2749-d3eb-4894-8c1b-14271b6c1f9c/util/0.log" Feb 26 23:02:04 crc kubenswrapper[4910]: I0226 23:02:04.210251 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qhv4d_bebc2749-d3eb-4894-8c1b-14271b6c1f9c/pull/0.log" Feb 26 23:02:04 crc kubenswrapper[4910]: I0226 23:02:04.549685 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds_5c1b6f45-ddca-4044-8f64-46f03abaa37a/util/0.log" Feb 26 23:02:04 crc kubenswrapper[4910]: I0226 23:02:04.752261 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds_5c1b6f45-ddca-4044-8f64-46f03abaa37a/util/0.log" Feb 26 23:02:04 crc kubenswrapper[4910]: I0226 23:02:04.755716 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds_5c1b6f45-ddca-4044-8f64-46f03abaa37a/pull/0.log" Feb 26 23:02:04 crc kubenswrapper[4910]: I0226 23:02:04.803755 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds_5c1b6f45-ddca-4044-8f64-46f03abaa37a/pull/0.log" Feb 26 23:02:05 crc kubenswrapper[4910]: I0226 23:02:05.030812 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds_5c1b6f45-ddca-4044-8f64-46f03abaa37a/extract/0.log" Feb 26 23:02:05 crc kubenswrapper[4910]: I0226 23:02:05.111808 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds_5c1b6f45-ddca-4044-8f64-46f03abaa37a/pull/0.log" Feb 26 
23:02:05 crc kubenswrapper[4910]: I0226 23:02:05.121595 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jrdds_5c1b6f45-ddca-4044-8f64-46f03abaa37a/util/0.log" Feb 26 23:02:05 crc kubenswrapper[4910]: I0226 23:02:05.210108 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq_3e61dd1c-2bee-4f26-bb96-aa07cce78d28/util/0.log" Feb 26 23:02:05 crc kubenswrapper[4910]: I0226 23:02:05.485177 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq_3e61dd1c-2bee-4f26-bb96-aa07cce78d28/util/0.log" Feb 26 23:02:05 crc kubenswrapper[4910]: I0226 23:02:05.487468 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq_3e61dd1c-2bee-4f26-bb96-aa07cce78d28/pull/0.log" Feb 26 23:02:05 crc kubenswrapper[4910]: I0226 23:02:05.513292 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq_3e61dd1c-2bee-4f26-bb96-aa07cce78d28/pull/0.log" Feb 26 23:02:05 crc kubenswrapper[4910]: I0226 23:02:05.617559 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq_3e61dd1c-2bee-4f26-bb96-aa07cce78d28/util/0.log" Feb 26 23:02:05 crc kubenswrapper[4910]: I0226 23:02:05.666401 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535782-f82dq" Feb 26 23:02:05 crc kubenswrapper[4910]: I0226 23:02:05.699611 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq_3e61dd1c-2bee-4f26-bb96-aa07cce78d28/pull/0.log" Feb 26 23:02:05 crc kubenswrapper[4910]: I0226 23:02:05.752871 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvj7z\" (UniqueName: \"kubernetes.io/projected/d36706f6-732f-4956-8366-2e047b37f79b-kube-api-access-hvj7z\") pod \"d36706f6-732f-4956-8366-2e047b37f79b\" (UID: \"d36706f6-732f-4956-8366-2e047b37f79b\") " Feb 26 23:02:05 crc kubenswrapper[4910]: I0226 23:02:05.779494 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d36706f6-732f-4956-8366-2e047b37f79b-kube-api-access-hvj7z" (OuterVolumeSpecName: "kube-api-access-hvj7z") pod "d36706f6-732f-4956-8366-2e047b37f79b" (UID: "d36706f6-732f-4956-8366-2e047b37f79b"). InnerVolumeSpecName "kube-api-access-hvj7z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 23:02:05 crc kubenswrapper[4910]: I0226 23:02:05.855488 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvj7z\" (UniqueName: \"kubernetes.io/projected/d36706f6-732f-4956-8366-2e047b37f79b-kube-api-access-hvj7z\") on node \"crc\" DevicePath \"\"" Feb 26 23:02:05 crc kubenswrapper[4910]: I0226 23:02:05.901086 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xmrzq_3e61dd1c-2bee-4f26-bb96-aa07cce78d28/extract/0.log" Feb 26 23:02:05 crc kubenswrapper[4910]: I0226 23:02:05.955524 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7qrt8_26a835fe-b180-43c8-a85c-a4e15f2573e7/extract-utilities/0.log" Feb 26 23:02:06 crc kubenswrapper[4910]: I0226 23:02:06.121369 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535782-f82dq" event={"ID":"d36706f6-732f-4956-8366-2e047b37f79b","Type":"ContainerDied","Data":"6ff59928fb15256a1e49a0836bd740b86483b3b6109aae91d9b066f357cc91ff"} Feb 26 23:02:06 crc kubenswrapper[4910]: I0226 23:02:06.121411 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ff59928fb15256a1e49a0836bd740b86483b3b6109aae91d9b066f357cc91ff" Feb 26 23:02:06 crc kubenswrapper[4910]: I0226 23:02:06.121416 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535782-f82dq" Feb 26 23:02:06 crc kubenswrapper[4910]: I0226 23:02:06.164782 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7qrt8_26a835fe-b180-43c8-a85c-a4e15f2573e7/extract-utilities/0.log" Feb 26 23:02:06 crc kubenswrapper[4910]: I0226 23:02:06.199703 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7qrt8_26a835fe-b180-43c8-a85c-a4e15f2573e7/extract-content/0.log" Feb 26 23:02:06 crc kubenswrapper[4910]: I0226 23:02:06.206928 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7qrt8_26a835fe-b180-43c8-a85c-a4e15f2573e7/extract-content/0.log" Feb 26 23:02:06 crc kubenswrapper[4910]: I0226 23:02:06.367218 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7qrt8_26a835fe-b180-43c8-a85c-a4e15f2573e7/extract-content/0.log" Feb 26 23:02:06 crc kubenswrapper[4910]: I0226 23:02:06.378492 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7qrt8_26a835fe-b180-43c8-a85c-a4e15f2573e7/extract-utilities/0.log" Feb 26 23:02:06 crc kubenswrapper[4910]: I0226 23:02:06.564027 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jsmnt_1a507c42-00d6-442d-a433-4e9f89e6dedc/extract-utilities/0.log" Feb 26 23:02:06 crc kubenswrapper[4910]: I0226 23:02:06.737532 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535776-7872p"] Feb 26 23:02:06 crc kubenswrapper[4910]: I0226 23:02:06.750964 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535776-7872p"] Feb 26 23:02:06 crc kubenswrapper[4910]: I0226 23:02:06.789737 4910 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-jsmnt_1a507c42-00d6-442d-a433-4e9f89e6dedc/extract-utilities/0.log" Feb 26 23:02:06 crc kubenswrapper[4910]: I0226 23:02:06.822923 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jsmnt_1a507c42-00d6-442d-a433-4e9f89e6dedc/extract-content/0.log" Feb 26 23:02:06 crc kubenswrapper[4910]: I0226 23:02:06.849629 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jsmnt_1a507c42-00d6-442d-a433-4e9f89e6dedc/extract-content/0.log" Feb 26 23:02:06 crc kubenswrapper[4910]: I0226 23:02:06.905772 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7qrt8_26a835fe-b180-43c8-a85c-a4e15f2573e7/registry-server/0.log" Feb 26 23:02:07 crc kubenswrapper[4910]: I0226 23:02:07.092610 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jsmnt_1a507c42-00d6-442d-a433-4e9f89e6dedc/extract-content/0.log" Feb 26 23:02:07 crc kubenswrapper[4910]: I0226 23:02:07.092661 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jsmnt_1a507c42-00d6-442d-a433-4e9f89e6dedc/extract-utilities/0.log" Feb 26 23:02:07 crc kubenswrapper[4910]: I0226 23:02:07.383868 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q_640ad168-330e-401c-8da5-650fbd1a8151/util/0.log" Feb 26 23:02:07 crc kubenswrapper[4910]: I0226 23:02:07.512091 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q_640ad168-330e-401c-8da5-650fbd1a8151/util/0.log" Feb 26 23:02:07 crc kubenswrapper[4910]: I0226 23:02:07.667432 4910 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q_640ad168-330e-401c-8da5-650fbd1a8151/pull/0.log" Feb 26 23:02:07 crc kubenswrapper[4910]: I0226 23:02:07.667512 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q_640ad168-330e-401c-8da5-650fbd1a8151/pull/0.log" Feb 26 23:02:07 crc kubenswrapper[4910]: I0226 23:02:07.858470 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q_640ad168-330e-401c-8da5-650fbd1a8151/extract/0.log" Feb 26 23:02:07 crc kubenswrapper[4910]: I0226 23:02:07.912379 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aea1841-f34c-4d2a-8bbf-b1439a9ac745" path="/var/lib/kubelet/pods/6aea1841-f34c-4d2a-8bbf-b1439a9ac745/volumes" Feb 26 23:02:08 crc kubenswrapper[4910]: I0226 23:02:08.065306 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q_640ad168-330e-401c-8da5-650fbd1a8151/util/0.log" Feb 26 23:02:08 crc kubenswrapper[4910]: I0226 23:02:08.121954 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-djrbn_c3e33226-da7e-4023-b932-36308bb5ccad/marketplace-operator/0.log" Feb 26 23:02:08 crc kubenswrapper[4910]: I0226 23:02:08.195394 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jsmnt_1a507c42-00d6-442d-a433-4e9f89e6dedc/registry-server/0.log" Feb 26 23:02:08 crc kubenswrapper[4910]: I0226 23:02:08.224315 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4nc94q_640ad168-330e-401c-8da5-650fbd1a8151/pull/0.log" Feb 26 23:02:08 crc kubenswrapper[4910]: I0226 
23:02:08.542112 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pm5ll_9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82/extract-utilities/0.log" Feb 26 23:02:08 crc kubenswrapper[4910]: I0226 23:02:08.708131 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pm5ll_9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82/extract-utilities/0.log" Feb 26 23:02:08 crc kubenswrapper[4910]: I0226 23:02:08.742083 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pm5ll_9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82/extract-content/0.log" Feb 26 23:02:08 crc kubenswrapper[4910]: I0226 23:02:08.750901 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pm5ll_9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82/extract-content/0.log" Feb 26 23:02:08 crc kubenswrapper[4910]: I0226 23:02:08.894745 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pm5ll_9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82/extract-utilities/0.log" Feb 26 23:02:08 crc kubenswrapper[4910]: I0226 23:02:08.909596 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pm5ll_9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82/extract-content/0.log" Feb 26 23:02:09 crc kubenswrapper[4910]: I0226 23:02:09.020742 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mptdj_a823f571-9d20-4be8-b0e2-6c71d5437ddf/extract-utilities/0.log" Feb 26 23:02:09 crc kubenswrapper[4910]: I0226 23:02:09.064526 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pm5ll_9ee55d7c-7dac-4d67-9c5f-de60ebb6ad82/registry-server/0.log" Feb 26 23:02:09 crc kubenswrapper[4910]: I0226 23:02:09.139712 4910 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-mptdj_a823f571-9d20-4be8-b0e2-6c71d5437ddf/extract-utilities/0.log" Feb 26 23:02:09 crc kubenswrapper[4910]: I0226 23:02:09.160290 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mptdj_a823f571-9d20-4be8-b0e2-6c71d5437ddf/extract-content/0.log" Feb 26 23:02:09 crc kubenswrapper[4910]: I0226 23:02:09.222010 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mptdj_a823f571-9d20-4be8-b0e2-6c71d5437ddf/extract-content/0.log" Feb 26 23:02:09 crc kubenswrapper[4910]: I0226 23:02:09.392778 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mptdj_a823f571-9d20-4be8-b0e2-6c71d5437ddf/extract-content/0.log" Feb 26 23:02:09 crc kubenswrapper[4910]: I0226 23:02:09.423243 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mptdj_a823f571-9d20-4be8-b0e2-6c71d5437ddf/extract-utilities/0.log" Feb 26 23:02:09 crc kubenswrapper[4910]: I0226 23:02:09.972928 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mptdj_a823f571-9d20-4be8-b0e2-6c71d5437ddf/registry-server/0.log" Feb 26 23:02:14 crc kubenswrapper[4910]: I0226 23:02:14.901259 4910 scope.go:117] "RemoveContainer" containerID="74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f" Feb 26 23:02:14 crc kubenswrapper[4910]: E0226 23:02:14.902009 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 23:02:24 crc 
kubenswrapper[4910]: I0226 23:02:24.214989 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-67769cc9c-pmzxx_451b4f8d-8570-479f-bb0e-ddbb695bf345/prometheus-operator-admission-webhook/0.log" Feb 26 23:02:24 crc kubenswrapper[4910]: I0226 23:02:24.260172 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-67769cc9c-n7fjf_e29777de-7955-4e02-88fd-51c42f732421/prometheus-operator-admission-webhook/0.log" Feb 26 23:02:24 crc kubenswrapper[4910]: I0226 23:02:24.289496 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-qxq9v_0bbb4449-bb9e-4d59-9b01-10b3180055c0/prometheus-operator/0.log" Feb 26 23:02:24 crc kubenswrapper[4910]: I0226 23:02:24.444820 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-z5vhp_99ab363a-bae9-4e7d-9b11-668cbde4a8d3/operator/0.log" Feb 26 23:02:24 crc kubenswrapper[4910]: I0226 23:02:24.464147 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-gv5qh_c4e1f736-965a-4540-b006-4138cb8f08ad/perses-operator/0.log" Feb 26 23:02:26 crc kubenswrapper[4910]: I0226 23:02:26.902003 4910 scope.go:117] "RemoveContainer" containerID="74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f" Feb 26 23:02:26 crc kubenswrapper[4910]: E0226 23:02:26.903480 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 23:02:39 crc kubenswrapper[4910]: 
I0226 23:02:39.714182 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6f8f9c794b-f7dsr_31762fdd-32c2-4dd0-b121-814205d69874/manager/0.log" Feb 26 23:02:39 crc kubenswrapper[4910]: I0226 23:02:39.714349 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6f8f9c794b-f7dsr_31762fdd-32c2-4dd0-b121-814205d69874/kube-rbac-proxy/0.log" Feb 26 23:02:41 crc kubenswrapper[4910]: I0226 23:02:41.046383 4910 scope.go:117] "RemoveContainer" containerID="eeae3d7f4e60d8b5981e05006e8a7218251db5212f4b19ee1d4dfc3fc2df0465" Feb 26 23:02:41 crc kubenswrapper[4910]: I0226 23:02:41.902080 4910 scope.go:117] "RemoveContainer" containerID="74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f" Feb 26 23:02:41 crc kubenswrapper[4910]: E0226 23:02:41.902614 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 23:02:53 crc kubenswrapper[4910]: I0226 23:02:53.904067 4910 scope.go:117] "RemoveContainer" containerID="74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f" Feb 26 23:02:53 crc kubenswrapper[4910]: E0226 23:02:53.904799 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" 
podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 23:03:04 crc kubenswrapper[4910]: I0226 23:03:04.903057 4910 scope.go:117] "RemoveContainer" containerID="74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f" Feb 26 23:03:04 crc kubenswrapper[4910]: E0226 23:03:04.904301 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 23:03:19 crc kubenswrapper[4910]: I0226 23:03:19.902240 4910 scope.go:117] "RemoveContainer" containerID="74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f" Feb 26 23:03:19 crc kubenswrapper[4910]: E0226 23:03:19.903479 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 23:03:32 crc kubenswrapper[4910]: I0226 23:03:32.902074 4910 scope.go:117] "RemoveContainer" containerID="74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f" Feb 26 23:03:32 crc kubenswrapper[4910]: E0226 23:03:32.903507 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 23:03:46 crc kubenswrapper[4910]: I0226 23:03:46.902306 4910 scope.go:117] "RemoveContainer" containerID="74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f" Feb 26 23:03:46 crc kubenswrapper[4910]: E0226 23:03:46.903087 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 23:03:57 crc kubenswrapper[4910]: I0226 23:03:57.902810 4910 scope.go:117] "RemoveContainer" containerID="74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f" Feb 26 23:03:57 crc kubenswrapper[4910]: E0226 23:03:57.904011 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 23:04:00 crc kubenswrapper[4910]: I0226 23:04:00.168708 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535784-5gjvx"] Feb 26 23:04:00 crc kubenswrapper[4910]: E0226 23:04:00.169898 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36706f6-732f-4956-8366-2e047b37f79b" containerName="oc" Feb 26 23:04:00 crc kubenswrapper[4910]: I0226 23:04:00.169925 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36706f6-732f-4956-8366-2e047b37f79b" containerName="oc" 
Feb 26 23:04:00 crc kubenswrapper[4910]: I0226 23:04:00.170571 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="d36706f6-732f-4956-8366-2e047b37f79b" containerName="oc" Feb 26 23:04:00 crc kubenswrapper[4910]: I0226 23:04:00.172046 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535784-5gjvx" Feb 26 23:04:00 crc kubenswrapper[4910]: I0226 23:04:00.176016 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 23:04:00 crc kubenswrapper[4910]: I0226 23:04:00.176598 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 23:04:00 crc kubenswrapper[4910]: I0226 23:04:00.176611 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 23:04:00 crc kubenswrapper[4910]: I0226 23:04:00.191034 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535784-5gjvx"] Feb 26 23:04:00 crc kubenswrapper[4910]: I0226 23:04:00.285981 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh25l\" (UniqueName: \"kubernetes.io/projected/9d380cac-a80b-4742-97b3-6aeafb9e0052-kube-api-access-vh25l\") pod \"auto-csr-approver-29535784-5gjvx\" (UID: \"9d380cac-a80b-4742-97b3-6aeafb9e0052\") " pod="openshift-infra/auto-csr-approver-29535784-5gjvx" Feb 26 23:04:00 crc kubenswrapper[4910]: I0226 23:04:00.388623 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh25l\" (UniqueName: \"kubernetes.io/projected/9d380cac-a80b-4742-97b3-6aeafb9e0052-kube-api-access-vh25l\") pod \"auto-csr-approver-29535784-5gjvx\" (UID: \"9d380cac-a80b-4742-97b3-6aeafb9e0052\") " pod="openshift-infra/auto-csr-approver-29535784-5gjvx" Feb 26 23:04:00 crc kubenswrapper[4910]: I0226 
23:04:00.422320 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh25l\" (UniqueName: \"kubernetes.io/projected/9d380cac-a80b-4742-97b3-6aeafb9e0052-kube-api-access-vh25l\") pod \"auto-csr-approver-29535784-5gjvx\" (UID: \"9d380cac-a80b-4742-97b3-6aeafb9e0052\") " pod="openshift-infra/auto-csr-approver-29535784-5gjvx" Feb 26 23:04:00 crc kubenswrapper[4910]: I0226 23:04:00.495927 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535784-5gjvx" Feb 26 23:04:00 crc kubenswrapper[4910]: I0226 23:04:00.948145 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535784-5gjvx"] Feb 26 23:04:01 crc kubenswrapper[4910]: I0226 23:04:01.618123 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535784-5gjvx" event={"ID":"9d380cac-a80b-4742-97b3-6aeafb9e0052","Type":"ContainerStarted","Data":"37ccac583aa3f65cda6057ab4dfbfd0e07092be14fc106c7985b9505ffe9321b"} Feb 26 23:04:02 crc kubenswrapper[4910]: I0226 23:04:02.628273 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535784-5gjvx" event={"ID":"9d380cac-a80b-4742-97b3-6aeafb9e0052","Type":"ContainerStarted","Data":"fcbee125f51609ede7b4e4dbccd33905cdab3feab4fc3c6f47eeaf5ac9fdbb02"} Feb 26 23:04:02 crc kubenswrapper[4910]: I0226 23:04:02.657629 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535784-5gjvx" podStartSLOduration=1.678600015 podStartE2EDuration="2.657606481s" podCreationTimestamp="2026-02-26 23:04:00 +0000 UTC" firstStartedPulling="2026-02-26 23:04:01.097215682 +0000 UTC m=+4126.176706223" lastFinishedPulling="2026-02-26 23:04:02.076222148 +0000 UTC m=+4127.155712689" observedRunningTime="2026-02-26 23:04:02.643625932 +0000 UTC m=+4127.723116493" watchObservedRunningTime="2026-02-26 23:04:02.657606481 +0000 UTC 
m=+4127.737097032" Feb 26 23:04:03 crc kubenswrapper[4910]: I0226 23:04:03.649049 4910 generic.go:334] "Generic (PLEG): container finished" podID="9d380cac-a80b-4742-97b3-6aeafb9e0052" containerID="fcbee125f51609ede7b4e4dbccd33905cdab3feab4fc3c6f47eeaf5ac9fdbb02" exitCode=0 Feb 26 23:04:03 crc kubenswrapper[4910]: I0226 23:04:03.649261 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535784-5gjvx" event={"ID":"9d380cac-a80b-4742-97b3-6aeafb9e0052","Type":"ContainerDied","Data":"fcbee125f51609ede7b4e4dbccd33905cdab3feab4fc3c6f47eeaf5ac9fdbb02"} Feb 26 23:04:05 crc kubenswrapper[4910]: I0226 23:04:05.202655 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535784-5gjvx" Feb 26 23:04:05 crc kubenswrapper[4910]: I0226 23:04:05.296205 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh25l\" (UniqueName: \"kubernetes.io/projected/9d380cac-a80b-4742-97b3-6aeafb9e0052-kube-api-access-vh25l\") pod \"9d380cac-a80b-4742-97b3-6aeafb9e0052\" (UID: \"9d380cac-a80b-4742-97b3-6aeafb9e0052\") " Feb 26 23:04:05 crc kubenswrapper[4910]: I0226 23:04:05.310081 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d380cac-a80b-4742-97b3-6aeafb9e0052-kube-api-access-vh25l" (OuterVolumeSpecName: "kube-api-access-vh25l") pod "9d380cac-a80b-4742-97b3-6aeafb9e0052" (UID: "9d380cac-a80b-4742-97b3-6aeafb9e0052"). InnerVolumeSpecName "kube-api-access-vh25l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 23:04:05 crc kubenswrapper[4910]: I0226 23:04:05.398683 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh25l\" (UniqueName: \"kubernetes.io/projected/9d380cac-a80b-4742-97b3-6aeafb9e0052-kube-api-access-vh25l\") on node \"crc\" DevicePath \"\"" Feb 26 23:04:05 crc kubenswrapper[4910]: I0226 23:04:05.683119 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535784-5gjvx" Feb 26 23:04:05 crc kubenswrapper[4910]: I0226 23:04:05.683010 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535784-5gjvx" event={"ID":"9d380cac-a80b-4742-97b3-6aeafb9e0052","Type":"ContainerDied","Data":"37ccac583aa3f65cda6057ab4dfbfd0e07092be14fc106c7985b9505ffe9321b"} Feb 26 23:04:05 crc kubenswrapper[4910]: I0226 23:04:05.683709 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37ccac583aa3f65cda6057ab4dfbfd0e07092be14fc106c7985b9505ffe9321b" Feb 26 23:04:05 crc kubenswrapper[4910]: I0226 23:04:05.737417 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535778-nshc2"] Feb 26 23:04:05 crc kubenswrapper[4910]: I0226 23:04:05.744459 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535778-nshc2"] Feb 26 23:04:05 crc kubenswrapper[4910]: I0226 23:04:05.915406 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="133d6b21-57a2-41dd-a617-548380fb2754" path="/var/lib/kubelet/pods/133d6b21-57a2-41dd-a617-548380fb2754/volumes" Feb 26 23:04:12 crc kubenswrapper[4910]: I0226 23:04:12.901637 4910 scope.go:117] "RemoveContainer" containerID="74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f" Feb 26 23:04:12 crc kubenswrapper[4910]: E0226 23:04:12.902779 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 23:04:26 crc kubenswrapper[4910]: I0226 23:04:26.902266 4910 scope.go:117] "RemoveContainer" containerID="74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f" Feb 26 23:04:26 crc kubenswrapper[4910]: E0226 23:04:26.903388 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 23:04:31 crc kubenswrapper[4910]: I0226 23:04:31.989861 4910 generic.go:334] "Generic (PLEG): container finished" podID="51be9c06-a324-4431-8114-2a8d2cc41902" containerID="5a5b6b12da77d2d986803da94b4b5e36b7ee7616d2d862210420009994894c2f" exitCode=0 Feb 26 23:04:31 crc kubenswrapper[4910]: I0226 23:04:31.989976 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bjtlp/must-gather-5tjg4" event={"ID":"51be9c06-a324-4431-8114-2a8d2cc41902","Type":"ContainerDied","Data":"5a5b6b12da77d2d986803da94b4b5e36b7ee7616d2d862210420009994894c2f"} Feb 26 23:04:31 crc kubenswrapper[4910]: I0226 23:04:31.991149 4910 scope.go:117] "RemoveContainer" containerID="5a5b6b12da77d2d986803da94b4b5e36b7ee7616d2d862210420009994894c2f" Feb 26 23:04:32 crc kubenswrapper[4910]: I0226 23:04:32.225643 4910 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-bjtlp_must-gather-5tjg4_51be9c06-a324-4431-8114-2a8d2cc41902/gather/0.log" Feb 26 23:04:37 crc kubenswrapper[4910]: I0226 23:04:37.901776 4910 scope.go:117] "RemoveContainer" containerID="74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f" Feb 26 23:04:37 crc kubenswrapper[4910]: E0226 23:04:37.903138 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 23:04:39 crc kubenswrapper[4910]: I0226 23:04:39.935023 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bjtlp/must-gather-5tjg4"] Feb 26 23:04:39 crc kubenswrapper[4910]: I0226 23:04:39.935812 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-bjtlp/must-gather-5tjg4" podUID="51be9c06-a324-4431-8114-2a8d2cc41902" containerName="copy" containerID="cri-o://36439b590c190c594636bb59d49ea4324da2e4cc1fa467189b0555803ac81dfc" gracePeriod=2 Feb 26 23:04:39 crc kubenswrapper[4910]: I0226 23:04:39.951487 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bjtlp/must-gather-5tjg4"] Feb 26 23:04:40 crc kubenswrapper[4910]: I0226 23:04:40.079374 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bjtlp_must-gather-5tjg4_51be9c06-a324-4431-8114-2a8d2cc41902/copy/0.log" Feb 26 23:04:40 crc kubenswrapper[4910]: I0226 23:04:40.080710 4910 generic.go:334] "Generic (PLEG): container finished" podID="51be9c06-a324-4431-8114-2a8d2cc41902" containerID="36439b590c190c594636bb59d49ea4324da2e4cc1fa467189b0555803ac81dfc" exitCode=143 Feb 26 23:04:40 crc 
kubenswrapper[4910]: I0226 23:04:40.907434 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bjtlp_must-gather-5tjg4_51be9c06-a324-4431-8114-2a8d2cc41902/copy/0.log" Feb 26 23:04:40 crc kubenswrapper[4910]: I0226 23:04:40.908116 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bjtlp/must-gather-5tjg4" Feb 26 23:04:40 crc kubenswrapper[4910]: I0226 23:04:40.993539 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/51be9c06-a324-4431-8114-2a8d2cc41902-must-gather-output\") pod \"51be9c06-a324-4431-8114-2a8d2cc41902\" (UID: \"51be9c06-a324-4431-8114-2a8d2cc41902\") " Feb 26 23:04:40 crc kubenswrapper[4910]: I0226 23:04:40.993685 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds72n\" (UniqueName: \"kubernetes.io/projected/51be9c06-a324-4431-8114-2a8d2cc41902-kube-api-access-ds72n\") pod \"51be9c06-a324-4431-8114-2a8d2cc41902\" (UID: \"51be9c06-a324-4431-8114-2a8d2cc41902\") " Feb 26 23:04:40 crc kubenswrapper[4910]: I0226 23:04:40.999815 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51be9c06-a324-4431-8114-2a8d2cc41902-kube-api-access-ds72n" (OuterVolumeSpecName: "kube-api-access-ds72n") pod "51be9c06-a324-4431-8114-2a8d2cc41902" (UID: "51be9c06-a324-4431-8114-2a8d2cc41902"). InnerVolumeSpecName "kube-api-access-ds72n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 23:04:41 crc kubenswrapper[4910]: I0226 23:04:41.093523 4910 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bjtlp_must-gather-5tjg4_51be9c06-a324-4431-8114-2a8d2cc41902/copy/0.log" Feb 26 23:04:41 crc kubenswrapper[4910]: I0226 23:04:41.094636 4910 scope.go:117] "RemoveContainer" containerID="36439b590c190c594636bb59d49ea4324da2e4cc1fa467189b0555803ac81dfc" Feb 26 23:04:41 crc kubenswrapper[4910]: I0226 23:04:41.094756 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bjtlp/must-gather-5tjg4" Feb 26 23:04:41 crc kubenswrapper[4910]: I0226 23:04:41.098530 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds72n\" (UniqueName: \"kubernetes.io/projected/51be9c06-a324-4431-8114-2a8d2cc41902-kube-api-access-ds72n\") on node \"crc\" DevicePath \"\"" Feb 26 23:04:41 crc kubenswrapper[4910]: I0226 23:04:41.130347 4910 scope.go:117] "RemoveContainer" containerID="5a5b6b12da77d2d986803da94b4b5e36b7ee7616d2d862210420009994894c2f" Feb 26 23:04:41 crc kubenswrapper[4910]: I0226 23:04:41.150252 4910 scope.go:117] "RemoveContainer" containerID="438285643c2b73f5069f72242324d5aa1501d24a52d98cd0e4ef6f09e2e09f7f" Feb 26 23:04:41 crc kubenswrapper[4910]: I0226 23:04:41.206240 4910 scope.go:117] "RemoveContainer" containerID="e21a5fd2bc7b1b9abb56239d04bd4a61e207cdc6ff594f89cd3d50871c5871ff" Feb 26 23:04:41 crc kubenswrapper[4910]: I0226 23:04:41.238074 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51be9c06-a324-4431-8114-2a8d2cc41902-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "51be9c06-a324-4431-8114-2a8d2cc41902" (UID: "51be9c06-a324-4431-8114-2a8d2cc41902"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 23:04:41 crc kubenswrapper[4910]: I0226 23:04:41.296858 4910 scope.go:117] "RemoveContainer" containerID="5a5b6b12da77d2d986803da94b4b5e36b7ee7616d2d862210420009994894c2f" Feb 26 23:04:41 crc kubenswrapper[4910]: E0226 23:04:41.297466 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a5b6b12da77d2d986803da94b4b5e36b7ee7616d2d862210420009994894c2f\": container with ID starting with 5a5b6b12da77d2d986803da94b4b5e36b7ee7616d2d862210420009994894c2f not found: ID does not exist" containerID="5a5b6b12da77d2d986803da94b4b5e36b7ee7616d2d862210420009994894c2f" Feb 26 23:04:41 crc kubenswrapper[4910]: E0226 23:04:41.297513 4910 kuberuntime_gc.go:150] "Failed to remove container" err="failed to get container status \"5a5b6b12da77d2d986803da94b4b5e36b7ee7616d2d862210420009994894c2f\": rpc error: code = NotFound desc = could not find container \"5a5b6b12da77d2d986803da94b4b5e36b7ee7616d2d862210420009994894c2f\": container with ID starting with 5a5b6b12da77d2d986803da94b4b5e36b7ee7616d2d862210420009994894c2f not found: ID does not exist" containerID="5a5b6b12da77d2d986803da94b4b5e36b7ee7616d2d862210420009994894c2f" Feb 26 23:04:41 crc kubenswrapper[4910]: I0226 23:04:41.302180 4910 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/51be9c06-a324-4431-8114-2a8d2cc41902-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 26 23:04:41 crc kubenswrapper[4910]: I0226 23:04:41.911362 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51be9c06-a324-4431-8114-2a8d2cc41902" path="/var/lib/kubelet/pods/51be9c06-a324-4431-8114-2a8d2cc41902/volumes" Feb 26 23:04:49 crc kubenswrapper[4910]: I0226 23:04:49.902217 4910 scope.go:117] "RemoveContainer" containerID="74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f" Feb 26 23:04:49 crc 
kubenswrapper[4910]: E0226 23:04:49.903460 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 23:04:59 crc kubenswrapper[4910]: I0226 23:04:59.992405 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6t9pb"] Feb 26 23:04:59 crc kubenswrapper[4910]: E0226 23:04:59.993514 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d380cac-a80b-4742-97b3-6aeafb9e0052" containerName="oc" Feb 26 23:04:59 crc kubenswrapper[4910]: I0226 23:04:59.993531 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d380cac-a80b-4742-97b3-6aeafb9e0052" containerName="oc" Feb 26 23:04:59 crc kubenswrapper[4910]: E0226 23:04:59.993558 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51be9c06-a324-4431-8114-2a8d2cc41902" containerName="gather" Feb 26 23:04:59 crc kubenswrapper[4910]: I0226 23:04:59.993566 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="51be9c06-a324-4431-8114-2a8d2cc41902" containerName="gather" Feb 26 23:04:59 crc kubenswrapper[4910]: E0226 23:04:59.993587 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51be9c06-a324-4431-8114-2a8d2cc41902" containerName="copy" Feb 26 23:04:59 crc kubenswrapper[4910]: I0226 23:04:59.993597 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="51be9c06-a324-4431-8114-2a8d2cc41902" containerName="copy" Feb 26 23:04:59 crc kubenswrapper[4910]: I0226 23:04:59.993855 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="51be9c06-a324-4431-8114-2a8d2cc41902" containerName="copy" Feb 26 23:04:59 crc kubenswrapper[4910]: 
I0226 23:04:59.993871 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="51be9c06-a324-4431-8114-2a8d2cc41902" containerName="gather" Feb 26 23:04:59 crc kubenswrapper[4910]: I0226 23:04:59.993903 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d380cac-a80b-4742-97b3-6aeafb9e0052" containerName="oc" Feb 26 23:04:59 crc kubenswrapper[4910]: I0226 23:04:59.995902 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6t9pb" Feb 26 23:05:00 crc kubenswrapper[4910]: I0226 23:05:00.007811 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6t9pb"] Feb 26 23:05:00 crc kubenswrapper[4910]: I0226 23:05:00.154867 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26d2b8a2-0d3b-49d4-af0a-ceddba308720-catalog-content\") pod \"redhat-operators-6t9pb\" (UID: \"26d2b8a2-0d3b-49d4-af0a-ceddba308720\") " pod="openshift-marketplace/redhat-operators-6t9pb" Feb 26 23:05:00 crc kubenswrapper[4910]: I0226 23:05:00.155040 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26d2b8a2-0d3b-49d4-af0a-ceddba308720-utilities\") pod \"redhat-operators-6t9pb\" (UID: \"26d2b8a2-0d3b-49d4-af0a-ceddba308720\") " pod="openshift-marketplace/redhat-operators-6t9pb" Feb 26 23:05:00 crc kubenswrapper[4910]: I0226 23:05:00.155127 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4tnb\" (UniqueName: \"kubernetes.io/projected/26d2b8a2-0d3b-49d4-af0a-ceddba308720-kube-api-access-s4tnb\") pod \"redhat-operators-6t9pb\" (UID: \"26d2b8a2-0d3b-49d4-af0a-ceddba308720\") " pod="openshift-marketplace/redhat-operators-6t9pb" Feb 26 23:05:00 crc kubenswrapper[4910]: I0226 23:05:00.257251 
4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26d2b8a2-0d3b-49d4-af0a-ceddba308720-catalog-content\") pod \"redhat-operators-6t9pb\" (UID: \"26d2b8a2-0d3b-49d4-af0a-ceddba308720\") " pod="openshift-marketplace/redhat-operators-6t9pb" Feb 26 23:05:00 crc kubenswrapper[4910]: I0226 23:05:00.257385 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26d2b8a2-0d3b-49d4-af0a-ceddba308720-utilities\") pod \"redhat-operators-6t9pb\" (UID: \"26d2b8a2-0d3b-49d4-af0a-ceddba308720\") " pod="openshift-marketplace/redhat-operators-6t9pb" Feb 26 23:05:00 crc kubenswrapper[4910]: I0226 23:05:00.257476 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4tnb\" (UniqueName: \"kubernetes.io/projected/26d2b8a2-0d3b-49d4-af0a-ceddba308720-kube-api-access-s4tnb\") pod \"redhat-operators-6t9pb\" (UID: \"26d2b8a2-0d3b-49d4-af0a-ceddba308720\") " pod="openshift-marketplace/redhat-operators-6t9pb" Feb 26 23:05:00 crc kubenswrapper[4910]: I0226 23:05:00.258519 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26d2b8a2-0d3b-49d4-af0a-ceddba308720-catalog-content\") pod \"redhat-operators-6t9pb\" (UID: \"26d2b8a2-0d3b-49d4-af0a-ceddba308720\") " pod="openshift-marketplace/redhat-operators-6t9pb" Feb 26 23:05:00 crc kubenswrapper[4910]: I0226 23:05:00.258618 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26d2b8a2-0d3b-49d4-af0a-ceddba308720-utilities\") pod \"redhat-operators-6t9pb\" (UID: \"26d2b8a2-0d3b-49d4-af0a-ceddba308720\") " pod="openshift-marketplace/redhat-operators-6t9pb" Feb 26 23:05:00 crc kubenswrapper[4910]: I0226 23:05:00.279674 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-s4tnb\" (UniqueName: \"kubernetes.io/projected/26d2b8a2-0d3b-49d4-af0a-ceddba308720-kube-api-access-s4tnb\") pod \"redhat-operators-6t9pb\" (UID: \"26d2b8a2-0d3b-49d4-af0a-ceddba308720\") " pod="openshift-marketplace/redhat-operators-6t9pb" Feb 26 23:05:00 crc kubenswrapper[4910]: I0226 23:05:00.335874 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6t9pb" Feb 26 23:05:00 crc kubenswrapper[4910]: W0226 23:05:00.816729 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26d2b8a2_0d3b_49d4_af0a_ceddba308720.slice/crio-98ab5e4c6677703187224adaa0eba99ea53b41599365773d8df4421c1280512e WatchSource:0}: Error finding container 98ab5e4c6677703187224adaa0eba99ea53b41599365773d8df4421c1280512e: Status 404 returned error can't find the container with id 98ab5e4c6677703187224adaa0eba99ea53b41599365773d8df4421c1280512e Feb 26 23:05:00 crc kubenswrapper[4910]: I0226 23:05:00.831328 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6t9pb"] Feb 26 23:05:01 crc kubenswrapper[4910]: I0226 23:05:01.663023 4910 generic.go:334] "Generic (PLEG): container finished" podID="26d2b8a2-0d3b-49d4-af0a-ceddba308720" containerID="27fa265f174d0193794697df0896260cef57214ffbf9fb00a4997a7a9b8bad6e" exitCode=0 Feb 26 23:05:01 crc kubenswrapper[4910]: I0226 23:05:01.663192 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t9pb" event={"ID":"26d2b8a2-0d3b-49d4-af0a-ceddba308720","Type":"ContainerDied","Data":"27fa265f174d0193794697df0896260cef57214ffbf9fb00a4997a7a9b8bad6e"} Feb 26 23:05:01 crc kubenswrapper[4910]: I0226 23:05:01.663347 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t9pb" 
event={"ID":"26d2b8a2-0d3b-49d4-af0a-ceddba308720","Type":"ContainerStarted","Data":"98ab5e4c6677703187224adaa0eba99ea53b41599365773d8df4421c1280512e"} Feb 26 23:05:02 crc kubenswrapper[4910]: I0226 23:05:02.902430 4910 scope.go:117] "RemoveContainer" containerID="74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f" Feb 26 23:05:02 crc kubenswrapper[4910]: E0226 23:05:02.902814 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 23:05:03 crc kubenswrapper[4910]: I0226 23:05:03.697148 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t9pb" event={"ID":"26d2b8a2-0d3b-49d4-af0a-ceddba308720","Type":"ContainerStarted","Data":"47ce183322a35a7c365229b26874eec0466c27de1a5c4c25f8374ea52da4713a"} Feb 26 23:05:06 crc kubenswrapper[4910]: I0226 23:05:06.735067 4910 generic.go:334] "Generic (PLEG): container finished" podID="26d2b8a2-0d3b-49d4-af0a-ceddba308720" containerID="47ce183322a35a7c365229b26874eec0466c27de1a5c4c25f8374ea52da4713a" exitCode=0 Feb 26 23:05:06 crc kubenswrapper[4910]: I0226 23:05:06.735319 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t9pb" event={"ID":"26d2b8a2-0d3b-49d4-af0a-ceddba308720","Type":"ContainerDied","Data":"47ce183322a35a7c365229b26874eec0466c27de1a5c4c25f8374ea52da4713a"} Feb 26 23:05:08 crc kubenswrapper[4910]: I0226 23:05:08.759116 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t9pb" 
event={"ID":"26d2b8a2-0d3b-49d4-af0a-ceddba308720","Type":"ContainerStarted","Data":"fe156ae766f6fdd1f8ce757ab263bca80135cb6fd98cd8bd6925a768b309db26"} Feb 26 23:05:08 crc kubenswrapper[4910]: I0226 23:05:08.782518 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6t9pb" podStartSLOduration=4.119870545 podStartE2EDuration="9.78250205s" podCreationTimestamp="2026-02-26 23:04:59 +0000 UTC" firstStartedPulling="2026-02-26 23:05:01.664934546 +0000 UTC m=+4186.744425087" lastFinishedPulling="2026-02-26 23:05:07.327566041 +0000 UTC m=+4192.407056592" observedRunningTime="2026-02-26 23:05:08.780181156 +0000 UTC m=+4193.859671697" watchObservedRunningTime="2026-02-26 23:05:08.78250205 +0000 UTC m=+4193.861992591" Feb 26 23:05:10 crc kubenswrapper[4910]: I0226 23:05:10.336249 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6t9pb" Feb 26 23:05:10 crc kubenswrapper[4910]: I0226 23:05:10.337620 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6t9pb" Feb 26 23:05:11 crc kubenswrapper[4910]: I0226 23:05:11.410545 4910 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6t9pb" podUID="26d2b8a2-0d3b-49d4-af0a-ceddba308720" containerName="registry-server" probeResult="failure" output=< Feb 26 23:05:11 crc kubenswrapper[4910]: timeout: failed to connect service ":50051" within 1s Feb 26 23:05:11 crc kubenswrapper[4910]: > Feb 26 23:05:16 crc kubenswrapper[4910]: I0226 23:05:16.901734 4910 scope.go:117] "RemoveContainer" containerID="74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f" Feb 26 23:05:16 crc kubenswrapper[4910]: E0226 23:05:16.902706 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 23:05:18 crc kubenswrapper[4910]: I0226 23:05:18.710241 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w6chl"] Feb 26 23:05:18 crc kubenswrapper[4910]: I0226 23:05:18.712618 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w6chl" Feb 26 23:05:18 crc kubenswrapper[4910]: I0226 23:05:18.723418 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w6chl"] Feb 26 23:05:18 crc kubenswrapper[4910]: I0226 23:05:18.862458 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52527\" (UniqueName: \"kubernetes.io/projected/d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5-kube-api-access-52527\") pod \"certified-operators-w6chl\" (UID: \"d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5\") " pod="openshift-marketplace/certified-operators-w6chl" Feb 26 23:05:18 crc kubenswrapper[4910]: I0226 23:05:18.862571 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5-catalog-content\") pod \"certified-operators-w6chl\" (UID: \"d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5\") " pod="openshift-marketplace/certified-operators-w6chl" Feb 26 23:05:18 crc kubenswrapper[4910]: I0226 23:05:18.862634 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5-utilities\") pod \"certified-operators-w6chl\" (UID: \"d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5\") " 
pod="openshift-marketplace/certified-operators-w6chl" Feb 26 23:05:18 crc kubenswrapper[4910]: I0226 23:05:18.964241 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5-catalog-content\") pod \"certified-operators-w6chl\" (UID: \"d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5\") " pod="openshift-marketplace/certified-operators-w6chl" Feb 26 23:05:18 crc kubenswrapper[4910]: I0226 23:05:18.964344 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5-utilities\") pod \"certified-operators-w6chl\" (UID: \"d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5\") " pod="openshift-marketplace/certified-operators-w6chl" Feb 26 23:05:18 crc kubenswrapper[4910]: I0226 23:05:18.964507 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52527\" (UniqueName: \"kubernetes.io/projected/d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5-kube-api-access-52527\") pod \"certified-operators-w6chl\" (UID: \"d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5\") " pod="openshift-marketplace/certified-operators-w6chl" Feb 26 23:05:18 crc kubenswrapper[4910]: I0226 23:05:18.964764 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5-utilities\") pod \"certified-operators-w6chl\" (UID: \"d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5\") " pod="openshift-marketplace/certified-operators-w6chl" Feb 26 23:05:18 crc kubenswrapper[4910]: I0226 23:05:18.965019 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5-catalog-content\") pod \"certified-operators-w6chl\" (UID: \"d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5\") " 
pod="openshift-marketplace/certified-operators-w6chl" Feb 26 23:05:18 crc kubenswrapper[4910]: I0226 23:05:18.988017 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52527\" (UniqueName: \"kubernetes.io/projected/d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5-kube-api-access-52527\") pod \"certified-operators-w6chl\" (UID: \"d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5\") " pod="openshift-marketplace/certified-operators-w6chl" Feb 26 23:05:19 crc kubenswrapper[4910]: I0226 23:05:19.032274 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w6chl" Feb 26 23:05:19 crc kubenswrapper[4910]: W0226 23:05:19.514642 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd967d78a_4b9b_40d5_b7f2_7d83c2bdc3f5.slice/crio-99e9ecfaa6eb56a5cc20be1f383b98fae8ccf1fa1847585596bf0d0da7fd78e1 WatchSource:0}: Error finding container 99e9ecfaa6eb56a5cc20be1f383b98fae8ccf1fa1847585596bf0d0da7fd78e1: Status 404 returned error can't find the container with id 99e9ecfaa6eb56a5cc20be1f383b98fae8ccf1fa1847585596bf0d0da7fd78e1 Feb 26 23:05:19 crc kubenswrapper[4910]: I0226 23:05:19.514741 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w6chl"] Feb 26 23:05:19 crc kubenswrapper[4910]: I0226 23:05:19.879839 4910 generic.go:334] "Generic (PLEG): container finished" podID="d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5" containerID="bacfa49f2879870ae1cfc0d7a94b30d619c1559c2e712c4d487a5ba305cd0524" exitCode=0 Feb 26 23:05:19 crc kubenswrapper[4910]: I0226 23:05:19.879892 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6chl" event={"ID":"d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5","Type":"ContainerDied","Data":"bacfa49f2879870ae1cfc0d7a94b30d619c1559c2e712c4d487a5ba305cd0524"} Feb 26 23:05:19 crc kubenswrapper[4910]: I0226 23:05:19.879922 
4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6chl" event={"ID":"d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5","Type":"ContainerStarted","Data":"99e9ecfaa6eb56a5cc20be1f383b98fae8ccf1fa1847585596bf0d0da7fd78e1"} Feb 26 23:05:20 crc kubenswrapper[4910]: I0226 23:05:20.384242 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6t9pb" Feb 26 23:05:20 crc kubenswrapper[4910]: I0226 23:05:20.444718 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6t9pb" Feb 26 23:05:20 crc kubenswrapper[4910]: I0226 23:05:20.891712 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6chl" event={"ID":"d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5","Type":"ContainerStarted","Data":"067c971b09670b6a720d3499ee3520e76132ae1ccaafe191cb6ef0da393136ea"} Feb 26 23:05:22 crc kubenswrapper[4910]: I0226 23:05:22.690520 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6t9pb"] Feb 26 23:05:22 crc kubenswrapper[4910]: I0226 23:05:22.690963 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6t9pb" podUID="26d2b8a2-0d3b-49d4-af0a-ceddba308720" containerName="registry-server" containerID="cri-o://fe156ae766f6fdd1f8ce757ab263bca80135cb6fd98cd8bd6925a768b309db26" gracePeriod=2 Feb 26 23:05:22 crc kubenswrapper[4910]: I0226 23:05:22.915783 4910 generic.go:334] "Generic (PLEG): container finished" podID="26d2b8a2-0d3b-49d4-af0a-ceddba308720" containerID="fe156ae766f6fdd1f8ce757ab263bca80135cb6fd98cd8bd6925a768b309db26" exitCode=0 Feb 26 23:05:22 crc kubenswrapper[4910]: I0226 23:05:22.915825 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t9pb" 
event={"ID":"26d2b8a2-0d3b-49d4-af0a-ceddba308720","Type":"ContainerDied","Data":"fe156ae766f6fdd1f8ce757ab263bca80135cb6fd98cd8bd6925a768b309db26"} Feb 26 23:05:23 crc kubenswrapper[4910]: I0226 23:05:23.800631 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6t9pb" Feb 26 23:05:23 crc kubenswrapper[4910]: I0226 23:05:23.893306 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26d2b8a2-0d3b-49d4-af0a-ceddba308720-utilities\") pod \"26d2b8a2-0d3b-49d4-af0a-ceddba308720\" (UID: \"26d2b8a2-0d3b-49d4-af0a-ceddba308720\") " Feb 26 23:05:23 crc kubenswrapper[4910]: I0226 23:05:23.893353 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4tnb\" (UniqueName: \"kubernetes.io/projected/26d2b8a2-0d3b-49d4-af0a-ceddba308720-kube-api-access-s4tnb\") pod \"26d2b8a2-0d3b-49d4-af0a-ceddba308720\" (UID: \"26d2b8a2-0d3b-49d4-af0a-ceddba308720\") " Feb 26 23:05:23 crc kubenswrapper[4910]: I0226 23:05:23.893419 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26d2b8a2-0d3b-49d4-af0a-ceddba308720-catalog-content\") pod \"26d2b8a2-0d3b-49d4-af0a-ceddba308720\" (UID: \"26d2b8a2-0d3b-49d4-af0a-ceddba308720\") " Feb 26 23:05:23 crc kubenswrapper[4910]: I0226 23:05:23.894028 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26d2b8a2-0d3b-49d4-af0a-ceddba308720-utilities" (OuterVolumeSpecName: "utilities") pod "26d2b8a2-0d3b-49d4-af0a-ceddba308720" (UID: "26d2b8a2-0d3b-49d4-af0a-ceddba308720"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 23:05:23 crc kubenswrapper[4910]: I0226 23:05:23.919085 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26d2b8a2-0d3b-49d4-af0a-ceddba308720-kube-api-access-s4tnb" (OuterVolumeSpecName: "kube-api-access-s4tnb") pod "26d2b8a2-0d3b-49d4-af0a-ceddba308720" (UID: "26d2b8a2-0d3b-49d4-af0a-ceddba308720"). InnerVolumeSpecName "kube-api-access-s4tnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 23:05:23 crc kubenswrapper[4910]: I0226 23:05:23.938816 4910 generic.go:334] "Generic (PLEG): container finished" podID="d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5" containerID="067c971b09670b6a720d3499ee3520e76132ae1ccaafe191cb6ef0da393136ea" exitCode=0 Feb 26 23:05:23 crc kubenswrapper[4910]: I0226 23:05:23.945093 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6t9pb" Feb 26 23:05:23 crc kubenswrapper[4910]: I0226 23:05:23.955707 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6chl" event={"ID":"d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5","Type":"ContainerDied","Data":"067c971b09670b6a720d3499ee3520e76132ae1ccaafe191cb6ef0da393136ea"} Feb 26 23:05:23 crc kubenswrapper[4910]: I0226 23:05:23.955767 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t9pb" event={"ID":"26d2b8a2-0d3b-49d4-af0a-ceddba308720","Type":"ContainerDied","Data":"98ab5e4c6677703187224adaa0eba99ea53b41599365773d8df4421c1280512e"} Feb 26 23:05:23 crc kubenswrapper[4910]: I0226 23:05:23.955801 4910 scope.go:117] "RemoveContainer" containerID="fe156ae766f6fdd1f8ce757ab263bca80135cb6fd98cd8bd6925a768b309db26" Feb 26 23:05:23 crc kubenswrapper[4910]: I0226 23:05:23.996281 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/26d2b8a2-0d3b-49d4-af0a-ceddba308720-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 23:05:23 crc kubenswrapper[4910]: I0226 23:05:23.997221 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4tnb\" (UniqueName: \"kubernetes.io/projected/26d2b8a2-0d3b-49d4-af0a-ceddba308720-kube-api-access-s4tnb\") on node \"crc\" DevicePath \"\"" Feb 26 23:05:23 crc kubenswrapper[4910]: I0226 23:05:23.997485 4910 scope.go:117] "RemoveContainer" containerID="47ce183322a35a7c365229b26874eec0466c27de1a5c4c25f8374ea52da4713a" Feb 26 23:05:24 crc kubenswrapper[4910]: I0226 23:05:24.039981 4910 scope.go:117] "RemoveContainer" containerID="27fa265f174d0193794697df0896260cef57214ffbf9fb00a4997a7a9b8bad6e" Feb 26 23:05:24 crc kubenswrapper[4910]: I0226 23:05:24.060716 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26d2b8a2-0d3b-49d4-af0a-ceddba308720-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26d2b8a2-0d3b-49d4-af0a-ceddba308720" (UID: "26d2b8a2-0d3b-49d4-af0a-ceddba308720"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 23:05:24 crc kubenswrapper[4910]: I0226 23:05:24.101563 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26d2b8a2-0d3b-49d4-af0a-ceddba308720-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 23:05:24 crc kubenswrapper[4910]: I0226 23:05:24.928325 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6t9pb"] Feb 26 23:05:24 crc kubenswrapper[4910]: I0226 23:05:24.938605 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6t9pb"] Feb 26 23:05:25 crc kubenswrapper[4910]: I0226 23:05:25.926748 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26d2b8a2-0d3b-49d4-af0a-ceddba308720" path="/var/lib/kubelet/pods/26d2b8a2-0d3b-49d4-af0a-ceddba308720/volumes" Feb 26 23:05:25 crc kubenswrapper[4910]: I0226 23:05:25.965537 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6chl" event={"ID":"d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5","Type":"ContainerStarted","Data":"7a6e17fb1661784bbd894e1a3e062a72a69fa6325fc170d1764b2256a085209e"} Feb 26 23:05:25 crc kubenswrapper[4910]: I0226 23:05:25.993692 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w6chl" podStartSLOduration=3.456269263 podStartE2EDuration="7.99367267s" podCreationTimestamp="2026-02-26 23:05:18 +0000 UTC" firstStartedPulling="2026-02-26 23:05:19.882964325 +0000 UTC m=+4204.962454866" lastFinishedPulling="2026-02-26 23:05:24.420367732 +0000 UTC m=+4209.499858273" observedRunningTime="2026-02-26 23:05:25.984555843 +0000 UTC m=+4211.064046404" watchObservedRunningTime="2026-02-26 23:05:25.99367267 +0000 UTC m=+4211.073163221" Feb 26 23:05:29 crc kubenswrapper[4910]: I0226 23:05:29.033140 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-w6chl" Feb 26 23:05:29 crc kubenswrapper[4910]: I0226 23:05:29.035363 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w6chl" Feb 26 23:05:29 crc kubenswrapper[4910]: I0226 23:05:29.097770 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w6chl" Feb 26 23:05:30 crc kubenswrapper[4910]: I0226 23:05:30.081085 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w6chl" Feb 26 23:05:30 crc kubenswrapper[4910]: I0226 23:05:30.902058 4910 scope.go:117] "RemoveContainer" containerID="74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f" Feb 26 23:05:30 crc kubenswrapper[4910]: E0226 23:05:30.902592 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 23:05:31 crc kubenswrapper[4910]: I0226 23:05:31.295398 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w6chl"] Feb 26 23:05:32 crc kubenswrapper[4910]: I0226 23:05:32.035239 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w6chl" podUID="d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5" containerName="registry-server" containerID="cri-o://7a6e17fb1661784bbd894e1a3e062a72a69fa6325fc170d1764b2256a085209e" gracePeriod=2 Feb 26 23:05:32 crc kubenswrapper[4910]: I0226 23:05:32.710662 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w6chl" Feb 26 23:05:32 crc kubenswrapper[4910]: I0226 23:05:32.787750 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52527\" (UniqueName: \"kubernetes.io/projected/d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5-kube-api-access-52527\") pod \"d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5\" (UID: \"d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5\") " Feb 26 23:05:32 crc kubenswrapper[4910]: I0226 23:05:32.787849 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5-catalog-content\") pod \"d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5\" (UID: \"d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5\") " Feb 26 23:05:32 crc kubenswrapper[4910]: I0226 23:05:32.787878 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5-utilities\") pod \"d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5\" (UID: \"d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5\") " Feb 26 23:05:32 crc kubenswrapper[4910]: I0226 23:05:32.788900 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5-utilities" (OuterVolumeSpecName: "utilities") pod "d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5" (UID: "d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 23:05:32 crc kubenswrapper[4910]: I0226 23:05:32.800919 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5-kube-api-access-52527" (OuterVolumeSpecName: "kube-api-access-52527") pod "d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5" (UID: "d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5"). InnerVolumeSpecName "kube-api-access-52527". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 23:05:32 crc kubenswrapper[4910]: I0226 23:05:32.863266 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5" (UID: "d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 23:05:32 crc kubenswrapper[4910]: I0226 23:05:32.890022 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52527\" (UniqueName: \"kubernetes.io/projected/d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5-kube-api-access-52527\") on node \"crc\" DevicePath \"\"" Feb 26 23:05:32 crc kubenswrapper[4910]: I0226 23:05:32.890053 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 23:05:32 crc kubenswrapper[4910]: I0226 23:05:32.890063 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 23:05:33 crc kubenswrapper[4910]: I0226 23:05:33.055364 4910 generic.go:334] "Generic (PLEG): container finished" podID="d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5" containerID="7a6e17fb1661784bbd894e1a3e062a72a69fa6325fc170d1764b2256a085209e" exitCode=0 Feb 26 23:05:33 crc kubenswrapper[4910]: I0226 23:05:33.055433 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6chl" event={"ID":"d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5","Type":"ContainerDied","Data":"7a6e17fb1661784bbd894e1a3e062a72a69fa6325fc170d1764b2256a085209e"} Feb 26 23:05:33 crc kubenswrapper[4910]: I0226 23:05:33.055466 4910 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-w6chl" event={"ID":"d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5","Type":"ContainerDied","Data":"99e9ecfaa6eb56a5cc20be1f383b98fae8ccf1fa1847585596bf0d0da7fd78e1"} Feb 26 23:05:33 crc kubenswrapper[4910]: I0226 23:05:33.055467 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w6chl" Feb 26 23:05:33 crc kubenswrapper[4910]: I0226 23:05:33.055503 4910 scope.go:117] "RemoveContainer" containerID="7a6e17fb1661784bbd894e1a3e062a72a69fa6325fc170d1764b2256a085209e" Feb 26 23:05:33 crc kubenswrapper[4910]: I0226 23:05:33.085096 4910 scope.go:117] "RemoveContainer" containerID="067c971b09670b6a720d3499ee3520e76132ae1ccaafe191cb6ef0da393136ea" Feb 26 23:05:33 crc kubenswrapper[4910]: I0226 23:05:33.098934 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w6chl"] Feb 26 23:05:33 crc kubenswrapper[4910]: I0226 23:05:33.111393 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w6chl"] Feb 26 23:05:33 crc kubenswrapper[4910]: I0226 23:05:33.116922 4910 scope.go:117] "RemoveContainer" containerID="bacfa49f2879870ae1cfc0d7a94b30d619c1559c2e712c4d487a5ba305cd0524" Feb 26 23:05:33 crc kubenswrapper[4910]: I0226 23:05:33.173979 4910 scope.go:117] "RemoveContainer" containerID="7a6e17fb1661784bbd894e1a3e062a72a69fa6325fc170d1764b2256a085209e" Feb 26 23:05:33 crc kubenswrapper[4910]: E0226 23:05:33.174708 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a6e17fb1661784bbd894e1a3e062a72a69fa6325fc170d1764b2256a085209e\": container with ID starting with 7a6e17fb1661784bbd894e1a3e062a72a69fa6325fc170d1764b2256a085209e not found: ID does not exist" containerID="7a6e17fb1661784bbd894e1a3e062a72a69fa6325fc170d1764b2256a085209e" Feb 26 23:05:33 crc kubenswrapper[4910]: I0226 
23:05:33.174745 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a6e17fb1661784bbd894e1a3e062a72a69fa6325fc170d1764b2256a085209e"} err="failed to get container status \"7a6e17fb1661784bbd894e1a3e062a72a69fa6325fc170d1764b2256a085209e\": rpc error: code = NotFound desc = could not find container \"7a6e17fb1661784bbd894e1a3e062a72a69fa6325fc170d1764b2256a085209e\": container with ID starting with 7a6e17fb1661784bbd894e1a3e062a72a69fa6325fc170d1764b2256a085209e not found: ID does not exist" Feb 26 23:05:33 crc kubenswrapper[4910]: I0226 23:05:33.174770 4910 scope.go:117] "RemoveContainer" containerID="067c971b09670b6a720d3499ee3520e76132ae1ccaafe191cb6ef0da393136ea" Feb 26 23:05:33 crc kubenswrapper[4910]: E0226 23:05:33.175260 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"067c971b09670b6a720d3499ee3520e76132ae1ccaafe191cb6ef0da393136ea\": container with ID starting with 067c971b09670b6a720d3499ee3520e76132ae1ccaafe191cb6ef0da393136ea not found: ID does not exist" containerID="067c971b09670b6a720d3499ee3520e76132ae1ccaafe191cb6ef0da393136ea" Feb 26 23:05:33 crc kubenswrapper[4910]: I0226 23:05:33.175318 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"067c971b09670b6a720d3499ee3520e76132ae1ccaafe191cb6ef0da393136ea"} err="failed to get container status \"067c971b09670b6a720d3499ee3520e76132ae1ccaafe191cb6ef0da393136ea\": rpc error: code = NotFound desc = could not find container \"067c971b09670b6a720d3499ee3520e76132ae1ccaafe191cb6ef0da393136ea\": container with ID starting with 067c971b09670b6a720d3499ee3520e76132ae1ccaafe191cb6ef0da393136ea not found: ID does not exist" Feb 26 23:05:33 crc kubenswrapper[4910]: I0226 23:05:33.175347 4910 scope.go:117] "RemoveContainer" containerID="bacfa49f2879870ae1cfc0d7a94b30d619c1559c2e712c4d487a5ba305cd0524" Feb 26 23:05:33 crc 
kubenswrapper[4910]: E0226 23:05:33.175869 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bacfa49f2879870ae1cfc0d7a94b30d619c1559c2e712c4d487a5ba305cd0524\": container with ID starting with bacfa49f2879870ae1cfc0d7a94b30d619c1559c2e712c4d487a5ba305cd0524 not found: ID does not exist" containerID="bacfa49f2879870ae1cfc0d7a94b30d619c1559c2e712c4d487a5ba305cd0524" Feb 26 23:05:33 crc kubenswrapper[4910]: I0226 23:05:33.175891 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bacfa49f2879870ae1cfc0d7a94b30d619c1559c2e712c4d487a5ba305cd0524"} err="failed to get container status \"bacfa49f2879870ae1cfc0d7a94b30d619c1559c2e712c4d487a5ba305cd0524\": rpc error: code = NotFound desc = could not find container \"bacfa49f2879870ae1cfc0d7a94b30d619c1559c2e712c4d487a5ba305cd0524\": container with ID starting with bacfa49f2879870ae1cfc0d7a94b30d619c1559c2e712c4d487a5ba305cd0524 not found: ID does not exist" Feb 26 23:05:33 crc kubenswrapper[4910]: I0226 23:05:33.920786 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5" path="/var/lib/kubelet/pods/d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5/volumes" Feb 26 23:05:41 crc kubenswrapper[4910]: I0226 23:05:41.350846 4910 scope.go:117] "RemoveContainer" containerID="7733aeff5528337c757d2863bd995e45bfe286abba754700f6627e8fd0c8cb9e" Feb 26 23:05:41 crc kubenswrapper[4910]: I0226 23:05:41.392222 4910 scope.go:117] "RemoveContainer" containerID="d7645a2279bc84d117c3e88692e3c8342aa24b054c1eddef49987f15a09a076d" Feb 26 23:05:43 crc kubenswrapper[4910]: I0226 23:05:43.902355 4910 scope.go:117] "RemoveContainer" containerID="74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f" Feb 26 23:05:43 crc kubenswrapper[4910]: E0226 23:05:43.903656 4910 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6xpv4_openshift-machine-config-operator(69251a00-4e6e-48f6-ae1b-d3001d22b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" Feb 26 23:05:57 crc kubenswrapper[4910]: I0226 23:05:57.901777 4910 scope.go:117] "RemoveContainer" containerID="74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f" Feb 26 23:05:58 crc kubenswrapper[4910]: I0226 23:05:58.312138 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerStarted","Data":"4c8a253e3f4c6e196190b6a87e6234297c42977dcd1d3e36b9ba538a59034602"} Feb 26 23:06:00 crc kubenswrapper[4910]: I0226 23:06:00.158865 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535786-8hnbm"] Feb 26 23:06:00 crc kubenswrapper[4910]: E0226 23:06:00.160240 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26d2b8a2-0d3b-49d4-af0a-ceddba308720" containerName="extract-utilities" Feb 26 23:06:00 crc kubenswrapper[4910]: I0226 23:06:00.160256 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="26d2b8a2-0d3b-49d4-af0a-ceddba308720" containerName="extract-utilities" Feb 26 23:06:00 crc kubenswrapper[4910]: E0226 23:06:00.160271 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5" containerName="extract-content" Feb 26 23:06:00 crc kubenswrapper[4910]: I0226 23:06:00.160277 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5" containerName="extract-content" Feb 26 23:06:00 crc kubenswrapper[4910]: E0226 23:06:00.160296 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5" 
containerName="registry-server" Feb 26 23:06:00 crc kubenswrapper[4910]: I0226 23:06:00.160303 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5" containerName="registry-server" Feb 26 23:06:00 crc kubenswrapper[4910]: E0226 23:06:00.160320 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26d2b8a2-0d3b-49d4-af0a-ceddba308720" containerName="registry-server" Feb 26 23:06:00 crc kubenswrapper[4910]: I0226 23:06:00.160326 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="26d2b8a2-0d3b-49d4-af0a-ceddba308720" containerName="registry-server" Feb 26 23:06:00 crc kubenswrapper[4910]: E0226 23:06:00.160337 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26d2b8a2-0d3b-49d4-af0a-ceddba308720" containerName="extract-content" Feb 26 23:06:00 crc kubenswrapper[4910]: I0226 23:06:00.160342 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="26d2b8a2-0d3b-49d4-af0a-ceddba308720" containerName="extract-content" Feb 26 23:06:00 crc kubenswrapper[4910]: E0226 23:06:00.160356 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5" containerName="extract-utilities" Feb 26 23:06:00 crc kubenswrapper[4910]: I0226 23:06:00.160362 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5" containerName="extract-utilities" Feb 26 23:06:00 crc kubenswrapper[4910]: I0226 23:06:00.160542 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="d967d78a-4b9b-40d5-b7f2-7d83c2bdc3f5" containerName="registry-server" Feb 26 23:06:00 crc kubenswrapper[4910]: I0226 23:06:00.160559 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="26d2b8a2-0d3b-49d4-af0a-ceddba308720" containerName="registry-server" Feb 26 23:06:00 crc kubenswrapper[4910]: I0226 23:06:00.161295 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535786-8hnbm" Feb 26 23:06:00 crc kubenswrapper[4910]: I0226 23:06:00.167790 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 23:06:00 crc kubenswrapper[4910]: I0226 23:06:00.167973 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 23:06:00 crc kubenswrapper[4910]: I0226 23:06:00.168090 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 23:06:00 crc kubenswrapper[4910]: I0226 23:06:00.185064 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535786-8hnbm"] Feb 26 23:06:00 crc kubenswrapper[4910]: I0226 23:06:00.293289 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqmmm\" (UniqueName: \"kubernetes.io/projected/35b5c8f0-9aa2-493c-9011-b3d270326171-kube-api-access-tqmmm\") pod \"auto-csr-approver-29535786-8hnbm\" (UID: \"35b5c8f0-9aa2-493c-9011-b3d270326171\") " pod="openshift-infra/auto-csr-approver-29535786-8hnbm" Feb 26 23:06:00 crc kubenswrapper[4910]: I0226 23:06:00.395097 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqmmm\" (UniqueName: \"kubernetes.io/projected/35b5c8f0-9aa2-493c-9011-b3d270326171-kube-api-access-tqmmm\") pod \"auto-csr-approver-29535786-8hnbm\" (UID: \"35b5c8f0-9aa2-493c-9011-b3d270326171\") " pod="openshift-infra/auto-csr-approver-29535786-8hnbm" Feb 26 23:06:00 crc kubenswrapper[4910]: I0226 23:06:00.417507 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqmmm\" (UniqueName: \"kubernetes.io/projected/35b5c8f0-9aa2-493c-9011-b3d270326171-kube-api-access-tqmmm\") pod \"auto-csr-approver-29535786-8hnbm\" (UID: \"35b5c8f0-9aa2-493c-9011-b3d270326171\") " 
pod="openshift-infra/auto-csr-approver-29535786-8hnbm" Feb 26 23:06:00 crc kubenswrapper[4910]: I0226 23:06:00.498848 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535786-8hnbm" Feb 26 23:06:01 crc kubenswrapper[4910]: I0226 23:06:01.017037 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535786-8hnbm"] Feb 26 23:06:01 crc kubenswrapper[4910]: I0226 23:06:01.342226 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535786-8hnbm" event={"ID":"35b5c8f0-9aa2-493c-9011-b3d270326171","Type":"ContainerStarted","Data":"fd720fbf169e8b4ba93f7251d0f52c3eb84f0539ca64a3feb5a78e5c754128cc"} Feb 26 23:06:03 crc kubenswrapper[4910]: I0226 23:06:03.360006 4910 generic.go:334] "Generic (PLEG): container finished" podID="35b5c8f0-9aa2-493c-9011-b3d270326171" containerID="12b5399848e3017e3a5a7d02cab5d4b29d0d5fba4a2423e99b75cce3108e83f6" exitCode=0 Feb 26 23:06:03 crc kubenswrapper[4910]: I0226 23:06:03.360118 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535786-8hnbm" event={"ID":"35b5c8f0-9aa2-493c-9011-b3d270326171","Type":"ContainerDied","Data":"12b5399848e3017e3a5a7d02cab5d4b29d0d5fba4a2423e99b75cce3108e83f6"} Feb 26 23:06:04 crc kubenswrapper[4910]: I0226 23:06:04.928862 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535786-8hnbm" Feb 26 23:06:05 crc kubenswrapper[4910]: I0226 23:06:05.097548 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqmmm\" (UniqueName: \"kubernetes.io/projected/35b5c8f0-9aa2-493c-9011-b3d270326171-kube-api-access-tqmmm\") pod \"35b5c8f0-9aa2-493c-9011-b3d270326171\" (UID: \"35b5c8f0-9aa2-493c-9011-b3d270326171\") " Feb 26 23:06:05 crc kubenswrapper[4910]: I0226 23:06:05.104670 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35b5c8f0-9aa2-493c-9011-b3d270326171-kube-api-access-tqmmm" (OuterVolumeSpecName: "kube-api-access-tqmmm") pod "35b5c8f0-9aa2-493c-9011-b3d270326171" (UID: "35b5c8f0-9aa2-493c-9011-b3d270326171"). InnerVolumeSpecName "kube-api-access-tqmmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 23:06:05 crc kubenswrapper[4910]: I0226 23:06:05.200976 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqmmm\" (UniqueName: \"kubernetes.io/projected/35b5c8f0-9aa2-493c-9011-b3d270326171-kube-api-access-tqmmm\") on node \"crc\" DevicePath \"\"" Feb 26 23:06:05 crc kubenswrapper[4910]: I0226 23:06:05.392083 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535786-8hnbm" event={"ID":"35b5c8f0-9aa2-493c-9011-b3d270326171","Type":"ContainerDied","Data":"fd720fbf169e8b4ba93f7251d0f52c3eb84f0539ca64a3feb5a78e5c754128cc"} Feb 26 23:06:05 crc kubenswrapper[4910]: I0226 23:06:05.392142 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd720fbf169e8b4ba93f7251d0f52c3eb84f0539ca64a3feb5a78e5c754128cc" Feb 26 23:06:05 crc kubenswrapper[4910]: I0226 23:06:05.392289 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535786-8hnbm" Feb 26 23:06:06 crc kubenswrapper[4910]: I0226 23:06:06.006901 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535780-t5swx"] Feb 26 23:06:06 crc kubenswrapper[4910]: I0226 23:06:06.020687 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535780-t5swx"] Feb 26 23:06:07 crc kubenswrapper[4910]: I0226 23:06:07.920775 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceaedc11-b346-4997-83ad-7b8f5e99c1d2" path="/var/lib/kubelet/pods/ceaedc11-b346-4997-83ad-7b8f5e99c1d2/volumes" Feb 26 23:06:18 crc kubenswrapper[4910]: I0226 23:06:18.854838 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4xn8r"] Feb 26 23:06:18 crc kubenswrapper[4910]: E0226 23:06:18.856612 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35b5c8f0-9aa2-493c-9011-b3d270326171" containerName="oc" Feb 26 23:06:18 crc kubenswrapper[4910]: I0226 23:06:18.856644 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="35b5c8f0-9aa2-493c-9011-b3d270326171" containerName="oc" Feb 26 23:06:18 crc kubenswrapper[4910]: I0226 23:06:18.857219 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="35b5c8f0-9aa2-493c-9011-b3d270326171" containerName="oc" Feb 26 23:06:18 crc kubenswrapper[4910]: I0226 23:06:18.862922 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xn8r" Feb 26 23:06:18 crc kubenswrapper[4910]: I0226 23:06:18.871771 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xn8r"] Feb 26 23:06:18 crc kubenswrapper[4910]: I0226 23:06:18.929490 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnbbv\" (UniqueName: \"kubernetes.io/projected/704548cb-39d0-4ad0-967d-c8344bfae0c3-kube-api-access-hnbbv\") pod \"redhat-marketplace-4xn8r\" (UID: \"704548cb-39d0-4ad0-967d-c8344bfae0c3\") " pod="openshift-marketplace/redhat-marketplace-4xn8r" Feb 26 23:06:18 crc kubenswrapper[4910]: I0226 23:06:18.929640 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/704548cb-39d0-4ad0-967d-c8344bfae0c3-utilities\") pod \"redhat-marketplace-4xn8r\" (UID: \"704548cb-39d0-4ad0-967d-c8344bfae0c3\") " pod="openshift-marketplace/redhat-marketplace-4xn8r" Feb 26 23:06:18 crc kubenswrapper[4910]: I0226 23:06:18.929756 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/704548cb-39d0-4ad0-967d-c8344bfae0c3-catalog-content\") pod \"redhat-marketplace-4xn8r\" (UID: \"704548cb-39d0-4ad0-967d-c8344bfae0c3\") " pod="openshift-marketplace/redhat-marketplace-4xn8r" Feb 26 23:06:19 crc kubenswrapper[4910]: I0226 23:06:19.031552 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/704548cb-39d0-4ad0-967d-c8344bfae0c3-utilities\") pod \"redhat-marketplace-4xn8r\" (UID: \"704548cb-39d0-4ad0-967d-c8344bfae0c3\") " pod="openshift-marketplace/redhat-marketplace-4xn8r" Feb 26 23:06:19 crc kubenswrapper[4910]: I0226 23:06:19.031797 4910 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/704548cb-39d0-4ad0-967d-c8344bfae0c3-catalog-content\") pod \"redhat-marketplace-4xn8r\" (UID: \"704548cb-39d0-4ad0-967d-c8344bfae0c3\") " pod="openshift-marketplace/redhat-marketplace-4xn8r" Feb 26 23:06:19 crc kubenswrapper[4910]: I0226 23:06:19.031979 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnbbv\" (UniqueName: \"kubernetes.io/projected/704548cb-39d0-4ad0-967d-c8344bfae0c3-kube-api-access-hnbbv\") pod \"redhat-marketplace-4xn8r\" (UID: \"704548cb-39d0-4ad0-967d-c8344bfae0c3\") " pod="openshift-marketplace/redhat-marketplace-4xn8r" Feb 26 23:06:19 crc kubenswrapper[4910]: I0226 23:06:19.032632 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/704548cb-39d0-4ad0-967d-c8344bfae0c3-utilities\") pod \"redhat-marketplace-4xn8r\" (UID: \"704548cb-39d0-4ad0-967d-c8344bfae0c3\") " pod="openshift-marketplace/redhat-marketplace-4xn8r" Feb 26 23:06:19 crc kubenswrapper[4910]: I0226 23:06:19.032648 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/704548cb-39d0-4ad0-967d-c8344bfae0c3-catalog-content\") pod \"redhat-marketplace-4xn8r\" (UID: \"704548cb-39d0-4ad0-967d-c8344bfae0c3\") " pod="openshift-marketplace/redhat-marketplace-4xn8r" Feb 26 23:06:19 crc kubenswrapper[4910]: I0226 23:06:19.052204 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnbbv\" (UniqueName: \"kubernetes.io/projected/704548cb-39d0-4ad0-967d-c8344bfae0c3-kube-api-access-hnbbv\") pod \"redhat-marketplace-4xn8r\" (UID: \"704548cb-39d0-4ad0-967d-c8344bfae0c3\") " pod="openshift-marketplace/redhat-marketplace-4xn8r" Feb 26 23:06:19 crc kubenswrapper[4910]: I0226 23:06:19.222299 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xn8r" Feb 26 23:06:19 crc kubenswrapper[4910]: W0226 23:06:19.691315 4910 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod704548cb_39d0_4ad0_967d_c8344bfae0c3.slice/crio-b29746f5c9916e1cf8d2f8c31935d13336af3190333fe408c7e2f6b498d91608 WatchSource:0}: Error finding container b29746f5c9916e1cf8d2f8c31935d13336af3190333fe408c7e2f6b498d91608: Status 404 returned error can't find the container with id b29746f5c9916e1cf8d2f8c31935d13336af3190333fe408c7e2f6b498d91608 Feb 26 23:06:19 crc kubenswrapper[4910]: I0226 23:06:19.697777 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xn8r"] Feb 26 23:06:20 crc kubenswrapper[4910]: I0226 23:06:20.586000 4910 generic.go:334] "Generic (PLEG): container finished" podID="704548cb-39d0-4ad0-967d-c8344bfae0c3" containerID="28b442416a805c5394a0e45830ca8d337e8d6872efad6d94258a6ae198ff09fd" exitCode=0 Feb 26 23:06:20 crc kubenswrapper[4910]: I0226 23:06:20.586222 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xn8r" event={"ID":"704548cb-39d0-4ad0-967d-c8344bfae0c3","Type":"ContainerDied","Data":"28b442416a805c5394a0e45830ca8d337e8d6872efad6d94258a6ae198ff09fd"} Feb 26 23:06:20 crc kubenswrapper[4910]: I0226 23:06:20.586348 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xn8r" event={"ID":"704548cb-39d0-4ad0-967d-c8344bfae0c3","Type":"ContainerStarted","Data":"b29746f5c9916e1cf8d2f8c31935d13336af3190333fe408c7e2f6b498d91608"} Feb 26 23:06:20 crc kubenswrapper[4910]: I0226 23:06:20.590485 4910 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 23:06:22 crc kubenswrapper[4910]: I0226 23:06:22.621085 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-4xn8r" event={"ID":"704548cb-39d0-4ad0-967d-c8344bfae0c3","Type":"ContainerStarted","Data":"78ea5c16e27fa1e61dd2fdb4dc8ec6be9956ab48ae331815e6288a906a19f8e5"} Feb 26 23:06:23 crc kubenswrapper[4910]: I0226 23:06:23.632284 4910 generic.go:334] "Generic (PLEG): container finished" podID="704548cb-39d0-4ad0-967d-c8344bfae0c3" containerID="78ea5c16e27fa1e61dd2fdb4dc8ec6be9956ab48ae331815e6288a906a19f8e5" exitCode=0 Feb 26 23:06:23 crc kubenswrapper[4910]: I0226 23:06:23.632334 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xn8r" event={"ID":"704548cb-39d0-4ad0-967d-c8344bfae0c3","Type":"ContainerDied","Data":"78ea5c16e27fa1e61dd2fdb4dc8ec6be9956ab48ae331815e6288a906a19f8e5"} Feb 26 23:06:24 crc kubenswrapper[4910]: I0226 23:06:24.643486 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xn8r" event={"ID":"704548cb-39d0-4ad0-967d-c8344bfae0c3","Type":"ContainerStarted","Data":"66690e4b6620d96b9d84c147fdd476b21044a60d146d483487df7afd877e645b"} Feb 26 23:06:24 crc kubenswrapper[4910]: I0226 23:06:24.666983 4910 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4xn8r" podStartSLOduration=3.215948619 podStartE2EDuration="6.666963859s" podCreationTimestamp="2026-02-26 23:06:18 +0000 UTC" firstStartedPulling="2026-02-26 23:06:20.590028568 +0000 UTC m=+4265.669519149" lastFinishedPulling="2026-02-26 23:06:24.041043828 +0000 UTC m=+4269.120534389" observedRunningTime="2026-02-26 23:06:24.658743496 +0000 UTC m=+4269.738234037" watchObservedRunningTime="2026-02-26 23:06:24.666963859 +0000 UTC m=+4269.746454400" Feb 26 23:06:29 crc kubenswrapper[4910]: I0226 23:06:29.222465 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4xn8r" Feb 26 23:06:29 crc kubenswrapper[4910]: I0226 23:06:29.223407 4910 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4xn8r" Feb 26 23:06:29 crc kubenswrapper[4910]: I0226 23:06:29.279504 4910 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4xn8r" Feb 26 23:06:29 crc kubenswrapper[4910]: I0226 23:06:29.768956 4910 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4xn8r" Feb 26 23:06:29 crc kubenswrapper[4910]: I0226 23:06:29.842465 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xn8r"] Feb 26 23:06:31 crc kubenswrapper[4910]: I0226 23:06:31.726171 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4xn8r" podUID="704548cb-39d0-4ad0-967d-c8344bfae0c3" containerName="registry-server" containerID="cri-o://66690e4b6620d96b9d84c147fdd476b21044a60d146d483487df7afd877e645b" gracePeriod=2 Feb 26 23:06:33 crc kubenswrapper[4910]: I0226 23:06:32.413214 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xn8r" Feb 26 23:06:33 crc kubenswrapper[4910]: I0226 23:06:32.553762 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnbbv\" (UniqueName: \"kubernetes.io/projected/704548cb-39d0-4ad0-967d-c8344bfae0c3-kube-api-access-hnbbv\") pod \"704548cb-39d0-4ad0-967d-c8344bfae0c3\" (UID: \"704548cb-39d0-4ad0-967d-c8344bfae0c3\") " Feb 26 23:06:33 crc kubenswrapper[4910]: I0226 23:06:32.553822 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/704548cb-39d0-4ad0-967d-c8344bfae0c3-catalog-content\") pod \"704548cb-39d0-4ad0-967d-c8344bfae0c3\" (UID: \"704548cb-39d0-4ad0-967d-c8344bfae0c3\") " Feb 26 23:06:33 crc kubenswrapper[4910]: I0226 23:06:32.553879 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/704548cb-39d0-4ad0-967d-c8344bfae0c3-utilities\") pod \"704548cb-39d0-4ad0-967d-c8344bfae0c3\" (UID: \"704548cb-39d0-4ad0-967d-c8344bfae0c3\") " Feb 26 23:06:33 crc kubenswrapper[4910]: I0226 23:06:32.554876 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/704548cb-39d0-4ad0-967d-c8344bfae0c3-utilities" (OuterVolumeSpecName: "utilities") pod "704548cb-39d0-4ad0-967d-c8344bfae0c3" (UID: "704548cb-39d0-4ad0-967d-c8344bfae0c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 23:06:33 crc kubenswrapper[4910]: I0226 23:06:32.568066 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/704548cb-39d0-4ad0-967d-c8344bfae0c3-kube-api-access-hnbbv" (OuterVolumeSpecName: "kube-api-access-hnbbv") pod "704548cb-39d0-4ad0-967d-c8344bfae0c3" (UID: "704548cb-39d0-4ad0-967d-c8344bfae0c3"). InnerVolumeSpecName "kube-api-access-hnbbv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 23:06:33 crc kubenswrapper[4910]: I0226 23:06:32.656913 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnbbv\" (UniqueName: \"kubernetes.io/projected/704548cb-39d0-4ad0-967d-c8344bfae0c3-kube-api-access-hnbbv\") on node \"crc\" DevicePath \"\"" Feb 26 23:06:33 crc kubenswrapper[4910]: I0226 23:06:32.656948 4910 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/704548cb-39d0-4ad0-967d-c8344bfae0c3-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 23:06:33 crc kubenswrapper[4910]: I0226 23:06:32.739915 4910 generic.go:334] "Generic (PLEG): container finished" podID="704548cb-39d0-4ad0-967d-c8344bfae0c3" containerID="66690e4b6620d96b9d84c147fdd476b21044a60d146d483487df7afd877e645b" exitCode=0 Feb 26 23:06:33 crc kubenswrapper[4910]: I0226 23:06:32.739986 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xn8r" event={"ID":"704548cb-39d0-4ad0-967d-c8344bfae0c3","Type":"ContainerDied","Data":"66690e4b6620d96b9d84c147fdd476b21044a60d146d483487df7afd877e645b"} Feb 26 23:06:33 crc kubenswrapper[4910]: I0226 23:06:32.740035 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xn8r" event={"ID":"704548cb-39d0-4ad0-967d-c8344bfae0c3","Type":"ContainerDied","Data":"b29746f5c9916e1cf8d2f8c31935d13336af3190333fe408c7e2f6b498d91608"} Feb 26 23:06:33 crc kubenswrapper[4910]: I0226 23:06:32.740067 4910 scope.go:117] "RemoveContainer" containerID="66690e4b6620d96b9d84c147fdd476b21044a60d146d483487df7afd877e645b" Feb 26 23:06:33 crc kubenswrapper[4910]: I0226 23:06:32.740386 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xn8r" Feb 26 23:06:33 crc kubenswrapper[4910]: I0226 23:06:32.769770 4910 scope.go:117] "RemoveContainer" containerID="78ea5c16e27fa1e61dd2fdb4dc8ec6be9956ab48ae331815e6288a906a19f8e5" Feb 26 23:06:33 crc kubenswrapper[4910]: I0226 23:06:32.803064 4910 scope.go:117] "RemoveContainer" containerID="28b442416a805c5394a0e45830ca8d337e8d6872efad6d94258a6ae198ff09fd" Feb 26 23:06:33 crc kubenswrapper[4910]: I0226 23:06:32.820476 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/704548cb-39d0-4ad0-967d-c8344bfae0c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "704548cb-39d0-4ad0-967d-c8344bfae0c3" (UID: "704548cb-39d0-4ad0-967d-c8344bfae0c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 23:06:33 crc kubenswrapper[4910]: I0226 23:06:32.860942 4910 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/704548cb-39d0-4ad0-967d-c8344bfae0c3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 23:06:33 crc kubenswrapper[4910]: I0226 23:06:32.878562 4910 scope.go:117] "RemoveContainer" containerID="66690e4b6620d96b9d84c147fdd476b21044a60d146d483487df7afd877e645b" Feb 26 23:06:33 crc kubenswrapper[4910]: E0226 23:06:32.880558 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66690e4b6620d96b9d84c147fdd476b21044a60d146d483487df7afd877e645b\": container with ID starting with 66690e4b6620d96b9d84c147fdd476b21044a60d146d483487df7afd877e645b not found: ID does not exist" containerID="66690e4b6620d96b9d84c147fdd476b21044a60d146d483487df7afd877e645b" Feb 26 23:06:33 crc kubenswrapper[4910]: I0226 23:06:32.880628 4910 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"66690e4b6620d96b9d84c147fdd476b21044a60d146d483487df7afd877e645b"} err="failed to get container status \"66690e4b6620d96b9d84c147fdd476b21044a60d146d483487df7afd877e645b\": rpc error: code = NotFound desc = could not find container \"66690e4b6620d96b9d84c147fdd476b21044a60d146d483487df7afd877e645b\": container with ID starting with 66690e4b6620d96b9d84c147fdd476b21044a60d146d483487df7afd877e645b not found: ID does not exist" Feb 26 23:06:33 crc kubenswrapper[4910]: I0226 23:06:32.880664 4910 scope.go:117] "RemoveContainer" containerID="78ea5c16e27fa1e61dd2fdb4dc8ec6be9956ab48ae331815e6288a906a19f8e5" Feb 26 23:06:33 crc kubenswrapper[4910]: E0226 23:06:32.881343 4910 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78ea5c16e27fa1e61dd2fdb4dc8ec6be9956ab48ae331815e6288a906a19f8e5\": container with ID starting with 78ea5c16e27fa1e61dd2fdb4dc8ec6be9956ab48ae331815e6288a906a19f8e5 not found: ID does not exist" containerID="78ea5c16e27fa1e61dd2fdb4dc8ec6be9956ab48ae331815e6288a906a19f8e5" Feb 26 23:06:33 crc kubenswrapper[4910]: I0226 23:06:32.881372 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78ea5c16e27fa1e61dd2fdb4dc8ec6be9956ab48ae331815e6288a906a19f8e5"} err="failed to get container status \"78ea5c16e27fa1e61dd2fdb4dc8ec6be9956ab48ae331815e6288a906a19f8e5\": rpc error: code = NotFound desc = could not find container \"78ea5c16e27fa1e61dd2fdb4dc8ec6be9956ab48ae331815e6288a906a19f8e5\": container with ID starting with 78ea5c16e27fa1e61dd2fdb4dc8ec6be9956ab48ae331815e6288a906a19f8e5 not found: ID does not exist" Feb 26 23:06:33 crc kubenswrapper[4910]: I0226 23:06:32.881391 4910 scope.go:117] "RemoveContainer" containerID="28b442416a805c5394a0e45830ca8d337e8d6872efad6d94258a6ae198ff09fd" Feb 26 23:06:33 crc kubenswrapper[4910]: E0226 23:06:32.885272 4910 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"28b442416a805c5394a0e45830ca8d337e8d6872efad6d94258a6ae198ff09fd\": container with ID starting with 28b442416a805c5394a0e45830ca8d337e8d6872efad6d94258a6ae198ff09fd not found: ID does not exist" containerID="28b442416a805c5394a0e45830ca8d337e8d6872efad6d94258a6ae198ff09fd" Feb 26 23:06:33 crc kubenswrapper[4910]: I0226 23:06:32.885302 4910 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28b442416a805c5394a0e45830ca8d337e8d6872efad6d94258a6ae198ff09fd"} err="failed to get container status \"28b442416a805c5394a0e45830ca8d337e8d6872efad6d94258a6ae198ff09fd\": rpc error: code = NotFound desc = could not find container \"28b442416a805c5394a0e45830ca8d337e8d6872efad6d94258a6ae198ff09fd\": container with ID starting with 28b442416a805c5394a0e45830ca8d337e8d6872efad6d94258a6ae198ff09fd not found: ID does not exist" Feb 26 23:06:33 crc kubenswrapper[4910]: I0226 23:06:33.080860 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xn8r"] Feb 26 23:06:33 crc kubenswrapper[4910]: I0226 23:06:33.106357 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xn8r"] Feb 26 23:06:33 crc kubenswrapper[4910]: I0226 23:06:33.925328 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="704548cb-39d0-4ad0-967d-c8344bfae0c3" path="/var/lib/kubelet/pods/704548cb-39d0-4ad0-967d-c8344bfae0c3/volumes" Feb 26 23:06:41 crc kubenswrapper[4910]: I0226 23:06:41.543546 4910 scope.go:117] "RemoveContainer" containerID="68118571a07131556745b8126880025677360a99bce4143ed48416fc56811982" Feb 26 23:08:00 crc kubenswrapper[4910]: I0226 23:08:00.165772 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535788-jgrg6"] Feb 26 23:08:00 crc kubenswrapper[4910]: E0226 23:08:00.166876 4910 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="704548cb-39d0-4ad0-967d-c8344bfae0c3" containerName="extract-content" Feb 26 23:08:00 crc kubenswrapper[4910]: I0226 23:08:00.166892 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="704548cb-39d0-4ad0-967d-c8344bfae0c3" containerName="extract-content" Feb 26 23:08:00 crc kubenswrapper[4910]: E0226 23:08:00.166927 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="704548cb-39d0-4ad0-967d-c8344bfae0c3" containerName="registry-server" Feb 26 23:08:00 crc kubenswrapper[4910]: I0226 23:08:00.166935 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="704548cb-39d0-4ad0-967d-c8344bfae0c3" containerName="registry-server" Feb 26 23:08:00 crc kubenswrapper[4910]: E0226 23:08:00.166962 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="704548cb-39d0-4ad0-967d-c8344bfae0c3" containerName="extract-utilities" Feb 26 23:08:00 crc kubenswrapper[4910]: I0226 23:08:00.166971 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="704548cb-39d0-4ad0-967d-c8344bfae0c3" containerName="extract-utilities" Feb 26 23:08:00 crc kubenswrapper[4910]: I0226 23:08:00.167224 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="704548cb-39d0-4ad0-967d-c8344bfae0c3" containerName="registry-server" Feb 26 23:08:00 crc kubenswrapper[4910]: I0226 23:08:00.168137 4910 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535788-jgrg6" Feb 26 23:08:00 crc kubenswrapper[4910]: I0226 23:08:00.170910 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 23:08:00 crc kubenswrapper[4910]: I0226 23:08:00.171256 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 23:08:00 crc kubenswrapper[4910]: I0226 23:08:00.171327 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 23:08:00 crc kubenswrapper[4910]: I0226 23:08:00.182241 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535788-jgrg6"] Feb 26 23:08:00 crc kubenswrapper[4910]: I0226 23:08:00.257672 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zblw\" (UniqueName: \"kubernetes.io/projected/a6566e60-8b88-463a-8905-3e4fc17dfd40-kube-api-access-4zblw\") pod \"auto-csr-approver-29535788-jgrg6\" (UID: \"a6566e60-8b88-463a-8905-3e4fc17dfd40\") " pod="openshift-infra/auto-csr-approver-29535788-jgrg6" Feb 26 23:08:00 crc kubenswrapper[4910]: I0226 23:08:00.359598 4910 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zblw\" (UniqueName: \"kubernetes.io/projected/a6566e60-8b88-463a-8905-3e4fc17dfd40-kube-api-access-4zblw\") pod \"auto-csr-approver-29535788-jgrg6\" (UID: \"a6566e60-8b88-463a-8905-3e4fc17dfd40\") " pod="openshift-infra/auto-csr-approver-29535788-jgrg6" Feb 26 23:08:01 crc kubenswrapper[4910]: I0226 23:08:01.003215 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zblw\" (UniqueName: \"kubernetes.io/projected/a6566e60-8b88-463a-8905-3e4fc17dfd40-kube-api-access-4zblw\") pod \"auto-csr-approver-29535788-jgrg6\" (UID: \"a6566e60-8b88-463a-8905-3e4fc17dfd40\") " 
pod="openshift-infra/auto-csr-approver-29535788-jgrg6" Feb 26 23:08:01 crc kubenswrapper[4910]: I0226 23:08:01.106008 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535788-jgrg6" Feb 26 23:08:01 crc kubenswrapper[4910]: I0226 23:08:01.672616 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535788-jgrg6"] Feb 26 23:08:01 crc kubenswrapper[4910]: I0226 23:08:01.791044 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535788-jgrg6" event={"ID":"a6566e60-8b88-463a-8905-3e4fc17dfd40","Type":"ContainerStarted","Data":"301546fdc9807557b288f16087fdbb087d3a69237c6cd349fbdb916805f9861e"} Feb 26 23:08:03 crc kubenswrapper[4910]: I0226 23:08:03.810888 4910 generic.go:334] "Generic (PLEG): container finished" podID="a6566e60-8b88-463a-8905-3e4fc17dfd40" containerID="79e16953f907c558ab648e27a359a97dec2d21c194b19aeccb65d2f89e5911d2" exitCode=0 Feb 26 23:08:03 crc kubenswrapper[4910]: I0226 23:08:03.810969 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535788-jgrg6" event={"ID":"a6566e60-8b88-463a-8905-3e4fc17dfd40","Type":"ContainerDied","Data":"79e16953f907c558ab648e27a359a97dec2d21c194b19aeccb65d2f89e5911d2"} Feb 26 23:08:05 crc kubenswrapper[4910]: I0226 23:08:05.331647 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535788-jgrg6" Feb 26 23:08:05 crc kubenswrapper[4910]: I0226 23:08:05.379179 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zblw\" (UniqueName: \"kubernetes.io/projected/a6566e60-8b88-463a-8905-3e4fc17dfd40-kube-api-access-4zblw\") pod \"a6566e60-8b88-463a-8905-3e4fc17dfd40\" (UID: \"a6566e60-8b88-463a-8905-3e4fc17dfd40\") " Feb 26 23:08:05 crc kubenswrapper[4910]: I0226 23:08:05.392308 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6566e60-8b88-463a-8905-3e4fc17dfd40-kube-api-access-4zblw" (OuterVolumeSpecName: "kube-api-access-4zblw") pod "a6566e60-8b88-463a-8905-3e4fc17dfd40" (UID: "a6566e60-8b88-463a-8905-3e4fc17dfd40"). InnerVolumeSpecName "kube-api-access-4zblw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 23:08:05 crc kubenswrapper[4910]: I0226 23:08:05.482196 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zblw\" (UniqueName: \"kubernetes.io/projected/a6566e60-8b88-463a-8905-3e4fc17dfd40-kube-api-access-4zblw\") on node \"crc\" DevicePath \"\"" Feb 26 23:08:05 crc kubenswrapper[4910]: I0226 23:08:05.842294 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535788-jgrg6" event={"ID":"a6566e60-8b88-463a-8905-3e4fc17dfd40","Type":"ContainerDied","Data":"301546fdc9807557b288f16087fdbb087d3a69237c6cd349fbdb916805f9861e"} Feb 26 23:08:05 crc kubenswrapper[4910]: I0226 23:08:05.842353 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="301546fdc9807557b288f16087fdbb087d3a69237c6cd349fbdb916805f9861e" Feb 26 23:08:05 crc kubenswrapper[4910]: I0226 23:08:05.842468 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535788-jgrg6" Feb 26 23:08:06 crc kubenswrapper[4910]: I0226 23:08:06.419119 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535782-f82dq"] Feb 26 23:08:06 crc kubenswrapper[4910]: I0226 23:08:06.440163 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535782-f82dq"] Feb 26 23:08:07 crc kubenswrapper[4910]: I0226 23:08:07.933477 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d36706f6-732f-4956-8366-2e047b37f79b" path="/var/lib/kubelet/pods/d36706f6-732f-4956-8366-2e047b37f79b/volumes" Feb 26 23:08:25 crc kubenswrapper[4910]: I0226 23:08:25.727861 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 23:08:25 crc kubenswrapper[4910]: I0226 23:08:25.728651 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 23:08:40 crc kubenswrapper[4910]: I0226 23:08:40.791801 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="f3994e44-ac9f-4f93-97cf-9ad02cdc61e6" containerName="galera" probeResult="failure" output="command timed out" Feb 26 23:08:40 crc kubenswrapper[4910]: I0226 23:08:40.791838 4910 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="f3994e44-ac9f-4f93-97cf-9ad02cdc61e6" containerName="galera" probeResult="failure" output="command timed out" Feb 26 23:08:41 crc 
kubenswrapper[4910]: I0226 23:08:41.648087 4910 scope.go:117] "RemoveContainer" containerID="4d7025693ee43626144bf44ec4e5d6ce656e94fac4fe9ec67c0accc0dfd15304" Feb 26 23:08:55 crc kubenswrapper[4910]: I0226 23:08:55.727908 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 23:08:55 crc kubenswrapper[4910]: I0226 23:08:55.728609 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 23:09:25 crc kubenswrapper[4910]: I0226 23:09:25.727303 4910 patch_prober.go:28] interesting pod/machine-config-daemon-6xpv4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 23:09:25 crc kubenswrapper[4910]: I0226 23:09:25.727729 4910 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 23:09:25 crc kubenswrapper[4910]: I0226 23:09:25.727766 4910 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" Feb 26 23:09:25 crc kubenswrapper[4910]: I0226 23:09:25.728291 4910 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4c8a253e3f4c6e196190b6a87e6234297c42977dcd1d3e36b9ba538a59034602"} pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 23:09:25 crc kubenswrapper[4910]: I0226 23:09:25.728341 4910 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" podUID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerName="machine-config-daemon" containerID="cri-o://4c8a253e3f4c6e196190b6a87e6234297c42977dcd1d3e36b9ba538a59034602" gracePeriod=600 Feb 26 23:09:25 crc kubenswrapper[4910]: I0226 23:09:25.897443 4910 generic.go:334] "Generic (PLEG): container finished" podID="69251a00-4e6e-48f6-ae1b-d3001d22b419" containerID="4c8a253e3f4c6e196190b6a87e6234297c42977dcd1d3e36b9ba538a59034602" exitCode=0 Feb 26 23:09:25 crc kubenswrapper[4910]: I0226 23:09:25.897491 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerDied","Data":"4c8a253e3f4c6e196190b6a87e6234297c42977dcd1d3e36b9ba538a59034602"} Feb 26 23:09:25 crc kubenswrapper[4910]: I0226 23:09:25.897522 4910 scope.go:117] "RemoveContainer" containerID="74b55005b167f0f42909b70be3c4a1d1c4a7d4427d71b588d549c4cf76e6204f" Feb 26 23:09:26 crc kubenswrapper[4910]: I0226 23:09:26.909246 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6xpv4" event={"ID":"69251a00-4e6e-48f6-ae1b-d3001d22b419","Type":"ContainerStarted","Data":"c8d48b9cb14cf1a9fe2267df268c13fb783c0e8f98389ce1fa4d4e83317a3278"} Feb 26 23:10:00 crc kubenswrapper[4910]: I0226 23:10:00.148535 4910 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535790-lfkmk"] Feb 26 23:10:00 crc 
kubenswrapper[4910]: E0226 23:10:00.149531 4910 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6566e60-8b88-463a-8905-3e4fc17dfd40" containerName="oc" Feb 26 23:10:00 crc kubenswrapper[4910]: I0226 23:10:00.149546 4910 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6566e60-8b88-463a-8905-3e4fc17dfd40" containerName="oc" Feb 26 23:10:00 crc kubenswrapper[4910]: I0226 23:10:00.149822 4910 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6566e60-8b88-463a-8905-3e4fc17dfd40" containerName="oc" Feb 26 23:10:00 crc kubenswrapper[4910]: I0226 23:10:00.150768 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535790-lfkmk" Feb 26 23:10:00 crc kubenswrapper[4910]: I0226 23:10:00.153690 4910 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-trs4s" Feb 26 23:10:00 crc kubenswrapper[4910]: I0226 23:10:00.154806 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 23:10:00 crc kubenswrapper[4910]: I0226 23:10:00.161387 4910 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 23:10:00 crc kubenswrapper[4910]: I0226 23:10:00.162451 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535790-lfkmk"] Feb 26 23:10:00 crc kubenswrapper[4910]: I0226 23:10:00.278775 4910 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pdvh\" (UniqueName: \"kubernetes.io/projected/f25701c6-0f9b-4cf6-98ec-e87d7eb2698d-kube-api-access-2pdvh\") pod \"auto-csr-approver-29535790-lfkmk\" (UID: \"f25701c6-0f9b-4cf6-98ec-e87d7eb2698d\") " pod="openshift-infra/auto-csr-approver-29535790-lfkmk" Feb 26 23:10:00 crc kubenswrapper[4910]: I0226 23:10:00.381360 4910 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-2pdvh\" (UniqueName: \"kubernetes.io/projected/f25701c6-0f9b-4cf6-98ec-e87d7eb2698d-kube-api-access-2pdvh\") pod \"auto-csr-approver-29535790-lfkmk\" (UID: \"f25701c6-0f9b-4cf6-98ec-e87d7eb2698d\") " pod="openshift-infra/auto-csr-approver-29535790-lfkmk" Feb 26 23:10:00 crc kubenswrapper[4910]: I0226 23:10:00.414185 4910 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pdvh\" (UniqueName: \"kubernetes.io/projected/f25701c6-0f9b-4cf6-98ec-e87d7eb2698d-kube-api-access-2pdvh\") pod \"auto-csr-approver-29535790-lfkmk\" (UID: \"f25701c6-0f9b-4cf6-98ec-e87d7eb2698d\") " pod="openshift-infra/auto-csr-approver-29535790-lfkmk" Feb 26 23:10:00 crc kubenswrapper[4910]: I0226 23:10:00.486116 4910 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535790-lfkmk" Feb 26 23:10:01 crc kubenswrapper[4910]: I0226 23:10:01.072324 4910 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535790-lfkmk"] Feb 26 23:10:01 crc kubenswrapper[4910]: I0226 23:10:01.365283 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535790-lfkmk" event={"ID":"f25701c6-0f9b-4cf6-98ec-e87d7eb2698d","Type":"ContainerStarted","Data":"9db51deecbe9976519885033f67ec54d2d750ab4cf97a0e9170110f0b4b36985"} Feb 26 23:10:03 crc kubenswrapper[4910]: I0226 23:10:03.388562 4910 generic.go:334] "Generic (PLEG): container finished" podID="f25701c6-0f9b-4cf6-98ec-e87d7eb2698d" containerID="3d45752167a3094da636f10029a09a1128b1518f97d00a22981ca773ab4db1ab" exitCode=0 Feb 26 23:10:03 crc kubenswrapper[4910]: I0226 23:10:03.388651 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535790-lfkmk" event={"ID":"f25701c6-0f9b-4cf6-98ec-e87d7eb2698d","Type":"ContainerDied","Data":"3d45752167a3094da636f10029a09a1128b1518f97d00a22981ca773ab4db1ab"} Feb 26 23:10:05 crc 
kubenswrapper[4910]: I0226 23:10:05.122881 4910 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535790-lfkmk" Feb 26 23:10:05 crc kubenswrapper[4910]: I0226 23:10:05.303325 4910 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pdvh\" (UniqueName: \"kubernetes.io/projected/f25701c6-0f9b-4cf6-98ec-e87d7eb2698d-kube-api-access-2pdvh\") pod \"f25701c6-0f9b-4cf6-98ec-e87d7eb2698d\" (UID: \"f25701c6-0f9b-4cf6-98ec-e87d7eb2698d\") " Feb 26 23:10:05 crc kubenswrapper[4910]: I0226 23:10:05.390844 4910 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f25701c6-0f9b-4cf6-98ec-e87d7eb2698d-kube-api-access-2pdvh" (OuterVolumeSpecName: "kube-api-access-2pdvh") pod "f25701c6-0f9b-4cf6-98ec-e87d7eb2698d" (UID: "f25701c6-0f9b-4cf6-98ec-e87d7eb2698d"). InnerVolumeSpecName "kube-api-access-2pdvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 23:10:05 crc kubenswrapper[4910]: I0226 23:10:05.407034 4910 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pdvh\" (UniqueName: \"kubernetes.io/projected/f25701c6-0f9b-4cf6-98ec-e87d7eb2698d-kube-api-access-2pdvh\") on node \"crc\" DevicePath \"\"" Feb 26 23:10:05 crc kubenswrapper[4910]: I0226 23:10:05.413219 4910 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535790-lfkmk" event={"ID":"f25701c6-0f9b-4cf6-98ec-e87d7eb2698d","Type":"ContainerDied","Data":"9db51deecbe9976519885033f67ec54d2d750ab4cf97a0e9170110f0b4b36985"} Feb 26 23:10:05 crc kubenswrapper[4910]: I0226 23:10:05.413265 4910 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9db51deecbe9976519885033f67ec54d2d750ab4cf97a0e9170110f0b4b36985" Feb 26 23:10:05 crc kubenswrapper[4910]: I0226 23:10:05.413332 4910 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535790-lfkmk" Feb 26 23:10:06 crc kubenswrapper[4910]: I0226 23:10:06.209117 4910 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535784-5gjvx"] Feb 26 23:10:06 crc kubenswrapper[4910]: I0226 23:10:06.218769 4910 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535784-5gjvx"] Feb 26 23:10:07 crc kubenswrapper[4910]: I0226 23:10:07.916609 4910 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d380cac-a80b-4742-97b3-6aeafb9e0052" path="/var/lib/kubelet/pods/9d380cac-a80b-4742-97b3-6aeafb9e0052/volumes"